Nov 22 07:39:11 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 22 07:39:11 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 22 07:39:11 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:39:11 localhost kernel: BIOS-provided physical RAM map:
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 22 07:39:11 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 22 07:39:11 localhost kernel: NX (Execute Disable) protection: active
Nov 22 07:39:11 localhost kernel: APIC: Static calls initialized
Nov 22 07:39:11 localhost kernel: SMBIOS 2.8 present.
Nov 22 07:39:11 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 22 07:39:11 localhost kernel: Hypervisor detected: KVM
Nov 22 07:39:11 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 22 07:39:11 localhost kernel: kvm-clock: using sched offset of 8064981917 cycles
Nov 22 07:39:11 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 22 07:39:11 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 22 07:39:11 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 22 07:39:11 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 22 07:39:11 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 22 07:39:11 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 22 07:39:11 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 22 07:39:11 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 22 07:39:11 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 22 07:39:11 localhost kernel: Using GB pages for direct mapping
Nov 22 07:39:11 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 22 07:39:11 localhost kernel: ACPI: Early table checksum verification disabled
Nov 22 07:39:11 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 22 07:39:11 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:39:11 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:39:11 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:39:11 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 22 07:39:11 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:39:11 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:39:11 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 22 07:39:11 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 22 07:39:11 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 22 07:39:11 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 22 07:39:11 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 22 07:39:11 localhost kernel: No NUMA configuration found
Nov 22 07:39:11 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 22 07:39:11 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 22 07:39:11 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 22 07:39:11 localhost kernel: Zone ranges:
Nov 22 07:39:11 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 22 07:39:11 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 22 07:39:11 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 07:39:11 localhost kernel:   Device   empty
Nov 22 07:39:11 localhost kernel: Movable zone start for each node
Nov 22 07:39:11 localhost kernel: Early memory node ranges
Nov 22 07:39:11 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 22 07:39:11 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 22 07:39:11 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 07:39:11 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 22 07:39:11 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 22 07:39:11 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 22 07:39:11 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 22 07:39:11 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 22 07:39:11 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 22 07:39:11 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 22 07:39:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 22 07:39:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 22 07:39:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 22 07:39:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 22 07:39:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 22 07:39:11 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 22 07:39:11 localhost kernel: TSC deadline timer available
Nov 22 07:39:11 localhost kernel: CPU topo: Max. logical packages:   8
Nov 22 07:39:11 localhost kernel: CPU topo: Max. logical dies:       8
Nov 22 07:39:11 localhost kernel: CPU topo: Max. dies per package:   1
Nov 22 07:39:11 localhost kernel: CPU topo: Max. threads per core:   1
Nov 22 07:39:11 localhost kernel: CPU topo: Num. cores per package:     1
Nov 22 07:39:11 localhost kernel: CPU topo: Num. threads per package:   1
Nov 22 07:39:11 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 22 07:39:11 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 22 07:39:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 22 07:39:11 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 22 07:39:11 localhost kernel: Booting paravirtualized kernel on KVM
Nov 22 07:39:11 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 22 07:39:11 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 22 07:39:11 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 22 07:39:11 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 22 07:39:11 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 22 07:39:11 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 22 07:39:11 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:39:11 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 22 07:39:11 localhost kernel: random: crng init done
Nov 22 07:39:11 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 22 07:39:11 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 22 07:39:11 localhost kernel: Fallback order for Node 0: 0 
Nov 22 07:39:11 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 22 07:39:11 localhost kernel: Policy zone: Normal
Nov 22 07:39:11 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 22 07:39:11 localhost kernel: software IO TLB: area num 8.
Nov 22 07:39:11 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 22 07:39:11 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 22 07:39:11 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 22 07:39:11 localhost kernel: Dynamic Preempt: voluntary
Nov 22 07:39:11 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 22 07:39:11 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 22 07:39:11 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 22 07:39:11 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 22 07:39:11 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 22 07:39:11 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 22 07:39:11 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 22 07:39:11 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 22 07:39:11 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:39:11 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:39:11 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:39:11 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 22 07:39:11 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 22 07:39:11 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 22 07:39:11 localhost kernel: Console: colour VGA+ 80x25
Nov 22 07:39:11 localhost kernel: printk: console [ttyS0] enabled
Nov 22 07:39:11 localhost kernel: ACPI: Core revision 20230331
Nov 22 07:39:11 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 22 07:39:11 localhost kernel: x2apic enabled
Nov 22 07:39:11 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 22 07:39:11 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 22 07:39:11 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 22 07:39:11 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 22 07:39:11 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 22 07:39:11 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 22 07:39:11 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 22 07:39:11 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 22 07:39:11 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 22 07:39:11 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 22 07:39:11 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 22 07:39:11 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 22 07:39:11 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 22 07:39:11 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 22 07:39:11 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 22 07:39:11 localhost kernel: x86/bugs: return thunk changed
Nov 22 07:39:11 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 22 07:39:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 22 07:39:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 22 07:39:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 22 07:39:11 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 22 07:39:11 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 22 07:39:11 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 22 07:39:11 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 22 07:39:11 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 22 07:39:11 localhost kernel: landlock: Up and running.
Nov 22 07:39:11 localhost kernel: Yama: becoming mindful.
Nov 22 07:39:11 localhost kernel: SELinux:  Initializing.
Nov 22 07:39:11 localhost kernel: LSM support for eBPF active
Nov 22 07:39:11 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 07:39:11 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 07:39:11 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 22 07:39:11 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 22 07:39:11 localhost kernel: ... version:                0
Nov 22 07:39:11 localhost kernel: ... bit width:              48
Nov 22 07:39:11 localhost kernel: ... generic registers:      6
Nov 22 07:39:11 localhost kernel: ... value mask:             0000ffffffffffff
Nov 22 07:39:11 localhost kernel: ... max period:             00007fffffffffff
Nov 22 07:39:11 localhost kernel: ... fixed-purpose events:   0
Nov 22 07:39:11 localhost kernel: ... event mask:             000000000000003f
Nov 22 07:39:11 localhost kernel: signal: max sigframe size: 1776
Nov 22 07:39:11 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 22 07:39:11 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 22 07:39:11 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 22 07:39:11 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 22 07:39:11 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 22 07:39:11 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 22 07:39:11 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 22 07:39:11 localhost kernel: node 0 deferred pages initialised in 7ms
Nov 22 07:39:11 localhost kernel: Memory: 7765676K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616276K reserved, 0K cma-reserved)
Nov 22 07:39:11 localhost kernel: devtmpfs: initialized
Nov 22 07:39:11 localhost kernel: x86/mm: Memory block size: 128MB
Nov 22 07:39:11 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 22 07:39:11 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 22 07:39:11 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 22 07:39:11 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 22 07:39:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 22 07:39:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 22 07:39:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 22 07:39:11 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 22 07:39:11 localhost kernel: audit: type=2000 audit(1763797149.503:1): state=initialized audit_enabled=0 res=1
Nov 22 07:39:11 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 22 07:39:11 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 22 07:39:11 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 22 07:39:11 localhost kernel: cpuidle: using governor menu
Nov 22 07:39:11 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 22 07:39:11 localhost kernel: PCI: Using configuration type 1 for base access
Nov 22 07:39:11 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 22 07:39:11 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 22 07:39:11 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 22 07:39:11 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 22 07:39:11 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 22 07:39:11 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 22 07:39:11 localhost kernel: Demotion targets for Node 0: null
Nov 22 07:39:11 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 22 07:39:11 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 22 07:39:11 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 22 07:39:11 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 22 07:39:11 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 22 07:39:11 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 22 07:39:11 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 22 07:39:11 localhost kernel: ACPI: Interpreter enabled
Nov 22 07:39:11 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 22 07:39:11 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 22 07:39:11 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 22 07:39:11 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 22 07:39:11 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 22 07:39:11 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 22 07:39:11 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [3] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [4] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [5] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [6] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [7] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [8] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [9] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [10] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [11] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [12] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [13] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [14] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [15] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [16] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [17] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [18] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [19] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [20] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [21] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [22] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [23] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [24] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [25] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [26] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [27] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [28] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [29] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [30] registered
Nov 22 07:39:11 localhost kernel: acpiphp: Slot [31] registered
Nov 22 07:39:11 localhost kernel: PCI host bridge to bus 0000:00
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 22 07:39:11 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 22 07:39:11 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 22 07:39:11 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 07:39:11 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 22 07:39:11 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 22 07:39:11 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 22 07:39:11 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 22 07:39:11 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 22 07:39:11 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 22 07:39:11 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 22 07:39:11 localhost kernel: iommu: Default domain type: Translated
Nov 22 07:39:11 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 22 07:39:11 localhost kernel: SCSI subsystem initialized
Nov 22 07:39:11 localhost kernel: ACPI: bus type USB registered
Nov 22 07:39:11 localhost kernel: usbcore: registered new interface driver usbfs
Nov 22 07:39:11 localhost kernel: usbcore: registered new interface driver hub
Nov 22 07:39:11 localhost kernel: usbcore: registered new device driver usb
Nov 22 07:39:11 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 22 07:39:11 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 22 07:39:11 localhost kernel: PTP clock support registered
Nov 22 07:39:11 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 22 07:39:11 localhost kernel: NetLabel: Initializing
Nov 22 07:39:11 localhost kernel: NetLabel:  domain hash size = 128
Nov 22 07:39:11 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 22 07:39:11 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 22 07:39:11 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 22 07:39:11 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 22 07:39:11 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 22 07:39:11 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 22 07:39:11 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 22 07:39:11 localhost kernel: vgaarb: loaded
Nov 22 07:39:11 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 22 07:39:11 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 22 07:39:11 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 22 07:39:11 localhost kernel: pnp: PnP ACPI init
Nov 22 07:39:11 localhost kernel: pnp 00:03: [dma 2]
Nov 22 07:39:11 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 22 07:39:11 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 22 07:39:11 localhost kernel: NET: Registered PF_INET protocol family
Nov 22 07:39:11 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 22 07:39:11 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 22 07:39:11 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 22 07:39:11 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 22 07:39:11 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 22 07:39:11 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 22 07:39:11 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 22 07:39:11 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 07:39:11 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 07:39:11 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 22 07:39:11 localhost kernel: NET: Registered PF_XDP protocol family
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 22 07:39:11 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 22 07:39:11 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 22 07:39:11 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 22 07:39:11 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77429 usecs
Nov 22 07:39:11 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 22 07:39:11 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 22 07:39:11 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 22 07:39:11 localhost kernel: ACPI: bus type thunderbolt registered
Nov 22 07:39:11 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 22 07:39:11 localhost kernel: Initialise system trusted keyrings
Nov 22 07:39:11 localhost kernel: Key type blacklist registered
Nov 22 07:39:11 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 22 07:39:11 localhost kernel: zbud: loaded
Nov 22 07:39:11 localhost kernel: integrity: Platform Keyring initialized
Nov 22 07:39:11 localhost kernel: integrity: Machine keyring initialized
Nov 22 07:39:11 localhost kernel: Freeing initrd memory: 85868K
Nov 22 07:39:11 localhost kernel: NET: Registered PF_ALG protocol family
Nov 22 07:39:11 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 22 07:39:11 localhost kernel: Key type asymmetric registered
Nov 22 07:39:11 localhost kernel: Asymmetric key parser 'x509' registered
Nov 22 07:39:11 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 22 07:39:11 localhost kernel: io scheduler mq-deadline registered
Nov 22 07:39:11 localhost kernel: io scheduler kyber registered
Nov 22 07:39:11 localhost kernel: io scheduler bfq registered
Nov 22 07:39:11 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 22 07:39:11 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 22 07:39:11 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 22 07:39:11 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 22 07:39:11 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 22 07:39:11 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 22 07:39:11 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 22 07:39:11 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 22 07:39:11 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 22 07:39:11 localhost kernel: Non-volatile memory driver v1.3
Nov 22 07:39:11 localhost kernel: rdac: device handler registered
Nov 22 07:39:11 localhost kernel: hp_sw: device handler registered
Nov 22 07:39:11 localhost kernel: emc: device handler registered
Nov 22 07:39:11 localhost kernel: alua: device handler registered
Nov 22 07:39:11 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 22 07:39:11 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 22 07:39:11 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 22 07:39:11 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 22 07:39:11 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 22 07:39:11 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 22 07:39:11 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 22 07:39:11 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 22 07:39:11 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 22 07:39:11 localhost kernel: hub 1-0:1.0: USB hub found
Nov 22 07:39:11 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 22 07:39:11 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 22 07:39:11 localhost kernel: usbserial: USB Serial support registered for generic
Nov 22 07:39:11 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 22 07:39:11 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 22 07:39:11 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 22 07:39:11 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 22 07:39:11 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 22 07:39:11 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 22 07:39:11 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 22 07:39:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 22 07:39:11 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T07:39:10 UTC (1763797150)
Nov 22 07:39:11 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 22 07:39:11 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 22 07:39:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 22 07:39:11 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 22 07:39:11 localhost kernel: usbcore: registered new interface driver usbhid
Nov 22 07:39:11 localhost kernel: usbhid: USB HID core driver
Nov 22 07:39:11 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 22 07:39:11 localhost kernel: Initializing XFRM netlink socket
Nov 22 07:39:11 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 22 07:39:11 localhost kernel: Segment Routing with IPv6
Nov 22 07:39:11 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 22 07:39:11 localhost kernel: mpls_gso: MPLS GSO support
Nov 22 07:39:11 localhost kernel: IPI shorthand broadcast: enabled
Nov 22 07:39:11 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 22 07:39:11 localhost kernel: AES CTR mode by8 optimization enabled
Nov 22 07:39:11 localhost kernel: sched_clock: Marking stable (1341004488, 174348135)->(1651944580, -136591957)
Nov 22 07:39:11 localhost kernel: registered taskstats version 1
Nov 22 07:39:11 localhost kernel: Loading compiled-in X.509 certificates
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 22 07:39:11 localhost kernel: Demotion targets for Node 0: null
Nov 22 07:39:11 localhost kernel: page_owner is disabled
Nov 22 07:39:11 localhost kernel: Key type .fscrypt registered
Nov 22 07:39:11 localhost kernel: Key type fscrypt-provisioning registered
Nov 22 07:39:11 localhost kernel: Key type big_key registered
Nov 22 07:39:11 localhost kernel: Key type encrypted registered
Nov 22 07:39:11 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 22 07:39:11 localhost kernel: Loading compiled-in module X.509 certificates
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 07:39:11 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 22 07:39:11 localhost kernel: ima: No architecture policies found
Nov 22 07:39:11 localhost kernel: evm: Initialising EVM extended attributes:
Nov 22 07:39:11 localhost kernel: evm: security.selinux
Nov 22 07:39:11 localhost kernel: evm: security.SMACK64 (disabled)
Nov 22 07:39:11 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 22 07:39:11 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 22 07:39:11 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 22 07:39:11 localhost kernel: evm: security.apparmor (disabled)
Nov 22 07:39:11 localhost kernel: evm: security.ima
Nov 22 07:39:11 localhost kernel: evm: security.capability
Nov 22 07:39:11 localhost kernel: evm: HMAC attrs: 0x1
Nov 22 07:39:11 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 22 07:39:11 localhost kernel: Running certificate verification RSA selftest
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 22 07:39:11 localhost kernel: Running certificate verification ECDSA selftest
Nov 22 07:39:11 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 22 07:39:11 localhost kernel: clk: Disabling unused clocks
Nov 22 07:39:11 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 22 07:39:11 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 22 07:39:11 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 22 07:39:11 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 22 07:39:11 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 22 07:39:11 localhost kernel: Run /init as init process
Nov 22 07:39:11 localhost kernel:   with arguments:
Nov 22 07:39:11 localhost kernel:     /init
Nov 22 07:39:11 localhost kernel:   with environment:
Nov 22 07:39:11 localhost kernel:     HOME=/
Nov 22 07:39:11 localhost kernel:     TERM=linux
Nov 22 07:39:11 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 22 07:39:11 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 07:39:11 localhost systemd[1]: Detected virtualization kvm.
Nov 22 07:39:11 localhost systemd[1]: Detected architecture x86-64.
Nov 22 07:39:11 localhost systemd[1]: Running in initrd.
Nov 22 07:39:11 localhost systemd[1]: No hostname configured, using default hostname.
Nov 22 07:39:11 localhost systemd[1]: Hostname set to <localhost>.
Nov 22 07:39:11 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 22 07:39:11 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 22 07:39:11 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 22 07:39:11 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 22 07:39:11 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 22 07:39:11 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 22 07:39:11 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 22 07:39:11 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 22 07:39:11 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 22 07:39:11 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 07:39:11 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 07:39:11 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 22 07:39:11 localhost systemd[1]: Reached target Local File Systems.
Nov 22 07:39:11 localhost systemd[1]: Reached target Path Units.
Nov 22 07:39:11 localhost systemd[1]: Reached target Slice Units.
Nov 22 07:39:11 localhost systemd[1]: Reached target Swaps.
Nov 22 07:39:11 localhost systemd[1]: Reached target Timer Units.
Nov 22 07:39:11 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 07:39:11 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 22 07:39:11 localhost systemd[1]: Listening on Journal Socket.
Nov 22 07:39:11 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 07:39:11 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 07:39:11 localhost systemd[1]: Reached target Socket Units.
Nov 22 07:39:11 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 07:39:11 localhost systemd[1]: Starting Journal Service...
Nov 22 07:39:11 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 07:39:11 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 07:39:11 localhost systemd[1]: Starting Create System Users...
Nov 22 07:39:11 localhost systemd[1]: Starting Setup Virtual Console...
Nov 22 07:39:11 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 07:39:11 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 07:39:11 localhost systemd[1]: Finished Create System Users.
Nov 22 07:39:11 localhost systemd-journald[304]: Journal started
Nov 22 07:39:11 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/02722e9f996f4a018f3068e10821087c) is 8.0M, max 153.6M, 145.6M free.
Nov 22 07:39:11 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Nov 22 07:39:11 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Nov 22 07:39:11 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 22 07:39:11 localhost systemd[1]: Started Journal Service.
Nov 22 07:39:11 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 07:39:11 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 07:39:11 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 07:39:11 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 07:39:11 localhost systemd[1]: Finished Setup Virtual Console.
Nov 22 07:39:11 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 22 07:39:11 localhost systemd[1]: Starting dracut cmdline hook...
Nov 22 07:39:11 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 22 07:39:11 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:39:11 localhost systemd[1]: Finished dracut cmdline hook.
Nov 22 07:39:11 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 22 07:39:11 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 22 07:39:11 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 22 07:39:11 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 22 07:39:11 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 22 07:39:11 localhost kernel: RPC: Registered udp transport module.
Nov 22 07:39:11 localhost kernel: RPC: Registered tcp transport module.
Nov 22 07:39:11 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 22 07:39:11 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 22 07:39:11 localhost rpc.statd[443]: Version 2.5.4 starting
Nov 22 07:39:11 localhost rpc.statd[443]: Initializing NSM state
Nov 22 07:39:11 localhost rpc.idmapd[448]: Setting log level to 0
Nov 22 07:39:11 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 22 07:39:11 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 07:39:11 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 07:39:11 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 07:39:11 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 22 07:39:11 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 22 07:39:11 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 07:39:11 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 22 07:39:11 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 07:39:11 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 07:39:11 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 07:39:11 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 07:39:11 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 07:39:11 localhost systemd[1]: Reached target Network.
Nov 22 07:39:11 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 07:39:11 localhost systemd[1]: Starting dracut initqueue hook...
Nov 22 07:39:11 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 22 07:39:11 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 22 07:39:11 localhost kernel:  vda: vda1
Nov 22 07:39:11 localhost kernel: libata version 3.00 loaded.
Nov 22 07:39:11 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 22 07:39:11 localhost systemd-udevd[462]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 07:39:11 localhost kernel: scsi host0: ata_piix
Nov 22 07:39:11 localhost kernel: scsi host1: ata_piix
Nov 22 07:39:11 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 22 07:39:11 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 22 07:39:12 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 07:39:12 localhost systemd[1]: Reached target Initrd Root Device.
Nov 22 07:39:12 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 22 07:39:12 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 22 07:39:12 localhost systemd[1]: Reached target System Initialization.
Nov 22 07:39:12 localhost systemd[1]: Reached target Basic System.
Nov 22 07:39:12 localhost kernel: ata1: found unknown device (class 0)
Nov 22 07:39:12 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 22 07:39:12 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 22 07:39:12 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 22 07:39:12 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 22 07:39:12 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 22 07:39:12 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 22 07:39:12 localhost systemd[1]: Finished dracut initqueue hook.
Nov 22 07:39:12 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 07:39:12 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 22 07:39:12 localhost systemd[1]: Reached target Remote File Systems.
Nov 22 07:39:12 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 22 07:39:12 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 22 07:39:12 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 22 07:39:12 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 22 07:39:12 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 07:39:12 localhost systemd[1]: Mounting /sysroot...
Nov 22 07:39:12 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 22 07:39:12 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 22 07:39:13 localhost kernel: XFS (vda1): Ending clean mount
Nov 22 07:39:13 localhost systemd[1]: Mounted /sysroot.
Nov 22 07:39:13 localhost systemd[1]: Reached target Initrd Root File System.
Nov 22 07:39:13 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 22 07:39:13 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 22 07:39:13 localhost systemd[1]: Reached target Initrd File Systems.
Nov 22 07:39:13 localhost systemd[1]: Reached target Initrd Default Target.
Nov 22 07:39:13 localhost systemd[1]: Starting dracut mount hook...
Nov 22 07:39:13 localhost systemd[1]: Finished dracut mount hook.
Nov 22 07:39:13 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 22 07:39:13 localhost rpc.idmapd[448]: exiting on signal 15
Nov 22 07:39:13 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 22 07:39:13 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 22 07:39:13 localhost systemd[1]: Stopped target Network.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Timer Units.
Nov 22 07:39:13 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 22 07:39:13 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Basic System.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Path Units.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Remote File Systems.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Slice Units.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Socket Units.
Nov 22 07:39:13 localhost systemd[1]: Stopped target System Initialization.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Local File Systems.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Swaps.
Nov 22 07:39:13 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut mount hook.
Nov 22 07:39:13 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 22 07:39:13 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 22 07:39:13 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 22 07:39:13 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 22 07:39:13 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 22 07:39:13 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 22 07:39:13 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 22 07:39:13 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 22 07:39:13 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 22 07:39:13 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 22 07:39:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 22 07:39:13 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 22 07:39:13 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Closed udev Control Socket.
Nov 22 07:39:13 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Closed udev Kernel Socket.
Nov 22 07:39:13 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 22 07:39:13 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 22 07:39:13 localhost systemd[1]: Starting Cleanup udev Database...
Nov 22 07:39:13 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 22 07:39:13 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 22 07:39:13 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Stopped Create System Users.
Nov 22 07:39:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 22 07:39:13 localhost systemd[1]: Finished Cleanup udev Database.
Nov 22 07:39:13 localhost systemd[1]: Reached target Switch Root.
Nov 22 07:39:13 localhost systemd[1]: Starting Switch Root...
Nov 22 07:39:13 localhost systemd[1]: Switching root.
Nov 22 07:39:13 localhost systemd-journald[304]: Journal stopped
Nov 22 08:51:04 compute-0 multipathd[235286]: 4315.120842 | --------start up--------
Nov 22 08:51:04 compute-0 multipathd[235286]: 4315.120869 | read /etc/multipath.conf
Nov 22 08:51:04 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:51:04 compute-0 multipathd[235286]: 4315.128277 | path checkers start up
Nov 22 08:51:04 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:51:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:04 compute-0 python3.9[235475]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:51:05 compute-0 sudo[235627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emawrovyhndhjnxepmmbhtyukqxvbhjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801465.1265037-586-72294306899375/AnsiballZ_command.py'
Nov 22 08:51:05 compute-0 sudo[235627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:05 compute-0 ceph-mon[75021]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:05 compute-0 python3.9[235629]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 08:51:05 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 08:51:05 compute-0 sudo[235627]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:06 compute-0 sudo[235794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larcqowacnfxhhshxbiuezustcmeufor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801465.8836775-594-275113627087599/AnsiballZ_systemd.py'
Nov 22 08:51:06 compute-0 sudo[235794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:06 compute-0 python3.9[235796]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:51:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:06 compute-0 systemd[1]: Stopping multipathd container...
Nov 22 08:51:06 compute-0 multipathd[235286]: 4317.391933 | exit (signal)
Nov 22 08:51:06 compute-0 multipathd[235286]: 4317.392475 | --------shut down-------
Nov 22 08:51:06 compute-0 systemd[1]: libpod-fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.scope: Deactivated successfully.
Nov 22 08:51:06 compute-0 podman[235800]: 2025-11-22 08:51:06.609357884 +0000 UTC m=+0.061348184 container died fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:51:06 compute-0 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-13a2dc41a9d3bd01.timer: Deactivated successfully.
Nov 22 08:51:06 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.
Nov 22 08:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-userdata-shm.mount: Deactivated successfully.
Nov 22 08:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa-merged.mount: Deactivated successfully.
Nov 22 08:51:06 compute-0 podman[235800]: 2025-11-22 08:51:06.952649243 +0000 UTC m=+0.404639533 container cleanup fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Nov 22 08:51:06 compute-0 podman[235800]: multipathd
Nov 22 08:51:07 compute-0 podman[235827]: multipathd
Nov 22 08:51:07 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 08:51:07 compute-0 systemd[1]: Stopped multipathd container.
Nov 22 08:51:07 compute-0 systemd[1]: Starting multipathd container...
Nov 22 08:51:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.
Nov 22 08:51:07 compute-0 podman[235840]: 2025-11-22 08:51:07.169977652 +0000 UTC m=+0.120245988 container init fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:51:07 compute-0 multipathd[235853]: + sudo -E kolla_set_configs
Nov 22 08:51:07 compute-0 podman[235840]: 2025-11-22 08:51:07.194715226 +0000 UTC m=+0.144983542 container start fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:51:07 compute-0 sudo[235861]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:51:07 compute-0 sudo[235861]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:51:07 compute-0 sudo[235861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:51:07 compute-0 podman[235840]: multipathd
Nov 22 08:51:07 compute-0 systemd[1]: Started multipathd container.
Nov 22 08:51:07 compute-0 multipathd[235853]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:51:07 compute-0 multipathd[235853]: INFO:__main__:Validating config file
Nov 22 08:51:07 compute-0 multipathd[235853]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:51:07 compute-0 multipathd[235853]: INFO:__main__:Writing out command to execute
Nov 22 08:51:07 compute-0 sudo[235794]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:07 compute-0 sudo[235861]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:07 compute-0 multipathd[235853]: ++ cat /run_command
Nov 22 08:51:07 compute-0 multipathd[235853]: + CMD='/usr/sbin/multipathd -d'
Nov 22 08:51:07 compute-0 multipathd[235853]: + ARGS=
Nov 22 08:51:07 compute-0 multipathd[235853]: + sudo kolla_copy_cacerts
Nov 22 08:51:07 compute-0 sudo[235881]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:51:07 compute-0 sudo[235881]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:51:07 compute-0 sudo[235881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:51:07 compute-0 sudo[235881]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:07 compute-0 multipathd[235853]: + [[ ! -n '' ]]
Nov 22 08:51:07 compute-0 multipathd[235853]: + . kolla_extend_start
Nov 22 08:51:07 compute-0 multipathd[235853]: Running command: '/usr/sbin/multipathd -d'
Nov 22 08:51:07 compute-0 multipathd[235853]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 08:51:07 compute-0 multipathd[235853]: + umask 0022
Nov 22 08:51:07 compute-0 multipathd[235853]: + exec /usr/sbin/multipathd -d
Nov 22 08:51:07 compute-0 multipathd[235853]: 4318.100514 | --------start up--------
Nov 22 08:51:07 compute-0 multipathd[235853]: 4318.100533 | read /etc/multipath.conf
Nov 22 08:51:07 compute-0 multipathd[235853]: 4318.107036 | path checkers start up
Nov 22 08:51:07 compute-0 podman[235862]: 2025-11-22 08:51:07.305876229 +0000 UTC m=+0.098900239 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:51:07 compute-0 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-5865c03f55730aff.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:51:07 compute-0 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-5865c03f55730aff.service: Failed with result 'exit-code'.
Nov 22 08:51:07 compute-0 ceph-mon[75021]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:07 compute-0 sudo[236043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfuybuzpguudnjlihoqkqdlcopslqxha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801467.4201899-602-189810803890732/AnsiballZ_file.py'
Nov 22 08:51:07 compute-0 sudo[236043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:07 compute-0 python3.9[236045]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:07 compute-0 sudo[236043]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:08 compute-0 sudo[236195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfbkzjdoabjdknvtmoouhmjztjwxffdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801468.2953918-614-191625106703254/AnsiballZ_file.py'
Nov 22 08:51:08 compute-0 sudo[236195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:08 compute-0 python3.9[236197]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 08:51:08 compute-0 sudo[236195]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:09 compute-0 sudo[236347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phxkfvgrntwuijtmbrvlinpaqtafpoay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801468.9553661-622-214619893195919/AnsiballZ_modprobe.py'
Nov 22 08:51:09 compute-0 sudo[236347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:09 compute-0 python3.9[236349]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 08:51:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:09 compute-0 kernel: Key type psk registered
Nov 22 08:51:09 compute-0 sudo[236347]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:09 compute-0 ceph-mon[75021]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:09 compute-0 sudo[236510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmuvwojgsyulmdsutsvdqccgypliewrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801469.7293074-630-218230609592733/AnsiballZ_stat.py'
Nov 22 08:51:09 compute-0 sudo[236510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:10 compute-0 python3.9[236512]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:51:10 compute-0 sudo[236510]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:10 compute-0 sudo[236633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludcbuymfjmtjpfkmmltezisiuufycku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801469.7293074-630-218230609592733/AnsiballZ_copy.py'
Nov 22 08:51:10 compute-0 sudo[236633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:10 compute-0 python3.9[236635]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801469.7293074-630-218230609592733/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:10 compute-0 sudo[236633]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:11 compute-0 sudo[236785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gljrfachrxqqvjhlaixytjvniqnotjmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801471.0341017-646-197836874912402/AnsiballZ_lineinfile.py'
Nov 22 08:51:11 compute-0 sudo[236785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:11 compute-0 python3.9[236787]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:11 compute-0 sudo[236785]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:11 compute-0 ceph-mon[75021]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:12 compute-0 sudo[236937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvnwufknvoaqphkfwncnxpclzrqyskwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801471.7139087-654-158923053086370/AnsiballZ_systemd.py'
Nov 22 08:51:12 compute-0 sudo[236937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:12 compute-0 python3.9[236939]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:51:12 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 08:51:12 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 08:51:12 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 08:51:12 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 08:51:12 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 08:51:12 compute-0 sudo[236937]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:12 compute-0 sudo[237093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjlblgqupmudkjhihtodgtzocnrglobs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801472.6624155-662-185044246992045/AnsiballZ_dnf.py'
Nov 22 08:51:12 compute-0 sudo[237093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:13 compute-0 python3.9[237095]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:51:13 compute-0 ceph-mon[75021]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:14 compute-0 podman[237097]: 2025-11-22 08:51:14.424435458 +0000 UTC m=+0.102425175 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:51:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.488278) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474488379, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1277, "num_deletes": 506, "total_data_size": 1497478, "memory_usage": 1532208, "flush_reason": "Manual Compaction"}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474501660, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1472620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13651, "largest_seqno": 14927, "table_properties": {"data_size": 1466944, "index_size": 2560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14516, "raw_average_key_size": 18, "raw_value_size": 1453633, "raw_average_value_size": 1805, "num_data_blocks": 117, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801363, "oldest_key_time": 1763801363, "file_creation_time": 1763801474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13422 microseconds, and 4331 cpu microseconds.
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.501712) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1472620 bytes OK
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.501733) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504369) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504440) EVENT_LOG_v1 {"time_micros": 1763801474504432, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1490601, prev total WAL file size 1490601, number of live WAL files 2.
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.505209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1438KB)], [32(7413KB)]
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474505281, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9063673, "oldest_snapshot_seqno": -1}
Nov 22 08:51:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3851 keys, 7225646 bytes, temperature: kUnknown
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474556038, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7225646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7197697, "index_size": 17215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94491, "raw_average_key_size": 24, "raw_value_size": 7125742, "raw_average_value_size": 1850, "num_data_blocks": 726, "num_entries": 3851, "num_filter_entries": 3851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.556373) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7225646 bytes
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.558330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.2 rd, 142.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.1) write-amplify(4.9) OK, records in: 4876, records dropped: 1025 output_compression: NoCompression
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.558349) EVENT_LOG_v1 {"time_micros": 1763801474558340, "job": 14, "event": "compaction_finished", "compaction_time_micros": 50856, "compaction_time_cpu_micros": 18088, "output_level": 6, "num_output_files": 1, "total_output_size": 7225646, "num_input_records": 4876, "num_output_records": 3851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474558819, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474560343, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.505090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:14 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:51:15 compute-0 ceph-mon[75021]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:16 compute-0 systemd[1]: Reloading.
Nov 22 08:51:16 compute-0 systemd-sysv-generator[237155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:51:16 compute-0 systemd-rc-local-generator[237152]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:51:16 compute-0 systemd[1]: Reloading.
Nov 22 08:51:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:16 compute-0 systemd-sysv-generator[237192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:51:16 compute-0 systemd-rc-local-generator[237188]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:51:16 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 08:51:16 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 08:51:17 compute-0 lvm[237237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 08:51:17 compute-0 lvm[237237]: VG ceph_vg1 finished
Nov 22 08:51:17 compute-0 lvm[237235]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 08:51:17 compute-0 lvm[237235]: VG ceph_vg0 finished
Nov 22 08:51:17 compute-0 lvm[237234]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 08:51:17 compute-0 lvm[237234]: VG ceph_vg2 finished
Nov 22 08:51:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:51:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:51:17 compute-0 systemd[1]: Reloading.
Nov 22 08:51:17 compute-0 systemd-rc-local-generator[237289]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:51:17 compute-0 systemd-sysv-generator[237292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:51:17 compute-0 ceph-mon[75021]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:51:18 compute-0 sudo[237093]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:18 compute-0 sudo[238576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loausnvzzagcrdtftpaieidbptgediar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801478.4073954-670-186516348139477/AnsiballZ_systemd_service.py'
Nov 22 08:51:18 compute-0 sudo[238576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:18 compute-0 python3.9[238578]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:51:19 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 22 08:51:19 compute-0 iscsid[226005]: iscsid shutting down.
Nov 22 08:51:19 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 08:51:19 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 22 08:51:19 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 08:51:19 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 08:51:19 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 08:51:19 compute-0 sudo[238576]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:19 compute-0 ceph-mon[75021]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:19 compute-0 python3.9[238732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:51:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:51:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:51:20 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.705s CPU time.
Nov 22 08:51:20 compute-0 systemd[1]: run-r7980be7ed0484ebc8d2b10d0891a5454.service: Deactivated successfully.
Nov 22 08:51:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:20 compute-0 sudo[238887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvdnnqrtonxspxpeeoobbggahkuyjja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801480.3790708-688-246123870055622/AnsiballZ_file.py'
Nov 22 08:51:20 compute-0 sudo[238887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:20 compute-0 python3.9[238889]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:20 compute-0 sudo[238887]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:21 compute-0 sudo[239039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvgekpzidkaxapxuuzfktxbrgoykywxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801481.2369242-699-70082954768286/AnsiballZ_systemd_service.py'
Nov 22 08:51:21 compute-0 sudo[239039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:21 compute-0 python3.9[239041]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:51:21 compute-0 systemd[1]: Reloading.
Nov 22 08:51:21 compute-0 ceph-mon[75021]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:22 compute-0 systemd-rc-local-generator[239070]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:51:22 compute-0 systemd-sysv-generator[239074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:51:22 compute-0 sudo[239039]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:22 compute-0 python3.9[239225]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:51:23 compute-0 network[239242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:51:23 compute-0 network[239243]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:51:23 compute-0 network[239244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:51:23 compute-0 ceph-mon[75021]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:25 compute-0 ceph-mon[75021]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:26 compute-0 sudo[239517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqrkmnuyazcbbaumztsktfsecjbozawg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801486.6101465-718-71269057061961/AnsiballZ_systemd_service.py'
Nov 22 08:51:26 compute-0 sudo[239517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:27 compute-0 python3.9[239519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:27 compute-0 sudo[239517]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:27 compute-0 ceph-mon[75021]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:27 compute-0 sudo[239670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncrmaxxcqryenwxfahpibdlinjsuwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801487.4211292-718-245687158398704/AnsiballZ_systemd_service.py'
Nov 22 08:51:27 compute-0 sudo[239670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.933 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:28 compute-0 python3.9[239672]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:28 compute-0 sudo[239670]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:28 compute-0 sudo[239823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwszqswwxuoxzlmrfssttwarglzqvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801488.1923428-718-84460888554471/AnsiballZ_systemd_service.py'
Nov 22 08:51:28 compute-0 sudo[239823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:28 compute-0 python3.9[239825]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:28 compute-0 sudo[239823]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:29 compute-0 sudo[239976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvnkdmnguwiouoqfblophsmfjnkdaicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801489.0413358-718-89107316773394/AnsiballZ_systemd_service.py'
Nov 22 08:51:29 compute-0 sudo[239976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:29 compute-0 python3.9[239978]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:29 compute-0 ceph-mon[75021]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:29 compute-0 sudo[239976]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:30 compute-0 sudo[240129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymgpmptyroywzxewvhwuujworgiogxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801489.8256936-718-4569571583117/AnsiballZ_systemd_service.py'
Nov 22 08:51:30 compute-0 sudo[240129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:30 compute-0 python3.9[240131]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:30 compute-0 sudo[240129]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:30 compute-0 sudo[240282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkvpoxhembdpmsweggwwepoczrdnqoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801490.588003-718-90202450330141/AnsiballZ_systemd_service.py'
Nov 22 08:51:30 compute-0 sudo[240282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:31 compute-0 python3.9[240284]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:31 compute-0 sudo[240282]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:31 compute-0 ceph-mon[75021]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:31 compute-0 sudo[240450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumehooirvgeljphraspghofjbbsuquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801491.3934505-718-136335238775192/AnsiballZ_systemd_service.py'
Nov 22 08:51:31 compute-0 sudo[240450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:31 compute-0 podman[240409]: 2025-11-22 08:51:31.720507001 +0000 UTC m=+0.067361294 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 08:51:32 compute-0 python3.9[240456]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:32 compute-0 sudo[240450]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:32 compute-0 sudo[240607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvjllrknmtrpqyzhhhtblnykfhvzzax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801492.212558-718-218126638875279/AnsiballZ_systemd_service.py'
Nov 22 08:51:32 compute-0 sudo[240607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:32 compute-0 python3.9[240609]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:51:32 compute-0 sudo[240607]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:33 compute-0 sudo[240760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evjdpsnmafprwpixvokzxgydfejwfiyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801493.2036638-777-10373741815998/AnsiballZ_file.py'
Nov 22 08:51:33 compute-0 sudo[240760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:33 compute-0 ceph-mon[75021]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:33 compute-0 python3.9[240762]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:33 compute-0 sudo[240760]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:34 compute-0 sudo[240912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxsazlxnacpwlcohvxfecmhzjhdmxlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801493.8680966-777-158668434745736/AnsiballZ_file.py'
Nov 22 08:51:34 compute-0 sudo[240912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:34 compute-0 python3.9[240914]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:34 compute-0 sudo[240912]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:34 compute-0 sudo[241064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipzekmvesmerrjwuoonynrkcjjmxvjkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801494.4623532-777-212751656251063/AnsiballZ_file.py'
Nov 22 08:51:34 compute-0 sudo[241064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:34 compute-0 python3.9[241066]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:34 compute-0 sudo[241064]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:35 compute-0 sudo[241216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkeoihsxvuboxkvoexrcvvoirlotcuze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801495.0752823-777-116210902793457/AnsiballZ_file.py'
Nov 22 08:51:35 compute-0 sudo[241216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:35 compute-0 python3.9[241218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:35 compute-0 sudo[241216]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:35 compute-0 ceph-mon[75021]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:35 compute-0 sudo[241368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrjjbauavqhgfheardlhkgxokkcjqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801495.6804538-777-77372999705206/AnsiballZ_file.py'
Nov 22 08:51:35 compute-0 sudo[241368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:36 compute-0 python3.9[241370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:36 compute-0 sudo[241368]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:36 compute-0 sudo[241520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzoinqqnjqhqwqityiitszrhfzshtze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801496.3502946-777-48053031757523/AnsiballZ_file.py'
Nov 22 08:51:36 compute-0 sudo[241520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:36 compute-0 python3.9[241522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:36 compute-0 sudo[241520]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:37 compute-0 podman[241646]: 2025-11-22 08:51:37.410284415 +0000 UTC m=+0.064290911 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 08:51:37 compute-0 sudo[241686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpdyrbvhqlaavpzhbhndooyyucachxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801497.095568-777-45647327630888/AnsiballZ_file.py'
Nov 22 08:51:37 compute-0 sudo[241686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:37 compute-0 python3.9[241691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:37 compute-0 sudo[241686]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:37 compute-0 ceph-mon[75021]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:38 compute-0 sudo[241842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gparouyaouxfmydribuqewbwhekvobfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801497.8060308-777-5872984950586/AnsiballZ_file.py'
Nov 22 08:51:38 compute-0 sudo[241842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:38 compute-0 python3.9[241844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:38 compute-0 sudo[241842]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:38 compute-0 sudo[241994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uziwkvcuujnlzdevcferafhcrpnxkzef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801498.4639194-834-185439026213788/AnsiballZ_file.py'
Nov 22 08:51:38 compute-0 sudo[241994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:38 compute-0 ceph-mon[75021]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:38 compute-0 python3.9[241996]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:38 compute-0 sudo[241994]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:39 compute-0 sudo[242146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhmfgcsyzpcidjhzbqsakzkjmyqbhbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801499.114998-834-160359876081334/AnsiballZ_file.py'
Nov 22 08:51:39 compute-0 sudo[242146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:39 compute-0 python3.9[242148]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:39 compute-0 sudo[242146]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:40 compute-0 sudo[242298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgecxrdpmbafgpzrygzwhvqamfrszdfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801499.7831633-834-48688024411639/AnsiballZ_file.py'
Nov 22 08:51:40 compute-0 sudo[242298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:40 compute-0 python3.9[242300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:40 compute-0 sudo[242298]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:40 compute-0 sudo[242450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlneqvqqnxlezntmraqzdpfhwbtwzkyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801500.4298897-834-70236983189107/AnsiballZ_file.py'
Nov 22 08:51:40 compute-0 sudo[242450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:40 compute-0 python3.9[242452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:40 compute-0 sudo[242450]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:41 compute-0 sudo[242602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqfiwuhxwlpnhnbqundzuvtqcgsuappl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801501.0565767-834-55986327224612/AnsiballZ_file.py'
Nov 22 08:51:41 compute-0 sudo[242602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:41 compute-0 python3.9[242604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:41 compute-0 sudo[242602]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:41 compute-0 ceph-mon[75021]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:41 compute-0 sudo[242754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mftwlgvhcdhdphoibjiwkyjhbobbvkxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801501.677472-834-188299631039684/AnsiballZ_file.py'
Nov 22 08:51:41 compute-0 sudo[242754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:42 compute-0 python3.9[242756]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:42 compute-0 sudo[242754]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:42 compute-0 sudo[242906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvzwylwsvjampzpvlpefymhnmraloatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801502.3110006-834-178264796815675/AnsiballZ_file.py'
Nov 22 08:51:42 compute-0 sudo[242906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:42 compute-0 python3.9[242908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:42 compute-0 sudo[242906]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:43 compute-0 sudo[243058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caetxjxuwnqkjabntmutbtkvxqtivcns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801502.9449992-834-267436619022881/AnsiballZ_file.py'
Nov 22 08:51:43 compute-0 sudo[243058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:43 compute-0 python3.9[243060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:51:43 compute-0 sudo[243058]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:43 compute-0 ceph-mon[75021]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:43 compute-0 sudo[243210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqlrvzxlkmrcvwdztyqwvjtxejtwisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801503.672662-892-109335600100742/AnsiballZ_command.py'
Nov 22 08:51:43 compute-0 sudo[243210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:44 compute-0 python3.9[243212]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:44 compute-0 sudo[243210]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:44 compute-0 podman[243338]: 2025-11-22 08:51:44.862478558 +0000 UTC m=+0.093306803 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Nov 22 08:51:45 compute-0 python3.9[243377]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:51:45 compute-0 sudo[243538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwrwlmdixielgxwotlqcpepzxlsdhng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801505.26935-910-116695648033252/AnsiballZ_systemd_service.py'
Nov 22 08:51:45 compute-0 sudo[243538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:45 compute-0 ceph-mon[75021]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:45 compute-0 python3.9[243540]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:51:45 compute-0 systemd[1]: Reloading.
Nov 22 08:51:46 compute-0 systemd-sysv-generator[243571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:51:46 compute-0 systemd-rc-local-generator[243568]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:51:46 compute-0 sudo[243538]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:46 compute-0 sudo[243725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgwyqmuxcpfejwxnvdpbfqjwirlqmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801506.5586681-918-155714476763843/AnsiballZ_command.py'
Nov 22 08:51:46 compute-0 sudo[243725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:47 compute-0 python3.9[243727]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:47 compute-0 sudo[243725]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:47 compute-0 sudo[243878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcuyhygpzmepdyluvbydirjyzrbmocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801507.2225096-918-66339996286818/AnsiballZ_command.py'
Nov 22 08:51:47 compute-0 sudo[243878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:47 compute-0 ceph-mon[75021]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:47 compute-0 python3.9[243880]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:47 compute-0 sudo[243878]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:48 compute-0 sudo[244031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyifmbnjbvrxeewdgkpwcftyllkpwttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801507.8953328-918-278347610834678/AnsiballZ_command.py'
Nov 22 08:51:48 compute-0 sudo[244031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:48 compute-0 python3.9[244033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:48 compute-0 sudo[244031]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:48 compute-0 sudo[244035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:48 compute-0 sudo[244035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:48 compute-0 sudo[244035]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:48 compute-0 sudo[244083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:51:48 compute-0 sudo[244083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:48 compute-0 sudo[244083]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:48 compute-0 sudo[244132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:48 compute-0 sudo[244132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:48 compute-0 sudo[244132]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:48 compute-0 sudo[244185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:51:48 compute-0 sudo[244185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:48 compute-0 sudo[244286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqbgmvilywsiyomljeuseqcjrwofjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801508.6502585-918-269202038814382/AnsiballZ_command.py'
Nov 22 08:51:48 compute-0 sudo[244286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:49 compute-0 python3.9[244294]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:49 compute-0 sudo[244286]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:49 compute-0 sudo[244185]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:51:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6c9693f8-1ec6-45f5-afec-67bbea0ff1a3 does not exist
Nov 22 08:51:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 494467cf-4e16-4ec6-89eb-06f707b5c44c does not exist
Nov 22 08:51:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 74eeae6a-fde0-4d03-a7af-b6547864e0b7 does not exist
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:51:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:49 compute-0 sudo[244417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:49 compute-0 sudo[244417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:49 compute-0 sudo[244417]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:49 compute-0 sudo[244467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:51:49 compute-0 sudo[244467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:49 compute-0 sudo[244467]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:49 compute-0 sudo[244521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jheocrgdpojbvswtnigkvjdyrieqpvtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801509.327554-918-245956736114768/AnsiballZ_command.py'
Nov 22 08:51:49 compute-0 sudo[244521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:49 compute-0 sudo[244519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:49 compute-0 sudo[244519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:49 compute-0 sudo[244519]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:49 compute-0 sudo[244547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:51:49 compute-0 sudo[244547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:49 compute-0 ceph-mon[75021]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:51:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:51:49 compute-0 python3.9[244538]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:49 compute-0 sudo[244521]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.001346881 +0000 UTC m=+0.023158690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:50 compute-0 sudo[244773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhitczonillbzlavbtlekhfachlnzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801510.0159426-918-145643031698733/AnsiballZ_command.py'
Nov 22 08:51:50 compute-0 sudo[244773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.297390134 +0000 UTC m=+0.319201923 container create 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:51:50 compute-0 systemd[1]: Started libpod-conmon-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope.
Nov 22 08:51:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.446994849 +0000 UTC m=+0.468806638 container init 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.458394079 +0000 UTC m=+0.480205868 container start 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 08:51:50 compute-0 awesome_almeida[244778]: 167 167
Nov 22 08:51:50 compute-0 systemd[1]: libpod-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope: Deactivated successfully.
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.471756877 +0000 UTC m=+0.493568676 container attach 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.472217069 +0000 UTC m=+0.494028858 container died 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:51:50 compute-0 python3.9[244775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:50 compute-0 sudo[244773]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c1771a61cd7be6751145788644ab7f558a460ec9be2c8a273c64ff6e8bbab1-merged.mount: Deactivated successfully.
Nov 22 08:51:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:50 compute-0 podman[244634]: 2025-11-22 08:51:50.537838781 +0000 UTC m=+0.559650570 container remove 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 08:51:50 compute-0 systemd[1]: libpod-conmon-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope: Deactivated successfully.
Nov 22 08:51:50 compute-0 podman[244848]: 2025-11-22 08:51:50.713947896 +0000 UTC m=+0.049675710 container create 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 08:51:50 compute-0 systemd[1]: Started libpod-conmon-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope.
Nov 22 08:51:50 compute-0 podman[244848]: 2025-11-22 08:51:50.69127344 +0000 UTC m=+0.027001274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:50 compute-0 podman[244848]: 2025-11-22 08:51:50.855480554 +0000 UTC m=+0.191208388 container init 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 08:51:50 compute-0 podman[244848]: 2025-11-22 08:51:50.864865534 +0000 UTC m=+0.200593348 container start 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 08:51:50 compute-0 podman[244848]: 2025-11-22 08:51:50.870051142 +0000 UTC m=+0.205778976 container attach 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:51:50 compute-0 sudo[244971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjnqemheajjjkqdubzbkylrgziymsit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801510.6486967-918-128590903903363/AnsiballZ_command.py'
Nov 22 08:51:50 compute-0 sudo[244971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:51 compute-0 python3.9[244973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:51 compute-0 sudo[244971]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:51 compute-0 sudo[245124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdbjkcyeqfsjpeosrqxkuexretvwlhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801511.3255982-918-62413461078368/AnsiballZ_command.py'
Nov 22 08:51:51 compute-0 sudo[245124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:51 compute-0 python3.9[245126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:51:51 compute-0 ceph-mon[75021]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:51 compute-0 sudo[245124]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:52 compute-0 ecstatic_franklin[244906]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:51:52 compute-0 ecstatic_franklin[244906]: --> relative data size: 1.0
Nov 22 08:51:52 compute-0 ecstatic_franklin[244906]: --> All data devices are unavailable
Nov 22 08:51:52 compute-0 systemd[1]: libpod-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Deactivated successfully.
Nov 22 08:51:52 compute-0 systemd[1]: libpod-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Consumed 1.143s CPU time.
Nov 22 08:51:52 compute-0 podman[244848]: 2025-11-22 08:51:52.070666886 +0000 UTC m=+1.406394700 container died 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:51:52
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.log']
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b-merged.mount: Deactivated successfully.
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:52 compute-0 podman[244848]: 2025-11-22 08:51:52.552114314 +0000 UTC m=+1.887842128 container remove 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:51:52 compute-0 sudo[244547]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:52 compute-0 systemd[1]: libpod-conmon-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Deactivated successfully.
Nov 22 08:51:52 compute-0 sudo[245188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:52 compute-0 sudo[245188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:52 compute-0 sudo[245188]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:51:52 compute-0 sudo[245236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:51:52 compute-0 sudo[245236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:52 compute-0 sudo[245236]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:52 compute-0 sudo[245290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:52 compute-0 sudo[245290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:52 compute-0 sudo[245290]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:51:52 compute-0 sudo[245338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:51:52 compute-0 sudo[245338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:51:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:51:52 compute-0 sudo[245413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmvsfzslmzrrnidzablriannxtngkwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801512.6840339-997-201669317154257/AnsiballZ_file.py'
Nov 22 08:51:52 compute-0 sudo[245413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:52 compute-0 ceph-mon[75021]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:53 compute-0 python3.9[245415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:53 compute-0 sudo[245413]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.205532006 +0000 UTC m=+0.025362364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.31152558 +0000 UTC m=+0.131355908 container create 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 08:51:53 compute-0 systemd[1]: Started libpod-conmon-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope.
Nov 22 08:51:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.594909792 +0000 UTC m=+0.414740150 container init 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.603908003 +0000 UTC m=+0.423738331 container start 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:51:53 compute-0 systemd[1]: libpod-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope: Deactivated successfully.
Nov 22 08:51:53 compute-0 frosty_mcclintock[245584]: 167 167
Nov 22 08:51:53 compute-0 sudo[245624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snxahbqiezyofjeytynvhkknwnozmxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801513.338923-997-201077606011451/AnsiballZ_file.py'
Nov 22 08:51:53 compute-0 conmon[245584]: conmon 51f7d86aaddc37ac672a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope/container/memory.events
Nov 22 08:51:53 compute-0 sudo[245624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.708331668 +0000 UTC m=+0.528162016 container attach 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 08:51:53 compute-0 podman[245456]: 2025-11-22 08:51:53.709165709 +0000 UTC m=+0.528996047 container died 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 08:51:53 compute-0 python3.9[245628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:53 compute-0 sudo[245624]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c968a6741bcc85cfe10d6879986d9a057e30891444bf2e49ffaa51cadb168135-merged.mount: Deactivated successfully.
Nov 22 08:51:54 compute-0 podman[245456]: 2025-11-22 08:51:54.103040505 +0000 UTC m=+0.922870833 container remove 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:51:54 compute-0 systemd[1]: libpod-conmon-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope: Deactivated successfully.
Nov 22 08:51:54 compute-0 sudo[245812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgstjsnxupauwlcvrkljafflozxrxaso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801514.0126824-997-86046648251676/AnsiballZ_file.py'
Nov 22 08:51:54 compute-0 sudo[245812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:54 compute-0 podman[245772]: 2025-11-22 08:51:54.312059869 +0000 UTC m=+0.072486811 container create 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 08:51:54 compute-0 podman[245772]: 2025-11-22 08:51:54.26568859 +0000 UTC m=+0.026115562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:54 compute-0 systemd[1]: Started libpod-conmon-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope.
Nov 22 08:51:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:54 compute-0 python3.9[245814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:54 compute-0 podman[245772]: 2025-11-22 08:51:54.629621521 +0000 UTC m=+0.390048493 container init 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 08:51:54 compute-0 podman[245772]: 2025-11-22 08:51:54.640795035 +0000 UTC m=+0.401221967 container start 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 08:51:54 compute-0 sudo[245812]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:54 compute-0 podman[245772]: 2025-11-22 08:51:54.675873337 +0000 UTC m=+0.436300309 container attach 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:51:55 compute-0 sudo[245972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hougxmryfuxnlntxmloptbbhvnoujvku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801514.8296466-1019-28284936229365/AnsiballZ_file.py'
Nov 22 08:51:55 compute-0 sudo[245972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:55 compute-0 python3.9[245974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:55 compute-0 sudo[245972]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:55 compute-0 adoring_chaum[245818]: {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     "0": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "devices": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "/dev/loop3"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             ],
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_name": "ceph_lv0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_size": "21470642176",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "name": "ceph_lv0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "tags": {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_name": "ceph",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.crush_device_class": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.encrypted": "0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_id": "0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.vdo": "0"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             },
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "vg_name": "ceph_vg0"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         }
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     ],
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     "1": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "devices": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "/dev/loop4"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             ],
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_name": "ceph_lv1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_size": "21470642176",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "name": "ceph_lv1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "tags": {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_name": "ceph",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.crush_device_class": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.encrypted": "0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_id": "1",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.vdo": "0"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             },
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "vg_name": "ceph_vg1"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         }
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     ],
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     "2": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "devices": [
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "/dev/loop5"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             ],
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_name": "ceph_lv2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_size": "21470642176",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "name": "ceph_lv2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "tags": {
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.cluster_name": "ceph",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.crush_device_class": "",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.encrypted": "0",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osd_id": "2",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:                 "ceph.vdo": "0"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             },
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "type": "block",
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:             "vg_name": "ceph_vg2"
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:         }
Nov 22 08:51:55 compute-0 adoring_chaum[245818]:     ]
Nov 22 08:51:55 compute-0 adoring_chaum[245818]: }
Nov 22 08:51:55 compute-0 systemd[1]: libpod-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope: Deactivated successfully.
Nov 22 08:51:55 compute-0 podman[245772]: 2025-11-22 08:51:55.548212017 +0000 UTC m=+1.308638959 container died 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd-merged.mount: Deactivated successfully.
Nov 22 08:51:55 compute-0 ceph-mon[75021]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:55 compute-0 sudo[246140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvvpoiqkuwskyjvdhatcjtcerjuiawod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801515.5080748-1019-105027178648354/AnsiballZ_file.py'
Nov 22 08:51:55 compute-0 sudo[246140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:55 compute-0 podman[245772]: 2025-11-22 08:51:55.874741059 +0000 UTC m=+1.635168041 container remove 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:51:55 compute-0 systemd[1]: libpod-conmon-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope: Deactivated successfully.
Nov 22 08:51:55 compute-0 sudo[245338]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:55 compute-0 sudo[246143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:55 compute-0 sudo[246143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:55 compute-0 sudo[246143]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:56 compute-0 python3.9[246142]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:56 compute-0 sudo[246168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:51:56 compute-0 sudo[246168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:56 compute-0 sudo[246168]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:56 compute-0 sudo[246140]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:56 compute-0 sudo[246193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:51:56 compute-0 sudo[246193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:56 compute-0 sudo[246193]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:56 compute-0 sudo[246241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:51:56 compute-0 sudo[246241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:51:56 compute-0 sudo[246435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfoezzuzeninwheszngfsfyfaocpkmun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801516.2075374-1019-112643919394052/AnsiballZ_file.py'
Nov 22 08:51:56 compute-0 sudo[246435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:56 compute-0 podman[246431]: 2025-11-22 08:51:56.503945926 +0000 UTC m=+0.027608460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:56 compute-0 podman[246431]: 2025-11-22 08:51:56.608190497 +0000 UTC m=+0.131852990 container create 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 08:51:56 compute-0 systemd[1]: Started libpod-conmon-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope.
Nov 22 08:51:56 compute-0 python3.9[246447]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:56 compute-0 sudo[246435]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:56 compute-0 podman[246431]: 2025-11-22 08:51:56.992593551 +0000 UTC m=+0.516256054 container init 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:51:57 compute-0 podman[246431]: 2025-11-22 08:51:57.00276382 +0000 UTC m=+0.526426303 container start 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 08:51:57 compute-0 ceph-mon[75021]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:57 compute-0 systemd[1]: libpod-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope: Deactivated successfully.
Nov 22 08:51:57 compute-0 bold_snyder[246452]: 167 167
Nov 22 08:51:57 compute-0 conmon[246452]: conmon 28438b30ee9793a4e907 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope/container/memory.events
Nov 22 08:51:57 compute-0 sudo[246617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnwimzrrffeyvwictrepddjvvyjlpjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801516.9255233-1019-215464624267831/AnsiballZ_file.py'
Nov 22 08:51:57 compute-0 sudo[246617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:57 compute-0 podman[246431]: 2025-11-22 08:51:57.301950641 +0000 UTC m=+0.825613174 container attach 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 08:51:57 compute-0 podman[246431]: 2025-11-22 08:51:57.30438715 +0000 UTC m=+0.828049703 container died 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:51:57 compute-0 python3.9[246619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:57 compute-0 sudo[246617]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-966f7a9229e4cbb1de9d63adde62f0f95ee58a35bac5f6502aa37af730cd34d3-merged.mount: Deactivated successfully.
Nov 22 08:51:57 compute-0 sudo[246770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhoutljnzzcpnuxbrwklfwpmxtzbglha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801517.6522899-1019-272937942475234/AnsiballZ_file.py'
Nov 22 08:51:57 compute-0 sudo[246770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:58 compute-0 python3.9[246772]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:58 compute-0 sudo[246770]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:58 compute-0 podman[246431]: 2025-11-22 08:51:58.338794142 +0000 UTC m=+1.862456645 container remove 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:51:58 compute-0 systemd[1]: libpod-conmon-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope: Deactivated successfully.
Nov 22 08:51:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:58 compute-0 podman[246879]: 2025-11-22 08:51:58.488363476 +0000 UTC m=+0.024010471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:51:58 compute-0 sudo[246943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzappwsmgjvovnpwfgkacvdfawvsbvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801518.315147-1019-41990289533389/AnsiballZ_file.py'
Nov 22 08:51:58 compute-0 sudo[246943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:58 compute-0 podman[246879]: 2025-11-22 08:51:58.775218003 +0000 UTC m=+0.310864978 container create 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 08:51:58 compute-0 python3.9[246945]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:58 compute-0 sudo[246943]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:59 compute-0 systemd[1]: Started libpod-conmon-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope.
Nov 22 08:51:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:59 compute-0 ceph-mon[75021]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:51:59 compute-0 sudo[247100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlftxdkcjzgekbmktxjxtcyijikmjsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801518.988168-1019-72025984359713/AnsiballZ_file.py'
Nov 22 08:51:59 compute-0 sudo[247100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:51:59 compute-0 podman[246879]: 2025-11-22 08:51:59.467782317 +0000 UTC m=+1.003429312 container init 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 08:51:59 compute-0 podman[246879]: 2025-11-22 08:51:59.479052304 +0000 UTC m=+1.014699299 container start 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 08:51:59 compute-0 python3.9[247102]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:51:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:51:59 compute-0 sudo[247100]: pam_unix(sudo:session): session closed for user root
Nov 22 08:51:59 compute-0 podman[246879]: 2025-11-22 08:51:59.534417814 +0000 UTC m=+1.070064809 container attach 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]: {
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_id": 1,
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "type": "bluestore"
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     },
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_id": 0,
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "type": "bluestore"
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     },
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_id": 2,
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:         "type": "bluestore"
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]:     }
Nov 22 08:52:00 compute-0 sleepy_hofstadter[247075]: }
Nov 22 08:52:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:00 compute-0 systemd[1]: libpod-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Deactivated successfully.
Nov 22 08:52:00 compute-0 systemd[1]: libpod-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Consumed 1.099s CPU time.
Nov 22 08:52:00 compute-0 podman[247157]: 2025-11-22 08:52:00.613614556 +0000 UTC m=+0.027396295 container died 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 08:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d-merged.mount: Deactivated successfully.
Nov 22 08:52:00 compute-0 podman[247157]: 2025-11-22 08:52:00.683734638 +0000 UTC m=+0.097516367 container remove 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:52:00 compute-0 systemd[1]: libpod-conmon-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Deactivated successfully.
Nov 22 08:52:00 compute-0 sudo[246241]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:52:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:52:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:52:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:52:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e3e63fc8-fa1f-4406-845e-8a09e1ac0a80 does not exist
Nov 22 08:52:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3c39539a-7cd2-4479-a6f8-499d39216b6e does not exist
Nov 22 08:52:00 compute-0 sudo[247169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:52:00 compute-0 sudo[247169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:52:00 compute-0 sudo[247169]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:00 compute-0 sudo[247194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:52:00 compute-0 sudo[247194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:52:00 compute-0 sudo[247194]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:01 compute-0 ceph-mon[75021]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:52:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:52:02 compute-0 podman[247219]: 2025-11-22 08:52:02.377833076 +0000 UTC m=+0.064168127 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 08:52:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:03 compute-0 ceph-mon[75021]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:05 compute-0 sudo[247365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omoxkwggufvbnqxailzmykhfyuwnjcel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801524.6833956-1208-63191839597622/AnsiballZ_getent.py'
Nov 22 08:52:05 compute-0 sudo[247365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:05 compute-0 python3.9[247367]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 08:52:05 compute-0 sudo[247365]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:05 compute-0 ceph-mon[75021]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:06 compute-0 sudo[247518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabwatlyvfrpupogaejddoipgzzclvix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801525.6145606-1216-227832327379694/AnsiballZ_group.py'
Nov 22 08:52:06 compute-0 sudo[247518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:06 compute-0 python3.9[247520]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 08:52:06 compute-0 groupadd[247521]: group added to /etc/group: name=nova, GID=42436
Nov 22 08:52:06 compute-0 groupadd[247521]: group added to /etc/gshadow: name=nova
Nov 22 08:52:06 compute-0 groupadd[247521]: new group: name=nova, GID=42436
Nov 22 08:52:06 compute-0 sudo[247518]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:07 compute-0 ceph-mon[75021]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:07 compute-0 sudo[247676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkursjqruoghttvwazfdyjwnjznnwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801526.7221181-1224-233901974035839/AnsiballZ_user.py'
Nov 22 08:52:07 compute-0 sudo[247676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:07 compute-0 python3.9[247678]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 08:52:07 compute-0 podman[247680]: 2025-11-22 08:52:07.612478083 +0000 UTC m=+0.067925790 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:52:08 compute-0 useradd[247681]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 22 08:52:08 compute-0 useradd[247681]: add 'nova' to group 'libvirt'
Nov 22 08:52:08 compute-0 useradd[247681]: add 'nova' to shadow group 'libvirt'
Nov 22 08:52:08 compute-0 sudo[247676]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:09 compute-0 sshd-session[247730]: Accepted publickey for zuul from 192.168.122.30 port 35240 ssh2: ECDSA SHA256:tikRPC42/ncVfP2lnh0iO6vjJo8w9amYgweJm9+SStg
Nov 22 08:52:09 compute-0 systemd-logind[822]: New session 51 of user zuul.
Nov 22 08:52:09 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 22 08:52:09 compute-0 sshd-session[247730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:52:09 compute-0 sshd-session[247733]: Received disconnect from 192.168.122.30 port 35240:11: disconnected by user
Nov 22 08:52:09 compute-0 sshd-session[247733]: Disconnected from user zuul 192.168.122.30 port 35240
Nov 22 08:52:09 compute-0 sshd-session[247730]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:52:09 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 22 08:52:09 compute-0 systemd-logind[822]: Session 51 logged out. Waiting for processes to exit.
Nov 22 08:52:09 compute-0 systemd-logind[822]: Removed session 51.
Nov 22 08:52:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:09 compute-0 ceph-mon[75021]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:10 compute-0 python3.9[247883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:10 compute-0 python3.9[248004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801529.6655097-1249-205350960515205/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:11 compute-0 ceph-mon[75021]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:11 compute-0 python3.9[248154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:11 compute-0 python3.9[248230]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:12 compute-0 python3.9[248380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:13 compute-0 python3.9[248501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801532.1032903-1249-209344498097658/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:13 compute-0 python3.9[248651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:13 compute-0 ceph-mon[75021]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:14 compute-0 python3.9[248772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801533.3160992-1249-107570181640127/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:15 compute-0 ceph-mon[75021]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:15 compute-0 python3.9[248922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:15 compute-0 podman[248980]: 2025-11-22 08:52:15.417610425 +0000 UTC m=+0.104937559 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:52:15 compute-0 python3.9[249070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801534.6082919-1249-7896105229988/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:16 compute-0 python3.9[249220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:16 compute-0 python3.9[249341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801535.867949-1249-97481030015628/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:17 compute-0 sudo[249491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxkiymxkrtdvhxazqumrsqzgmhywozdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801537.1319172-1332-88382300524980/AnsiballZ_file.py'
Nov 22 08:52:17 compute-0 sudo[249491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:17 compute-0 python3.9[249493]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:52:17 compute-0 sudo[249491]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:17 compute-0 ceph-mon[75021]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:18 compute-0 sudo[249643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmutlvztysrfxkqkuoqwyqjkptbysbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801537.873959-1340-266134059828629/AnsiballZ_copy.py'
Nov 22 08:52:18 compute-0 sudo[249643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:18 compute-0 python3.9[249645]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:52:18 compute-0 sudo[249643]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:18 compute-0 ceph-mon[75021]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:18 compute-0 sudo[249795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmatbpcsihwntyykpmpbdnjiqbbpljs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801538.6293516-1348-83966997606336/AnsiballZ_stat.py'
Nov 22 08:52:18 compute-0 sudo[249795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:19 compute-0 python3.9[249797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:19 compute-0 sudo[249795]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:19 compute-0 sudo[249947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcwfdsjypzdqffakedmtziznpikzbtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801539.3724637-1356-225291577664943/AnsiballZ_stat.py'
Nov 22 08:52:19 compute-0 sudo[249947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:19 compute-0 python3.9[249949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:19 compute-0 sudo[249947]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:20 compute-0 sudo[250070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biftgbygbvnedsloqmxvfaewxjdhjcdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801539.3724637-1356-225291577664943/AnsiballZ_copy.py'
Nov 22 08:52:20 compute-0 sudo[250070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:20 compute-0 python3.9[250072]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763801539.3724637-1356-225291577664943/.source _original_basename=.z5gmi96d follow=False checksum=d979bb4b9932f74e77e9b58335859e74e5c0b61d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 08:52:20 compute-0 sudo[250070]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:21 compute-0 python3.9[250224]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:21 compute-0 ceph-mon[75021]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:21 compute-0 python3.9[250376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:22 compute-0 python3.9[250497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801541.379246-1382-125692241791445/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:23 compute-0 python3.9[250647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:52:23 compute-0 python3.9[250768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801542.6112316-1397-81530525551120/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:52:23 compute-0 ceph-mon[75021]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:24 compute-0 sudo[250918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgwqcjrnfdorynjjuipdivhvjcmqnky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801543.9501176-1414-66274161434427/AnsiballZ_container_config_data.py'
Nov 22 08:52:24 compute-0 sudo[250918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:24 compute-0 python3.9[250920]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 08:52:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:24 compute-0 sudo[250918]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:25 compute-0 sudo[251070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlotsrmyzsxcbkdasazvzziyshvbcqas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801544.8905733-1423-55727175998782/AnsiballZ_container_config_hash.py'
Nov 22 08:52:25 compute-0 sudo[251070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:25 compute-0 python3.9[251072]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:52:25 compute-0 sudo[251070]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:25 compute-0 ceph-mon[75021]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:25 compute-0 sudo[251222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwnyahbkpendrmrlmfccwdohliwaawjb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763801545.6736524-1433-95675860109771/AnsiballZ_edpm_container_manage.py'
Nov 22 08:52:25 compute-0 sudo[251222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:26 compute-0 python3[251224]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:52:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:27 compute-0 ceph-mon[75021]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.932 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:30 compute-0 ceph-mon[75021]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:31 compute-0 ceph-mon[75021]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:33 compute-0 ceph-mon[75021]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:37 compute-0 ceph-mon[75021]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:39 compute-0 ceph-mon[75021]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:41 compute-0 ceph-mon[75021]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:43 compute-0 ceph-mon[75021]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:43 compute-0 podman[251294]: 2025-11-22 08:52:43.740350364 +0000 UTC m=+11.303553418 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:52:43 compute-0 podman[251305]: 2025-11-22 08:52:43.764061807 +0000 UTC m=+5.456909368 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:52:43 compute-0 podman[251238]: 2025-11-22 08:52:43.858926567 +0000 UTC m=+17.482278917 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 08:52:44 compute-0 podman[251357]: 2025-11-22 08:52:44.057948496 +0000 UTC m=+0.096777229 container create d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 08:52:44 compute-0 podman[251357]: 2025-11-22 08:52:43.987469255 +0000 UTC m=+0.026298018 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 08:52:44 compute-0 python3[251224]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 22 08:52:44 compute-0 sudo[251222]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:44 compute-0 ceph-mon[75021]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:44 compute-0 sudo[251545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bheqzoftaknzuwmtuotxuqikdopkxobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801564.3904443-1441-192738522874771/AnsiballZ_stat.py'
Nov 22 08:52:44 compute-0 sudo[251545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:44 compute-0 python3.9[251547]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:44 compute-0 sudo[251545]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:45 compute-0 ceph-mon[75021]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:45 compute-0 sudo[251714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icwlxapoyedqsiethtrumlghybzxoqlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801565.2925563-1453-68275454216015/AnsiballZ_container_config_data.py'
Nov 22 08:52:45 compute-0 sudo[251714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:45 compute-0 podman[251673]: 2025-11-22 08:52:45.638592936 +0000 UTC m=+0.096874300 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 08:52:45 compute-0 python3.9[251721]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 22 08:52:45 compute-0 sudo[251714]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:46 compute-0 sudo[251877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlwbtysvdlkzcusdcgvdqxlizkfdwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801566.036291-1462-155685541287/AnsiballZ_container_config_hash.py'
Nov 22 08:52:46 compute-0 sudo[251877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:46 compute-0 python3.9[251879]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:52:46 compute-0 sudo[251877]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:47 compute-0 sudo[252029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwcivkyvynfglwsnzfnornsqqwecxwwf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763801566.8349242-1472-103339559586135/AnsiballZ_edpm_container_manage.py'
Nov 22 08:52:47 compute-0 sudo[252029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:47 compute-0 python3[252031]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:52:47 compute-0 ceph-mon[75021]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:47 compute-0 podman[252069]: 2025-11-22 08:52:47.627965308 +0000 UTC m=+0.054116570 container create 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 08:52:47 compute-0 podman[252069]: 2025-11-22 08:52:47.598711739 +0000 UTC m=+0.024863011 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 08:52:47 compute-0 python3[252031]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 22 08:52:47 compute-0 sudo[252029]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:48 compute-0 sudo[252258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qspzrqvwrbmhakeczawopazlexkffldy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801567.9383311-1480-16194171491150/AnsiballZ_stat.py'
Nov 22 08:52:48 compute-0 sudo[252258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:48 compute-0 python3.9[252260]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:48 compute-0 sudo[252258]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:48 compute-0 sudo[252412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucycarwqjuanojzufztnvkrhkzqtuxpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801568.6771958-1489-60590735971902/AnsiballZ_file.py'
Nov 22 08:52:48 compute-0 sudo[252412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:49 compute-0 python3.9[252414]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:52:49 compute-0 sudo[252412]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:49 compute-0 ceph-mon[75021]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:49 compute-0 sudo[252563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alejkaguzhgzuxksbixlgihgamqbzhys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801569.2608943-1489-23894595279325/AnsiballZ_copy.py'
Nov 22 08:52:49 compute-0 sudo[252563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:49 compute-0 python3.9[252565]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763801569.2608943-1489-23894595279325/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:52:49 compute-0 sudo[252563]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:50 compute-0 sudo[252639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivfwfrsndqyivddsqmgixlfdbqbuyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801569.2608943-1489-23894595279325/AnsiballZ_systemd.py'
Nov 22 08:52:50 compute-0 sudo[252639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:50 compute-0 python3.9[252641]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:52:50 compute-0 systemd[1]: Reloading.
Nov 22 08:52:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:50 compute-0 systemd-rc-local-generator[252668]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:52:50 compute-0 systemd-sysv-generator[252673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:52:50 compute-0 sudo[252639]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:51 compute-0 sudo[252751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwdxlxmazlzbutujwogkzscsjtszrrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801569.2608943-1489-23894595279325/AnsiballZ_systemd.py'
Nov 22 08:52:51 compute-0 sudo[252751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:51 compute-0 python3.9[252753]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:52:51 compute-0 systemd[1]: Reloading.
Nov 22 08:52:51 compute-0 systemd-rc-local-generator[252785]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:52:51 compute-0 systemd-sysv-generator[252789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:52:51 compute-0 ceph-mon[75021]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:51 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 08:52:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:52 compute-0 podman[252793]: 2025-11-22 08:52:52.079165778 +0000 UTC m=+0.135544500 container init 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:52:52 compute-0 podman[252793]: 2025-11-22 08:52:52.088566709 +0000 UTC m=+0.144945401 container start 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 22 08:52:52 compute-0 nova_compute[252809]: + sudo -E kolla_set_configs
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:52:52
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:52:52 compute-0 podman[252793]: nova_compute
Nov 22 08:52:52 compute-0 systemd[1]: Started nova_compute container.
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Validating config file
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying service configuration files
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Deleting /etc/ceph
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Creating directory /etc/ceph
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Writing out command to execute
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:52 compute-0 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:52:52 compute-0 sudo[252751]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:52 compute-0 nova_compute[252809]: ++ cat /run_command
Nov 22 08:52:52 compute-0 nova_compute[252809]: + CMD=nova-compute
Nov 22 08:52:52 compute-0 nova_compute[252809]: + ARGS=
Nov 22 08:52:52 compute-0 nova_compute[252809]: + sudo kolla_copy_cacerts
Nov 22 08:52:52 compute-0 nova_compute[252809]: + [[ ! -n '' ]]
Nov 22 08:52:52 compute-0 nova_compute[252809]: + . kolla_extend_start
Nov 22 08:52:52 compute-0 nova_compute[252809]: Running command: 'nova-compute'
Nov 22 08:52:52 compute-0 nova_compute[252809]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 08:52:52 compute-0 nova_compute[252809]: + umask 0022
Nov 22 08:52:52 compute-0 nova_compute[252809]: + exec nova-compute
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:52:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:52:53 compute-0 python3.9[252970]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:53 compute-0 ceph-mon[75021]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:54 compute-0 python3.9[253120]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:54 compute-0 python3.9[253270]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:52:55 compute-0 sudo[253421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obopzusgqwxwoiqeplvvjtajcldpcmhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801575.128527-1549-68505166290879/AnsiballZ_podman_container.py'
Nov 22 08:52:55 compute-0 sudo[253421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:55 compute-0 ceph-mon[75021]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:55 compute-0 python3.9[253423]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 08:52:56 compute-0 sudo[253421]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:56 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:52:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:56 compute-0 sudo[253597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knizgwtzqjlkyvyelawauvgjibhqspcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801576.349352-1557-237176727332401/AnsiballZ_systemd.py'
Nov 22 08:52:56 compute-0 sudo[253597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:56 compute-0 ceph-mon[75021]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:56 compute-0 python3.9[253599]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:52:57 compute-0 systemd[1]: Stopping nova_compute container...
Nov 22 08:52:57 compute-0 systemd[1]: libpod-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772.scope: Deactivated successfully.
Nov 22 08:52:57 compute-0 systemd[1]: libpod-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772.scope: Consumed 2.012s CPU time.
Nov 22 08:52:57 compute-0 podman[253603]: 2025-11-22 08:52:57.100169986 +0000 UTC m=+0.070373449 container died 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772-userdata-shm.mount: Deactivated successfully.
Nov 22 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9-merged.mount: Deactivated successfully.
Nov 22 08:52:58 compute-0 podman[253603]: 2025-11-22 08:52:58.419674372 +0000 UTC m=+1.389877845 container cleanup 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:52:58 compute-0 podman[253603]: nova_compute
Nov 22 08:52:58 compute-0 podman[253633]: nova_compute
Nov 22 08:52:58 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 08:52:58 compute-0 systemd[1]: Stopped nova_compute container.
Nov 22 08:52:58 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 08:52:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:58 compute-0 podman[253646]: 2025-11-22 08:52:58.601564361 +0000 UTC m=+0.088352282 container init 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 22 08:52:58 compute-0 podman[253646]: 2025-11-22 08:52:58.608380258 +0000 UTC m=+0.095168159 container start 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:52:58 compute-0 podman[253646]: nova_compute
Nov 22 08:52:58 compute-0 nova_compute[253661]: + sudo -E kolla_set_configs
Nov 22 08:52:58 compute-0 systemd[1]: Started nova_compute container.
Nov 22 08:52:58 compute-0 sudo[253597]: pam_unix(sudo:session): session closed for user root
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Validating config file
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying service configuration files
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /etc/ceph
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Creating directory /etc/ceph
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Writing out command to execute
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:52:58 compute-0 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:52:58 compute-0 nova_compute[253661]: ++ cat /run_command
Nov 22 08:52:58 compute-0 nova_compute[253661]: + CMD=nova-compute
Nov 22 08:52:58 compute-0 nova_compute[253661]: + ARGS=
Nov 22 08:52:58 compute-0 nova_compute[253661]: + sudo kolla_copy_cacerts
Nov 22 08:52:58 compute-0 nova_compute[253661]: + [[ ! -n '' ]]
Nov 22 08:52:58 compute-0 nova_compute[253661]: + . kolla_extend_start
Nov 22 08:52:58 compute-0 nova_compute[253661]: Running command: 'nova-compute'
Nov 22 08:52:58 compute-0 nova_compute[253661]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 08:52:58 compute-0 nova_compute[253661]: + umask 0022
Nov 22 08:52:58 compute-0 nova_compute[253661]: + exec nova-compute
Nov 22 08:52:59 compute-0 sudo[253822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvlveczsswuapbsptrkfupbjimzjsnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763801578.8898368-1566-239291649131460/AnsiballZ_podman_container.py'
Nov 22 08:52:59 compute-0 sudo[253822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:52:59 compute-0 python3.9[253824]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 08:52:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:52:59 compute-0 ceph-mon[75021]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:52:59 compute-0 systemd[1]: Started libpod-conmon-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope.
Nov 22 08:52:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 08:52:59 compute-0 podman[253847]: 2025-11-22 08:52:59.94564197 +0000 UTC m=+0.265727919 container init d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 08:52:59 compute-0 podman[253847]: 2025-11-22 08:52:59.953543013 +0000 UTC m=+0.273628942 container start d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:52:59 compute-0 python3.9[253824]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 08:53:00 compute-0 nova_compute_init[253869]: INFO:nova_statedir:Nova statedir ownership complete
Nov 22 08:53:00 compute-0 systemd[1]: libpod-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope: Deactivated successfully.
Nov 22 08:53:00 compute-0 podman[253870]: 2025-11-22 08:53:00.034791859 +0000 UTC m=+0.029760882 container died d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706-userdata-shm.mount: Deactivated successfully.
Nov 22 08:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb-merged.mount: Deactivated successfully.
Nov 22 08:53:00 compute-0 podman[253880]: 2025-11-22 08:53:00.111362051 +0000 UTC m=+0.077934636 container cleanup d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:53:00 compute-0 systemd[1]: libpod-conmon-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope: Deactivated successfully.
Nov 22 08:53:00 compute-0 sudo[253822]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:00 compute-0 sshd-session[223740]: Connection closed by 192.168.122.30 port 57480
Nov 22 08:53:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:00 compute-0 sshd-session[223737]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:53:00 compute-0 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 22 08:53:00 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 22 08:53:00 compute-0 systemd[1]: session-50.scope: Consumed 2min 29.504s CPU time.
Nov 22 08:53:00 compute-0 systemd-logind[822]: Removed session 50.
Nov 22 08:53:00 compute-0 ceph-mon[75021]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:00 compute-0 sudo[253935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:00 compute-0 sudo[253935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:00 compute-0 sudo[253935]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:00 compute-0 nova_compute[253661]: 2025-11-22 08:53:00.977 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:53:00 compute-0 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:53:00 compute-0 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:53:00 compute-0 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 22 08:53:01 compute-0 sudo[253960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:53:01 compute-0 sudo[253960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 sudo[253960]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 sudo[253985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:01 compute-0 sudo[253985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 sudo[253985]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 sudo[254010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:53:01 compute-0 sudo[254010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 nova_compute[253661]: 2025-11-22 08:53:01.191 253665 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:01 compute-0 nova_compute[253661]: 2025-11-22 08:53:01.213 253665 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:01 compute-0 nova_compute[253661]: 2025-11-22 08:53:01.214 253665 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 22 08:53:01 compute-0 sudo[254010]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b1bf8f8c-b474-44fe-b4d0-e265ef158eba does not exist
Nov 22 08:53:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2cd76acd-9714-43ef-97e9-86f9d8f70a57 does not exist
Nov 22 08:53:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e6abb35c-8a1c-44e2-9561-12c99fc1b338 does not exist
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:53:01 compute-0 sudo[254069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:01 compute-0 sudo[254069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 sudo[254069]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 sudo[254094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:53:01 compute-0 sudo[254094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 sudo[254094]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 nova_compute[253661]: 2025-11-22 08:53:01.831 253665 INFO nova.virt.driver [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:53:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:53:01 compute-0 sudo[254119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:01 compute-0 sudo[254119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:01 compute-0 sudo[254119]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:01 compute-0 sudo[254144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:53:01 compute-0 sudo[254144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.060 253665 INFO nova.compute.provider_config [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.069 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 WARNING oslo_config.cfg [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 08:53:02 compute-0 nova_compute[253661]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 08:53:02 compute-0 nova_compute[253661]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 08:53:02 compute-0 nova_compute[253661]: and ``live_migration_inbound_addr`` respectively.
Nov 22 08:53:02 compute-0 nova_compute[253661]: ).  Its value may be silently ignored in the future.
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_secret_uuid        = 34829716-a12c-57a6-8915-c1aa615c9d8a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.229 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.229 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.230 253665 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.241 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 08:53:02 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 08:53:02 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.332 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb134462820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.336 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb134462820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.337 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Connection event '1' reason 'None'
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.340231246 +0000 UTC m=+0.051712202 container create 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.358 253665 WARNING nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 08:53:02 compute-0 nova_compute[253661]: 2025-11-22 08:53:02.359 253665 DEBUG nova.virt.libvirt.volume.mount [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 08:53:02 compute-0 systemd[1]: Started libpod-conmon-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope.
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.314949955 +0000 UTC m=+0.026430951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.451569621 +0000 UTC m=+0.163050607 container init 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.462611522 +0000 UTC m=+0.174092488 container start 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.469049951 +0000 UTC m=+0.180530917 container attach 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:53:02 compute-0 dazzling_saha[254268]: 167 167
Nov 22 08:53:02 compute-0 systemd[1]: libpod-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope: Deactivated successfully.
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.472498455 +0000 UTC m=+0.183979421 container died 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 08:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4530dcd359ddc3bf11ba45add225fec7c3244e3228f2e092317a45ccf4ca61e-merged.mount: Deactivated successfully.
Nov 22 08:53:02 compute-0 podman[254231]: 2025-11-22 08:53:02.535232486 +0000 UTC m=+0.246713442 container remove 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 08:53:02 compute-0 systemd[1]: libpod-conmon-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope: Deactivated successfully.
Nov 22 08:53:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:02 compute-0 podman[254298]: 2025-11-22 08:53:02.710156273 +0000 UTC m=+0.046469932 container create f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:53:02 compute-0 systemd[1]: Started libpod-conmon-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope.
Nov 22 08:53:02 compute-0 podman[254298]: 2025-11-22 08:53:02.687375914 +0000 UTC m=+0.023689603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:02 compute-0 podman[254298]: 2025-11-22 08:53:02.805936707 +0000 UTC m=+0.142250386 container init f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:53:02 compute-0 podman[254298]: 2025-11-22 08:53:02.813359119 +0000 UTC m=+0.149672788 container start f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 08:53:02 compute-0 podman[254298]: 2025-11-22 08:53:02.818207438 +0000 UTC m=+0.154521137 container attach f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:53:02 compute-0 ceph-mon[75021]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.191 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]: 
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <host>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <uuid>02722e9f-996f-4a01-8f30-68e10821087c</uuid>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <arch>x86_64</arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model>EPYC-Rome-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <vendor>AMD</vendor>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <microcode version='16777317'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <signature family='23' model='49' stepping='0'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='x2apic'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='tsc-deadline'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='osxsave'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='hypervisor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='tsc_adjust'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='spec-ctrl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='stibp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='arch-capabilities'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='cmp_legacy'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='topoext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='virt-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='lbrv'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='tsc-scale'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='vmcb-clean'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='pause-filter'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='pfthreshold'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='svme-addr-chk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='rdctl-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='mds-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature name='pschange-mc-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <pages unit='KiB' size='4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <pages unit='KiB' size='2048'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <pages unit='KiB' size='1048576'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <power_management>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <suspend_mem/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </power_management>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <iommu support='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <migration_features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <live/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <uri_transports>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <uri_transport>tcp</uri_transport>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <uri_transport>rdma</uri_transport>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </uri_transports>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </migration_features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <topology>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <cells num='1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <cell id='0'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <memory unit='KiB'>7864312</memory>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <pages unit='KiB' size='4'>1966078</pages>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <distances>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <sibling id='0' value='10'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           </distances>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           <cpus num='8'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:           </cpus>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         </cell>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </cells>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </topology>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <cache>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </cache>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <secmodel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model>selinux</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <doi>0</doi>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </secmodel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <secmodel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model>dac</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <doi>0</doi>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </secmodel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </host>
Nov 22 08:53:03 compute-0 nova_compute[253661]: 
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <guest>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <os_type>hvm</os_type>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <arch name='i686'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <wordsize>32</wordsize>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <domain type='qemu'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <domain type='kvm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <pae/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <nonpae/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <acpi default='on' toggle='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <apic default='on' toggle='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <cpuselection/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <deviceboot/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <disksnapshot default='on' toggle='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <externalSnapshot/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </guest>
Nov 22 08:53:03 compute-0 nova_compute[253661]: 
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <guest>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <os_type>hvm</os_type>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <arch name='x86_64'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <wordsize>64</wordsize>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <domain type='qemu'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <domain type='kvm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <acpi default='on' toggle='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <apic default='on' toggle='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <cpuselection/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <deviceboot/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <disksnapshot default='on' toggle='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <externalSnapshot/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </guest>
Nov 22 08:53:03 compute-0 nova_compute[253661]: 
Nov 22 08:53:03 compute-0 nova_compute[253661]: </capabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]: 
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.203 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.230 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 08:53:03 compute-0 nova_compute[253661]: <domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <domain>kvm</domain>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <arch>i686</arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <vcpu max='240'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <iothreads supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <os supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='firmware'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <loader supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>rom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pflash</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='readonly'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>yes</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='secure'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </loader>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </os>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='maximum' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='maximumMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-model' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <vendor>AMD</vendor>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='x2apic'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='stibp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='succor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lbrv'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='custom' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Dhyana-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-128'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-256'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-512'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <memoryBacking supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='sourceType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>anonymous</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>memfd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </memoryBacking>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <disk supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='diskDevice'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>disk</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cdrom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>floppy</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>lun</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ide</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>fdc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>sata</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <graphics supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vnc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egl-headless</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </graphics>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <video supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='modelType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vga</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cirrus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>none</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>bochs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ramfb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </video>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hostdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='mode'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>subsystem</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='startupPolicy'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>mandatory</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>requisite</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>optional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='subsysType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pci</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='capsType'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='pciBackend'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hostdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <rng supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>random</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <filesystem supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='driverType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>path</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>handle</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtiofs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </filesystem>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <tpm supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-tis</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-crb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emulator</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>external</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendVersion'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>2.0</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </tpm>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <redirdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </redirdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <channel supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </channel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <crypto supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </crypto>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <interface supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>passt</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <panic supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>isa</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>hyperv</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </panic>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <console supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>null</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dev</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pipe</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stdio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>udp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tcp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu-vdagent</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </console>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <gic supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <vmcoreinfo supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <genid supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backingStoreInput supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backup supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <async-teardown supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <ps2 supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sev supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sgx supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hyperv supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='features'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>relaxed</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vapic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>spinlocks</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vpindex</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>runtime</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>synic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stimer</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reset</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vendor_id</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>frequencies</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reenlightenment</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tlbflush</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ipi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>avic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emsr_bitmap</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>xmm_input</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <spinlocks>4095</spinlocks>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <stimer_direct>on</stimer_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hyperv>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <launchSecurity supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='sectype'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tdx</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </launchSecurity>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]: </domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 08:53:03 compute-0 nova_compute[253661]: <domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <domain>kvm</domain>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <arch>i686</arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <vcpu max='4096'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <iothreads supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <os supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='firmware'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <loader supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>rom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pflash</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='readonly'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>yes</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='secure'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </loader>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </os>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='maximum' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='maximumMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-model' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <vendor>AMD</vendor>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='x2apic'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='stibp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='succor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lbrv'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='custom' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Dhyana-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-128'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-256'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-512'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <memoryBacking supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='sourceType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>anonymous</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>memfd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </memoryBacking>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <disk supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='diskDevice'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>disk</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cdrom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>floppy</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>lun</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>fdc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>sata</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <graphics supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vnc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egl-headless</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </graphics>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <video supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='modelType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vga</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cirrus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>none</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>bochs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ramfb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </video>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hostdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='mode'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>subsystem</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='startupPolicy'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>mandatory</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>requisite</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>optional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='subsysType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pci</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='capsType'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='pciBackend'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hostdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <rng supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>random</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <filesystem supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='driverType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>path</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>handle</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtiofs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </filesystem>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <tpm supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-tis</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-crb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emulator</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>external</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendVersion'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>2.0</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </tpm>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <redirdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </redirdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <channel supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </channel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <crypto supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </crypto>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <interface supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>passt</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <panic supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>isa</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>hyperv</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </panic>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <console supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>null</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dev</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pipe</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stdio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>udp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tcp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu-vdagent</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </console>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <gic supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <vmcoreinfo supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <genid supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backingStoreInput supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backup supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <async-teardown supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <ps2 supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sev supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sgx supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hyperv supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='features'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>relaxed</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vapic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>spinlocks</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vpindex</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>runtime</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>synic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stimer</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reset</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vendor_id</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>frequencies</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reenlightenment</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tlbflush</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ipi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>avic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emsr_bitmap</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>xmm_input</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <spinlocks>4095</spinlocks>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <stimer_direct>on</stimer_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hyperv>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <launchSecurity supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='sectype'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tdx</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </launchSecurity>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]: </domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.268 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.276 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 08:53:03 compute-0 nova_compute[253661]: <domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <domain>kvm</domain>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <arch>x86_64</arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <vcpu max='240'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <iothreads supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <os supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='firmware'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <loader supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>rom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pflash</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='readonly'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>yes</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='secure'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </loader>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </os>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='maximum' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='maximumMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-model' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <vendor>AMD</vendor>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='x2apic'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='stibp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='succor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lbrv'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='custom' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Dhyana-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-128'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-256'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-512'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <memoryBacking supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='sourceType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>anonymous</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>memfd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </memoryBacking>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <disk supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='diskDevice'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>disk</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cdrom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>floppy</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>lun</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ide</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>fdc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>sata</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <graphics supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vnc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egl-headless</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </graphics>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <video supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='modelType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vga</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cirrus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>none</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>bochs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ramfb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </video>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hostdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='mode'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>subsystem</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='startupPolicy'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>mandatory</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>requisite</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>optional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='subsysType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pci</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='capsType'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='pciBackend'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hostdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <rng supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>random</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <filesystem supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='driverType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>path</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>handle</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtiofs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </filesystem>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <tpm supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-tis</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-crb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emulator</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>external</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendVersion'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>2.0</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </tpm>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <redirdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </redirdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <channel supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </channel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <crypto supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </crypto>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <interface supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>passt</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <panic supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>isa</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>hyperv</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </panic>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <console supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>null</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dev</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pipe</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stdio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>udp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tcp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu-vdagent</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </console>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <gic supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <vmcoreinfo supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <genid supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backingStoreInput supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backup supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <async-teardown supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <ps2 supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sev supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sgx supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hyperv supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='features'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>relaxed</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vapic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>spinlocks</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vpindex</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>runtime</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>synic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stimer</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reset</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vendor_id</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>frequencies</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reenlightenment</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tlbflush</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ipi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>avic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emsr_bitmap</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>xmm_input</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <spinlocks>4095</spinlocks>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <stimer_direct>on</stimer_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hyperv>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <launchSecurity supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='sectype'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tdx</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </launchSecurity>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]: </domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.339 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 08:53:03 compute-0 nova_compute[253661]: <domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <domain>kvm</domain>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <arch>x86_64</arch>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <vcpu max='4096'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <iothreads supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <os supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='firmware'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>efi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <loader supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>rom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pflash</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='readonly'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>yes</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='secure'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>yes</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>no</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </loader>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </os>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='maximum' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='maximumMigratable'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>on</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>off</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='host-model' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <vendor>AMD</vendor>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='x2apic'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='stibp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='succor'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lbrv'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <mode name='custom' supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Broadwell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Cooperlake-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Denverton-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Dhyana-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='auto-ibrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amd-psfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='no-nested-data-bp'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='null-sel-clr-base'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='stibp-always-on'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='EPYC-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-128'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-256'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx10-512'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='prefetchiti'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Haswell-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='IvyBridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='KnightsMill-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4fmaps'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-4vnniw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512er'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512pf'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fma4'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tbm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xop'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='amx-tile'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-bf16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-fp16'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bitalg'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vbmi2'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrc'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fzrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='la57'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='taa-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='tsx-ldtrk'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xfd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='SierraForest-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ifma'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-ne-convert'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx-vnni-int8'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='bus-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cmpccxadd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fbsdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='fsrs'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ibrs-all'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mcdt-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pbrsb-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='psdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='serialize'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vaes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='vpclmulqdq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='hle'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='rtm'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512bw'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512cd'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512dq'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512f'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='avx512vl'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='invpcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pcid'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='pku'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='mpx'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v2'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v3'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='core-capability'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='split-lock-detect'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='Snowridge-v4'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='cldemote'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='erms'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='gfni'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdir64b'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='movdiri'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='xsaves'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='athlon-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='core2duo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='coreduo-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='n270-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='ss'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <blockers model='phenom-v1'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnow'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <feature name='3dnowext'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </blockers>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </mode>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <memoryBacking supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <enum name='sourceType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>anonymous</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <value>memfd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </memoryBacking>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <disk supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='diskDevice'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>disk</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cdrom</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>floppy</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>lun</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>fdc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>sata</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <graphics supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vnc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egl-headless</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </graphics>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <video supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='modelType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vga</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>cirrus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>none</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>bochs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ramfb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </video>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hostdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='mode'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>subsystem</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='startupPolicy'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>mandatory</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>requisite</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>optional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='subsysType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pci</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>scsi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='capsType'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='pciBackend'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hostdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <rng supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtio-non-transitional</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>random</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>egd</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <filesystem supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='driverType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>path</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>handle</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>virtiofs</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </filesystem>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <tpm supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-tis</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tpm-crb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emulator</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>external</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendVersion'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>2.0</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </tpm>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <redirdev supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='bus'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>usb</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </redirdev>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <channel supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </channel>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <crypto supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendModel'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>builtin</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </crypto>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <interface supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='backendType'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>default</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>passt</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <panic supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='model'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>isa</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>hyperv</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </panic>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <console supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='type'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>null</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vc</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pty</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dev</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>file</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>pipe</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stdio</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>udp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tcp</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>unix</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>qemu-vdagent</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>dbus</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </console>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   <features>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <gic supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <vmcoreinfo supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <genid supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backingStoreInput supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <backup supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <async-teardown supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <ps2 supported='yes'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sev supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <sgx supported='no'/>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <hyperv supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='features'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>relaxed</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vapic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>spinlocks</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vpindex</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>runtime</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>synic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>stimer</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reset</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>vendor_id</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>frequencies</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>reenlightenment</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tlbflush</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>ipi</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>avic</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>emsr_bitmap</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>xmm_input</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <spinlocks>4095</spinlocks>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <stimer_direct>on</stimer_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </defaults>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </hyperv>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     <launchSecurity supported='yes'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       <enum name='sectype'>
Nov 22 08:53:03 compute-0 nova_compute[253661]:         <value>tdx</value>
Nov 22 08:53:03 compute-0 nova_compute[253661]:       </enum>
Nov 22 08:53:03 compute-0 nova_compute[253661]:     </launchSecurity>
Nov 22 08:53:03 compute-0 nova_compute[253661]:   </features>
Nov 22 08:53:03 compute-0 nova_compute[253661]: </domainCapabilities>
Nov 22 08:53:03 compute-0 nova_compute[253661]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.405 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Secure Boot support detected
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.409 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.410 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.422 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.452 253665 INFO nova.virt.node [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Determined node identity f0c5987a-d277-4022-aba2-19e7fecb4518 from /var/lib/nova/compute_id
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.467 253665 WARNING nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute nodes ['f0c5987a-d277-4022-aba2-19e7fecb4518'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.494 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.527 253665 WARNING nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.529 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:03 compute-0 keen_blackwell[254314]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:53:03 compute-0 keen_blackwell[254314]: --> relative data size: 1.0
Nov 22 08:53:03 compute-0 keen_blackwell[254314]: --> All data devices are unavailable
Nov 22 08:53:03 compute-0 systemd[1]: libpod-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Deactivated successfully.
Nov 22 08:53:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:53:03 compute-0 systemd[1]: libpod-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Consumed 1.058s CPU time.
Nov 22 08:53:03 compute-0 podman[254298]: 2025-11-22 08:53:03.942501548 +0000 UTC m=+1.278815217 container died f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 08:53:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111935792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:53:03 compute-0 nova_compute[253661]: 2025-11-22 08:53:03.978 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e-merged.mount: Deactivated successfully.
Nov 22 08:53:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1111935792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:53:04 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 08:53:04 compute-0 podman[254298]: 2025-11-22 08:53:04.037420859 +0000 UTC m=+1.373734528 container remove f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 08:53:04 compute-0 systemd[1]: libpod-conmon-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Deactivated successfully.
Nov 22 08:53:04 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 08:53:04 compute-0 sudo[254144]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:04 compute-0 sudo[254412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:04 compute-0 sudo[254412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:04 compute-0 sudo[254412]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:04 compute-0 sudo[254437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:53:04 compute-0 sudo[254437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:04 compute-0 sudo[254437]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:04 compute-0 sudo[254464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:04 compute-0 sudo[254464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:04 compute-0 sudo[254464]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.328 253665 WARNING nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.330 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.330 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:04 compute-0 sudo[254489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.331 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:04 compute-0 sudo[254489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.342 253665 WARNING nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] No compute node record for compute-0.ctlplane.example.com:f0c5987a-d277-4022-aba2-19e7fecb4518: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host f0c5987a-d277-4022-aba2-19e7fecb4518 could not be found.
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.357 253665 INFO nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: f0c5987a-d277-4022-aba2-19e7fecb4518
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.425 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:53:04 compute-0 nova_compute[253661]: 2025-11-22 08:53:04.425 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:53:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.656123809 +0000 UTC m=+0.025483507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.760139474 +0000 UTC m=+0.129499172 container create 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 08:53:04 compute-0 systemd[1]: Started libpod-conmon-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope.
Nov 22 08:53:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.877167069 +0000 UTC m=+0.246526787 container init 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.885711319 +0000 UTC m=+0.255071027 container start 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.889674957 +0000 UTC m=+0.259034685 container attach 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 08:53:04 compute-0 angry_shockley[254571]: 167 167
Nov 22 08:53:04 compute-0 systemd[1]: libpod-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope: Deactivated successfully.
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.892287281 +0000 UTC m=+0.261646979 container died 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 08:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0055add2d4c9f67f69ca8cea567759fe837662567c495813a849d049ba1a28c7-merged.mount: Deactivated successfully.
Nov 22 08:53:04 compute-0 podman[254555]: 2025-11-22 08:53:04.947841395 +0000 UTC m=+0.317201093 container remove 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 08:53:04 compute-0 systemd[1]: libpod-conmon-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope: Deactivated successfully.
Nov 22 08:53:05 compute-0 ceph-mon[75021]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:05 compute-0 podman[254595]: 2025-11-22 08:53:05.137083504 +0000 UTC m=+0.066470994 container create 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 08:53:05 compute-0 systemd[1]: Started libpod-conmon-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope.
Nov 22 08:53:05 compute-0 podman[254595]: 2025-11-22 08:53:05.095580145 +0000 UTC m=+0.024967665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:05 compute-0 podman[254595]: 2025-11-22 08:53:05.231459283 +0000 UTC m=+0.160846803 container init 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 08:53:05 compute-0 podman[254595]: 2025-11-22 08:53:05.239501121 +0000 UTC m=+0.168888611 container start 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:53:05 compute-0 podman[254595]: 2025-11-22 08:53:05.243387435 +0000 UTC m=+0.172774925 container attach 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:53:05 compute-0 nova_compute[253661]: 2025-11-22 08:53:05.248 253665 INFO nova.scheduler.client.report [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [req-877c3a5b-cb07-4895-8df9-ac0149f5dab8] Created resource provider record via placement API for resource provider with UUID f0c5987a-d277-4022-aba2-19e7fecb4518 and name compute-0.ctlplane.example.com.
Nov 22 08:53:05 compute-0 nova_compute[253661]: 2025-11-22 08:53:05.612 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:53:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370162445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]: {
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.064 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     "0": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "devices": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "/dev/loop3"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             ],
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_name": "ceph_lv0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_size": "21470642176",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "name": "ceph_lv0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "tags": {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_name": "ceph",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.crush_device_class": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.encrypted": "0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_id": "0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.vdo": "0"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             },
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "vg_name": "ceph_vg0"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         }
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     ],
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     "1": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "devices": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "/dev/loop4"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             ],
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_name": "ceph_lv1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_size": "21470642176",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "name": "ceph_lv1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "tags": {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_name": "ceph",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.crush_device_class": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.encrypted": "0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_id": "1",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.vdo": "0"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             },
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "vg_name": "ceph_vg1"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         }
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     ],
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     "2": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "devices": [
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "/dev/loop5"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             ],
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_name": "ceph_lv2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_size": "21470642176",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "name": "ceph_lv2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "tags": {
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.cluster_name": "ceph",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.crush_device_class": "",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.encrypted": "0",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osd_id": "2",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:                 "ceph.vdo": "0"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             },
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "type": "block",
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:             "vg_name": "ceph_vg2"
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:         }
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]:     ]
Nov 22 08:53:06 compute-0 condescending_elgamal[254611]: }
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.071 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 08:53:06 compute-0 nova_compute[253661]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.071 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] kernel doesn't support AMD SEV
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.072 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.072 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:53:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1370162445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:53:06 compute-0 systemd[1]: libpod-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope: Deactivated successfully.
Nov 22 08:53:06 compute-0 podman[254595]: 2025-11-22 08:53:06.106745315 +0000 UTC m=+1.036132805 container died 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4-merged.mount: Deactivated successfully.
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.148 253665 DEBUG nova.scheduler.client.report [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updated inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.149 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.150 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:53:06 compute-0 podman[254595]: 2025-11-22 08:53:06.191298702 +0000 UTC m=+1.120686192 container remove 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:53:06 compute-0 systemd[1]: libpod-conmon-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope: Deactivated successfully.
Nov 22 08:53:06 compute-0 sudo[254489]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:06 compute-0 sudo[254656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:06 compute-0 sudo[254656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:06 compute-0 sudo[254656]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:06 compute-0 sudo[254681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:53:06 compute-0 sudo[254681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:06 compute-0 sudo[254681]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.386 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.416 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.417 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.417 253665 DEBUG nova.service [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 22 08:53:06 compute-0 sudo[254706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:06 compute-0 sudo[254706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:06 compute-0 sudo[254706]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.471 253665 DEBUG nova.service [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 22 08:53:06 compute-0 nova_compute[253661]: 2025-11-22 08:53:06.472 253665 DEBUG nova.servicegroup.drivers.db [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 22 08:53:06 compute-0 sudo[254731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:53:06 compute-0 sudo[254731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.835409076 +0000 UTC m=+0.041202683 container create f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 08:53:06 compute-0 systemd[1]: Started libpod-conmon-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope.
Nov 22 08:53:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.817432765 +0000 UTC m=+0.023226412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.927050427 +0000 UTC m=+0.132844064 container init f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.936441098 +0000 UTC m=+0.142234715 container start f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 22 08:53:06 compute-0 busy_shockley[254813]: 167 167
Nov 22 08:53:06 compute-0 systemd[1]: libpod-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope: Deactivated successfully.
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.944055085 +0000 UTC m=+0.149848702 container attach f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:53:06 compute-0 conmon[254813]: conmon f090d080f8e88f1d58de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope/container/memory.events
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.944970168 +0000 UTC m=+0.150763785 container died f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 08:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b7af3c9f36e0a3be90e80abc25ab5d7ee33993645e27a72bf5164f1001e523d-merged.mount: Deactivated successfully.
Nov 22 08:53:06 compute-0 podman[254797]: 2025-11-22 08:53:06.986385015 +0000 UTC m=+0.192178632 container remove f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:53:06 compute-0 systemd[1]: libpod-conmon-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope: Deactivated successfully.
Nov 22 08:53:07 compute-0 ceph-mon[75021]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:07 compute-0 podman[254835]: 2025-11-22 08:53:07.180793881 +0000 UTC m=+0.059617125 container create 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 08:53:07 compute-0 systemd[1]: Started libpod-conmon-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope.
Nov 22 08:53:07 compute-0 podman[254835]: 2025-11-22 08:53:07.158588935 +0000 UTC m=+0.037412199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:53:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:07 compute-0 podman[254835]: 2025-11-22 08:53:07.277570208 +0000 UTC m=+0.156393452 container init 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 08:53:07 compute-0 podman[254835]: 2025-11-22 08:53:07.284503728 +0000 UTC m=+0.163326962 container start 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 22 08:53:07 compute-0 podman[254835]: 2025-11-22 08:53:07.288188599 +0000 UTC m=+0.167011833 container attach 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]: {
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_id": 1,
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "type": "bluestore"
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     },
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_id": 0,
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "type": "bluestore"
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     },
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_id": 2,
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:         "type": "bluestore"
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]:     }
Nov 22 08:53:08 compute-0 serene_mcnulty[254851]: }
Nov 22 08:53:08 compute-0 systemd[1]: libpod-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Deactivated successfully.
Nov 22 08:53:08 compute-0 podman[254835]: 2025-11-22 08:53:08.364028048 +0000 UTC m=+1.242851282 container died 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:53:08 compute-0 systemd[1]: libpod-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Consumed 1.073s CPU time.
Nov 22 08:53:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae-merged.mount: Deactivated successfully.
Nov 22 08:53:08 compute-0 podman[254835]: 2025-11-22 08:53:08.704844061 +0000 UTC m=+1.583667295 container remove 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 08:53:08 compute-0 systemd[1]: libpod-conmon-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Deactivated successfully.
Nov 22 08:53:08 compute-0 sudo[254731]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:53:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:53:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7e977a2a-a0f2-48cb-8a16-9a7620a7dd0f does not exist
Nov 22 08:53:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d4e26ff9-d998-485e-9dc9-cdb7efc3717b does not exist
Nov 22 08:53:08 compute-0 sudo[254896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:53:08 compute-0 sudo[254896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:08 compute-0 sudo[254896]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:08 compute-0 sudo[254921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:53:08 compute-0 sudo[254921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:53:08 compute-0 sudo[254921]: pam_unix(sudo:session): session closed for user root
Nov 22 08:53:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:09 compute-0 ceph-mon[75021]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:53:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:11 compute-0 ceph-mon[75021]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:13 compute-0 ceph-mon[75021]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:14 compute-0 podman[254946]: 2025-11-22 08:53:14.385648348 +0000 UTC m=+0.068136875 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:14 compute-0 podman[254947]: 2025-11-22 08:53:14.391035641 +0000 UTC m=+0.074160213 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:53:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:15 compute-0 ceph-mon[75021]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:16 compute-0 podman[254985]: 2025-11-22 08:53:16.410240265 +0000 UTC m=+0.102025037 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:53:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:17 compute-0 ceph-mon[75021]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:19 compute-0 ceph-mon[75021]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:20 compute-0 ceph-mon[75021]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:23 compute-0 ceph-mon[75021]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:53:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:53:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:53:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:53:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:53:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:53:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:53:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:53:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:26 compute-0 ceph-mon[75021]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:29 compute-0 ceph-mon[75021]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:31 compute-0 ceph-mon[75021]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:33 compute-0 ceph-mon[75021]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:35 compute-0 ceph-mon[75021]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:36 compute-0 ceph-mon[75021]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:39 compute-0 nova_compute[253661]: 2025-11-22 08:53:39.474 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:39 compute-0 nova_compute[253661]: 2025-11-22 08:53:39.491 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:39 compute-0 ceph-mon[75021]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:41 compute-0 ceph-mon[75021]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:43 compute-0 ceph-mon[75021]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:45 compute-0 podman[255012]: 2025-11-22 08:53:45.375952547 +0000 UTC m=+0.062414035 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 08:53:45 compute-0 podman[255011]: 2025-11-22 08:53:45.392005786 +0000 UTC m=+0.080821143 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:53:45 compute-0 ceph-mon[75021]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:46 compute-0 ceph-mon[75021]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:47 compute-0 podman[255047]: 2025-11-22 08:53:47.398437831 +0000 UTC m=+0.093641102 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 08:53:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 08:53:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5753 writes, 24K keys, 5753 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5753 writes, 944 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 08:53:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:50 compute-0 ceph-mon[75021]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:51 compute-0 ceph-mon[75021]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:53:52
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:53:52 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:53:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:53:54 compute-0 ceph-mon[75021]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 08:53:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Cumulative writes: 6777 writes, 28K keys, 6777 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6777 writes, 1219 syncs, 5.56 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 08:53:55 compute-0 ceph-mon[75021]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:57 compute-0 ceph-mon[75021]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:53:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:53:59 compute-0 ceph-mon[75021]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 08:54:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5671 writes, 24K keys, 5671 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5671 writes, 874 syncs, 6.49 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 08:54:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.281 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.283 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:54:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:54:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597109270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.739 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:54:01 compute-0 ceph-mon[75021]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2597109270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.908 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.909 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5241MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.910 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.910 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.982 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.983 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:54:01 compute-0 nova_compute[253661]: 2025-11-22 08:54:01.999 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 08:54:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:54:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822899044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:54:02 compute-0 nova_compute[253661]: 2025-11-22 08:54:02.436 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:54:02 compute-0 nova_compute[253661]: 2025-11-22 08:54:02.446 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:54:02 compute-0 nova_compute[253661]: 2025-11-22 08:54:02.461 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:54:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:02 compute-0 nova_compute[253661]: 2025-11-22 08:54:02.620 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:54:02 compute-0 nova_compute[253661]: 2025-11-22 08:54:02.620 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:54:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/822899044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:54:03 compute-0 ceph-mon[75021]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:05 compute-0 ceph-mon[75021]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 08:54:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693235960' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 08:54:06 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 08:54:06 compute-0 ceph-mgr[75315]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 08:54:06 compute-0 ceph-mgr[75315]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 08:54:07 compute-0 ceph-mon[75021]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/693235960' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 08:54:07 compute-0 ceph-mon[75021]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 08:54:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:08 compute-0 sudo[255117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:08 compute-0 sudo[255117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:08 compute-0 sudo[255117]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:09 compute-0 sudo[255142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:54:09 compute-0 sudo[255142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:09 compute-0 sudo[255142]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:09 compute-0 sudo[255167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:09 compute-0 sudo[255167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:09 compute-0 sudo[255167]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:09 compute-0 sudo[255192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 08:54:09 compute-0 sudo[255192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:09 compute-0 ceph-mon[75021]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:09 compute-0 sudo[255192]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:54:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:54:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:10 compute-0 sudo[255236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:10 compute-0 sudo[255236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:10 compute-0 sudo[255236]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:10 compute-0 sudo[255261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:54:10 compute-0 sudo[255261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:10 compute-0 sudo[255261]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:10 compute-0 sudo[255286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:10 compute-0 sudo[255286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:10 compute-0 sudo[255286]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:10 compute-0 sudo[255311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:54:10 compute-0 sudo[255311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:10 compute-0 sudo[255311]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:54:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:54:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:54:10 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:54:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:54:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:11 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:11 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 208dda32-f250-4715-8e50-143c19ffc685 does not exist
Nov 22 08:54:11 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2585fbd6-39f7-4e53-80f5-d3508e0e09fb does not exist
Nov 22 08:54:11 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 72d6b1c9-d24f-481f-96f9-a513f736ca8b does not exist
Nov 22 08:54:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:54:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:54:11 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:54:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.045940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651045970, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1605, "num_deletes": 251, "total_data_size": 2620860, "memory_usage": 2653984, "flush_reason": "Manual Compaction"}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 22 08:54:11 compute-0 sudo[255367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:11 compute-0 sudo[255367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:11 compute-0 sudo[255367]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651123390, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2574802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14928, "largest_seqno": 16532, "table_properties": {"data_size": 2567379, "index_size": 4430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14985, "raw_average_key_size": 19, "raw_value_size": 2552545, "raw_average_value_size": 3354, "num_data_blocks": 202, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801475, "oldest_key_time": 1763801475, "file_creation_time": 1763801651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 77501 microseconds, and 6636 cpu microseconds.
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.123438) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2574802 bytes OK
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.123458) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 22 08:54:11 compute-0 sudo[255392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164405) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164464) EVENT_LOG_v1 {"time_micros": 1763801651164451, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2613910, prev total WAL file size 2613910, number of live WAL files 2.
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.165610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2514KB)], [35(7056KB)]
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651165705, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9800448, "oldest_snapshot_seqno": -1}
Nov 22 08:54:11 compute-0 sudo[255392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:11 compute-0 sudo[255392]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:11 compute-0 sudo[255417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:11 compute-0 sudo[255417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:11 compute-0 sudo[255417]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:11 compute-0 sudo[255442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4098 keys, 7998637 bytes, temperature: kUnknown
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651287187, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7998637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7968373, "index_size": 18890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100049, "raw_average_key_size": 24, "raw_value_size": 7891400, "raw_average_value_size": 1925, "num_data_blocks": 797, "num_entries": 4098, "num_filter_entries": 4098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:54:11 compute-0 sudo[255442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.287499) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7998637 bytes
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.290528) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.7 rd, 65.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 6.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 4612, records dropped: 514 output_compression: NoCompression
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.290550) EVENT_LOG_v1 {"time_micros": 1763801651290540, "job": 16, "event": "compaction_finished", "compaction_time_micros": 121429, "compaction_time_cpu_micros": 20763, "output_level": 6, "num_output_files": 1, "total_output_size": 7998637, "num_input_records": 4612, "num_output_records": 4098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651291046, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651292306, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.165490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:54:11 compute-0 podman[255507]: 2025-11-22 08:54:11.656886664 +0000 UTC m=+0.071346137 container create a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:54:11 compute-0 podman[255507]: 2025-11-22 08:54:11.607778512 +0000 UTC m=+0.022237995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:11 compute-0 ceph-mon[75021]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:54:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:54:11 compute-0 systemd[1]: Started libpod-conmon-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope.
Nov 22 08:54:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:11 compute-0 podman[255507]: 2025-11-22 08:54:11.954877201 +0000 UTC m=+0.369336694 container init a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:54:11 compute-0 podman[255507]: 2025-11-22 08:54:11.962482639 +0000 UTC m=+0.376942112 container start a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:54:11 compute-0 zealous_volhard[255523]: 167 167
Nov 22 08:54:11 compute-0 systemd[1]: libpod-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope: Deactivated successfully.
Nov 22 08:54:12 compute-0 podman[255507]: 2025-11-22 08:54:12.110030901 +0000 UTC m=+0.524490394 container attach a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 08:54:12 compute-0 podman[255507]: 2025-11-22 08:54:12.110615617 +0000 UTC m=+0.525075100 container died a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eab33910bc65b42a77fa232cd0df77856347cff9068e2998846ba309a9d2991-merged.mount: Deactivated successfully.
Nov 22 08:54:12 compute-0 podman[255507]: 2025-11-22 08:54:12.261635495 +0000 UTC m=+0.676094988 container remove a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 08:54:12 compute-0 systemd[1]: libpod-conmon-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope: Deactivated successfully.
Nov 22 08:54:12 compute-0 podman[255547]: 2025-11-22 08:54:12.436553588 +0000 UTC m=+0.052491277 container create 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 08:54:12 compute-0 systemd[1]: Started libpod-conmon-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope.
Nov 22 08:54:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:12 compute-0 podman[255547]: 2025-11-22 08:54:12.412566672 +0000 UTC m=+0.028504381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:12 compute-0 podman[255547]: 2025-11-22 08:54:12.53429345 +0000 UTC m=+0.150231139 container init 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:54:12 compute-0 podman[255547]: 2025-11-22 08:54:12.542205047 +0000 UTC m=+0.158142736 container start 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:54:12 compute-0 podman[255547]: 2025-11-22 08:54:12.554247578 +0000 UTC m=+0.170185267 container attach 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:12 compute-0 ceph-mon[75021]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:13 compute-0 loving_ptolemy[255563]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:54:13 compute-0 loving_ptolemy[255563]: --> relative data size: 1.0
Nov 22 08:54:13 compute-0 loving_ptolemy[255563]: --> All data devices are unavailable
Nov 22 08:54:13 compute-0 systemd[1]: libpod-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Deactivated successfully.
Nov 22 08:54:13 compute-0 systemd[1]: libpod-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Consumed 1.063s CPU time.
Nov 22 08:54:13 compute-0 podman[255547]: 2025-11-22 08:54:13.670433777 +0000 UTC m=+1.286371466 container died 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917-merged.mount: Deactivated successfully.
Nov 22 08:54:14 compute-0 podman[255547]: 2025-11-22 08:54:14.03945265 +0000 UTC m=+1.655390339 container remove 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:14 compute-0 systemd[1]: libpod-conmon-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Deactivated successfully.
Nov 22 08:54:14 compute-0 sudo[255442]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:14 compute-0 sudo[255604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:14 compute-0 sudo[255604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:14 compute-0 sudo[255604]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:14 compute-0 sudo[255629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:54:14 compute-0 sudo[255629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:14 compute-0 sudo[255629]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:14 compute-0 sudo[255654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:14 compute-0 sudo[255654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:14 compute-0 sudo[255654]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:14 compute-0 sudo[255679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:54:14 compute-0 sudo[255679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.734045098 +0000 UTC m=+0.105757184 container create e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.658052926 +0000 UTC m=+0.029765042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:14 compute-0 systemd[1]: Started libpod-conmon-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope.
Nov 22 08:54:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.899131895 +0000 UTC m=+0.270843981 container init e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.905846702 +0000 UTC m=+0.277558788 container start e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:54:14 compute-0 interesting_curie[255759]: 167 167
Nov 22 08:54:14 compute-0 systemd[1]: libpod-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope: Deactivated successfully.
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.930945327 +0000 UTC m=+0.302657413 container attach e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 08:54:14 compute-0 podman[255743]: 2025-11-22 08:54:14.931947022 +0000 UTC m=+0.303659108 container died e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aff8254085b606c2eb096f4258fef9369185c5860e30281b09a10b6fd4155905-merged.mount: Deactivated successfully.
Nov 22 08:54:15 compute-0 podman[255743]: 2025-11-22 08:54:15.208023433 +0000 UTC m=+0.579735549 container remove e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:15 compute-0 systemd[1]: libpod-conmon-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope: Deactivated successfully.
Nov 22 08:54:15 compute-0 podman[255783]: 2025-11-22 08:54:15.434752816 +0000 UTC m=+0.072090665 container create c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:54:15 compute-0 podman[255783]: 2025-11-22 08:54:15.39152165 +0000 UTC m=+0.028859489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:15 compute-0 systemd[1]: Started libpod-conmon-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope.
Nov 22 08:54:15 compute-0 podman[255797]: 2025-11-22 08:54:15.568446124 +0000 UTC m=+0.096562984 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 08:54:15 compute-0 podman[255798]: 2025-11-22 08:54:15.576040633 +0000 UTC m=+0.103877226 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:54:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:15 compute-0 podman[255783]: 2025-11-22 08:54:15.722185069 +0000 UTC m=+0.359522978 container init c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 08:54:15 compute-0 podman[255783]: 2025-11-22 08:54:15.731656965 +0000 UTC m=+0.368994784 container start c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:54:15 compute-0 ceph-mon[75021]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:15 compute-0 podman[255783]: 2025-11-22 08:54:15.747966862 +0000 UTC m=+0.385304681 container attach c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]: {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     "0": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "devices": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "/dev/loop3"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             ],
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_name": "ceph_lv0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_size": "21470642176",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "name": "ceph_lv0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "tags": {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_name": "ceph",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.crush_device_class": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.encrypted": "0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_id": "0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.vdo": "0"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             },
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "vg_name": "ceph_vg0"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         }
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     ],
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     "1": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "devices": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "/dev/loop4"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             ],
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_name": "ceph_lv1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_size": "21470642176",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "name": "ceph_lv1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "tags": {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_name": "ceph",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.crush_device_class": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.encrypted": "0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_id": "1",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.vdo": "0"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             },
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "vg_name": "ceph_vg1"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         }
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     ],
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     "2": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "devices": [
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "/dev/loop5"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             ],
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_name": "ceph_lv2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_size": "21470642176",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "name": "ceph_lv2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "tags": {
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.cluster_name": "ceph",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.crush_device_class": "",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.encrypted": "0",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osd_id": "2",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:                 "ceph.vdo": "0"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             },
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "type": "block",
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:             "vg_name": "ceph_vg2"
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:         }
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]:     ]
Nov 22 08:54:16 compute-0 sleepy_kalam[255828]: }
Nov 22 08:54:16 compute-0 systemd[1]: libpod-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope: Deactivated successfully.
Nov 22 08:54:16 compute-0 podman[255783]: 2025-11-22 08:54:16.580972233 +0000 UTC m=+1.218310052 container died c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:54:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:16 compute-0 ceph-mon[75021]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925-merged.mount: Deactivated successfully.
Nov 22 08:54:17 compute-0 podman[255783]: 2025-11-22 08:54:17.26230226 +0000 UTC m=+1.899640079 container remove c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:54:17 compute-0 sudo[255679]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:17 compute-0 systemd[1]: libpod-conmon-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope: Deactivated successfully.
Nov 22 08:54:17 compute-0 sudo[255856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:17 compute-0 sudo[255856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:17 compute-0 sudo[255856]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:17 compute-0 sudo[255881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:54:17 compute-0 sudo[255881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:17 compute-0 sudo[255881]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:17 compute-0 sudo[255907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:17 compute-0 sudo[255907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:17 compute-0 sudo[255907]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:17 compute-0 podman[255905]: 2025-11-22 08:54:17.584537769 +0000 UTC m=+0.101589038 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:17 compute-0 sudo[255952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:54:17 compute-0 sudo[255952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:17.957773039 +0000 UTC m=+0.026297806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.144951567 +0000 UTC m=+0.213476314 container create 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 08:54:18 compute-0 systemd[1]: Started libpod-conmon-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope.
Nov 22 08:54:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.352960144 +0000 UTC m=+0.421484911 container init 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.36405903 +0000 UTC m=+0.432583767 container start 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:54:18 compute-0 sweet_faraday[256040]: 167 167
Nov 22 08:54:18 compute-0 systemd[1]: libpod-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope: Deactivated successfully.
Nov 22 08:54:18 compute-0 conmon[256040]: conmon 4e18c9661c3c7569fe26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope/container/memory.events
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.400728402 +0000 UTC m=+0.469253169 container attach 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.401203485 +0000 UTC m=+0.469728232 container died 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:54:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b3ceea982e801f663247f4f9645e083195d1c5c4ba29703a0829073e99a874-merged.mount: Deactivated successfully.
Nov 22 08:54:18 compute-0 podman[256023]: 2025-11-22 08:54:18.964614347 +0000 UTC m=+1.033139114 container remove 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 08:54:19 compute-0 systemd[1]: libpod-conmon-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope: Deactivated successfully.
Nov 22 08:54:19 compute-0 podman[256064]: 2025-11-22 08:54:19.197925143 +0000 UTC m=+0.115999298 container create 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 08:54:19 compute-0 podman[256064]: 2025-11-22 08:54:19.107993215 +0000 UTC m=+0.026067390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:54:19 compute-0 systemd[1]: Started libpod-conmon-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope.
Nov 22 08:54:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:54:19 compute-0 podman[256064]: 2025-11-22 08:54:19.404138015 +0000 UTC m=+0.322212200 container init 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 08:54:19 compute-0 podman[256064]: 2025-11-22 08:54:19.414605426 +0000 UTC m=+0.332679581 container start 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 08:54:19 compute-0 podman[256064]: 2025-11-22 08:54:19.630652723 +0000 UTC m=+0.548726908 container attach 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 08:54:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:19 compute-0 ceph-mon[75021]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]: {
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_id": 1,
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "type": "bluestore"
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     },
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_id": 0,
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "type": "bluestore"
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     },
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_id": 2,
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:         "type": "bluestore"
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]:     }
Nov 22 08:54:20 compute-0 angry_zhukovsky[256081]: }
Nov 22 08:54:20 compute-0 systemd[1]: libpod-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Deactivated successfully.
Nov 22 08:54:20 compute-0 podman[256064]: 2025-11-22 08:54:20.472463712 +0000 UTC m=+1.390537877 container died 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:20 compute-0 systemd[1]: libpod-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Consumed 1.055s CPU time.
Nov 22 08:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221-merged.mount: Deactivated successfully.
Nov 22 08:54:20 compute-0 podman[256064]: 2025-11-22 08:54:20.584339367 +0000 UTC m=+1.502413522 container remove 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:54:20 compute-0 systemd[1]: libpod-conmon-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Deactivated successfully.
Nov 22 08:54:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:20 compute-0 sudo[255952]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:54:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:54:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 110a56f2-7ff6-4c85-99a5-05fb17b87f64 does not exist
Nov 22 08:54:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a260e74e-fde6-46fc-811d-4969dc5b8d68 does not exist
Nov 22 08:54:20 compute-0 sudo[256128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:54:20 compute-0 sudo[256128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:20 compute-0 sudo[256128]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:20 compute-0 sudo[256153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:54:20 compute-0 sudo[256153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:54:20 compute-0 sudo[256153]: pam_unix(sudo:session): session closed for user root
Nov 22 08:54:21 compute-0 ceph-mon[75021]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:54:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 08:54:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2974503250' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 08:54:21 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 08:54:21 compute-0 ceph-mgr[75315]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 08:54:21 compute-0 ceph-mgr[75315]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2974503250' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:23 compute-0 ceph-mon[75021]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 08:54:23 compute-0 ceph-mon[75021]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:25 compute-0 ceph-mon[75021]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:27 compute-0 ceph-mon[75021]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.936 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.937 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:54:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:29 compute-0 ceph-mon[75021]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:31 compute-0 ceph-mon[75021]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:33 compute-0 ceph-mon[75021]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:35 compute-0 ceph-mon[75021]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:37 compute-0 ceph-mon[75021]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:40 compute-0 ceph-mon[75021]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:41 compute-0 ceph-mon[75021]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:43 compute-0 ceph-mon[75021]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:45 compute-0 ceph-mon[75021]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:46 compute-0 podman[256178]: 2025-11-22 08:54:46.388597531 +0000 UTC m=+0.074260959 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:54:46 compute-0 podman[256179]: 2025-11-22 08:54:46.390568069 +0000 UTC m=+0.074853003 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 08:54:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:47 compute-0 ceph-mon[75021]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:48 compute-0 podman[256215]: 2025-11-22 08:54:48.400923023 +0000 UTC m=+0.090883483 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:54:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:49 compute-0 ceph-mon[75021]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:51 compute-0 ceph-mon[75021]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:54:52
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'backups', 'volumes', 'default.rgw.log']
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:54:53 compute-0 ceph-mon[75021]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:54:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:54:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:54:55 compute-0 ceph-mon[75021]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:57 compute-0 ceph-mon[75021]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:54:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:00 compute-0 ceph-mon[75021]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:01 compute-0 ceph-mon[75021]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.615 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.631 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.631 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.654 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.654 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:55:02 compute-0 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:55:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:55:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050746616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.160 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.336 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.338 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.339 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.339 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.411 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.412 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.431 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:55:03 compute-0 ceph-mon[75021]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2050746616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:55:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:55:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883544117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.973 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.979 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.996 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.998 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:55:03 compute-0 nova_compute[253661]: 2025-11-22 08:55:03.998 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.595 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:04 compute-0 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1883544117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:55:05 compute-0 ceph-mon[75021]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:07 compute-0 ceph-mon[75021]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:09 compute-0 ceph-mon[75021]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:10 compute-0 ceph-mon[75021]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:55:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:55:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:55:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:55:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:55:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:55:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.755 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:55:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.756 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:55:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.758 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:55:13 compute-0 ceph-mon[75021]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:15 compute-0 ceph-mon[75021]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:17 compute-0 podman[256286]: 2025-11-22 08:55:17.372661409 +0000 UTC m=+0.062114147 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:55:17 compute-0 podman[256287]: 2025-11-22 08:55:17.376199026 +0000 UTC m=+0.065192383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 08:55:17 compute-0 ceph-mon[75021]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:19 compute-0 ceph-mon[75021]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:19 compute-0 podman[256326]: 2025-11-22 08:55:19.422635086 +0000 UTC m=+0.119826423 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:55:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:20 compute-0 sudo[256352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:20 compute-0 sudo[256352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:20 compute-0 sudo[256352]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 sudo[256377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:55:21 compute-0 sudo[256377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256377]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 sudo[256402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:21 compute-0 sudo[256402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256402]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 sudo[256427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:55:21 compute-0 sudo[256427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256427]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:21 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a0e49f75-f232-4330-bdbe-07c6f72cc35a does not exist
Nov 22 08:55:21 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1314c2df-0bd4-41fc-9292-ed298993c7eb does not exist
Nov 22 08:55:21 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c3943bdb-3b4f-4c4c-a524-676bb0bcd2b4 does not exist
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:55:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:55:21 compute-0 sudo[256483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:21 compute-0 sudo[256483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256483]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 ceph-mon[75021]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:55:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:55:21 compute-0 sudo[256508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:55:21 compute-0 sudo[256508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256508]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 sudo[256533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:21 compute-0 sudo[256533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:21 compute-0 sudo[256533]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:21 compute-0 sudo[256558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:55:21 compute-0 sudo[256558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.318177628 +0000 UTC m=+0.028054729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.423814628 +0000 UTC m=+0.133691709 container create c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 08:55:22 compute-0 systemd[1]: Started libpod-conmon-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope.
Nov 22 08:55:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.539056946 +0000 UTC m=+0.248934057 container init c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.549748031 +0000 UTC m=+0.259625122 container start c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 08:55:22 compute-0 quizzical_banzai[256641]: 167 167
Nov 22 08:55:22 compute-0 systemd[1]: libpod-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope: Deactivated successfully.
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.591292655 +0000 UTC m=+0.301169766 container attach c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.592515466 +0000 UTC m=+0.302392547 container died c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 08:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb04fae261fe17baf0a51d34efbd35d9a21731b83d2ac5dca3569ff21a73d4f-merged.mount: Deactivated successfully.
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:22 compute-0 podman[256625]: 2025-11-22 08:55:22.817638889 +0000 UTC m=+0.527515970 container remove c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 08:55:22 compute-0 systemd[1]: libpod-conmon-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope: Deactivated successfully.
Nov 22 08:55:23 compute-0 podman[256665]: 2025-11-22 08:55:23.011187046 +0000 UTC m=+0.051287748 container create d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:55:23 compute-0 systemd[1]: Started libpod-conmon-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope.
Nov 22 08:55:23 compute-0 podman[256665]: 2025-11-22 08:55:22.990234194 +0000 UTC m=+0.030334896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:23 compute-0 podman[256665]: 2025-11-22 08:55:23.119247645 +0000 UTC m=+0.159348367 container init d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 08:55:23 compute-0 podman[256665]: 2025-11-22 08:55:23.128173448 +0000 UTC m=+0.168274150 container start d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 08:55:23 compute-0 podman[256665]: 2025-11-22 08:55:23.136442923 +0000 UTC m=+0.176543695 container attach d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:55:23 compute-0 ceph-mon[75021]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 08:55:24 compute-0 tender_nightingale[256681]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:55:24 compute-0 tender_nightingale[256681]: --> relative data size: 1.0
Nov 22 08:55:24 compute-0 tender_nightingale[256681]: --> All data devices are unavailable
Nov 22 08:55:24 compute-0 systemd[1]: libpod-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Deactivated successfully.
Nov 22 08:55:24 compute-0 podman[256665]: 2025-11-22 08:55:24.361286206 +0000 UTC m=+1.401386928 container died d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:55:24 compute-0 systemd[1]: libpod-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Consumed 1.102s CPU time.
Nov 22 08:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821-merged.mount: Deactivated successfully.
Nov 22 08:55:24 compute-0 podman[256665]: 2025-11-22 08:55:24.437713079 +0000 UTC m=+1.477813781 container remove d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 08:55:24 compute-0 systemd[1]: libpod-conmon-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Deactivated successfully.
Nov 22 08:55:24 compute-0 sudo[256558]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:24 compute-0 sudo[256721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:24 compute-0 sudo[256721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:24 compute-0 sudo[256721]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:24 compute-0 sudo[256746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:55:24 compute-0 sudo[256746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:24 compute-0 sudo[256746]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 08:55:24 compute-0 sudo[256771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:24 compute-0 sudo[256771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:24 compute-0 sudo[256771]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:24 compute-0 sudo[256796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:55:24 compute-0 sudo[256796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:24 compute-0 ceph-mon[75021]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.094755471 +0000 UTC m=+0.040684174 container create fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:55:25 compute-0 systemd[1]: Started libpod-conmon-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope.
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.07740863 +0000 UTC m=+0.023337363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.202535223 +0000 UTC m=+0.148463946 container init fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.212119022 +0000 UTC m=+0.158047725 container start fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 08:55:25 compute-0 angry_sammet[256878]: 167 167
Nov 22 08:55:25 compute-0 systemd[1]: libpod-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope: Deactivated successfully.
Nov 22 08:55:25 compute-0 conmon[256878]: conmon fcb2bdb64893895d278b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope/container/memory.events
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.229976527 +0000 UTC m=+0.175905250 container attach fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.230425227 +0000 UTC m=+0.176353930 container died fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f798cbf963aef6890f985f06a32eff95d76d6a230efd4c6f199031c7f2a77036-merged.mount: Deactivated successfully.
Nov 22 08:55:25 compute-0 podman[256861]: 2025-11-22 08:55:25.29360963 +0000 UTC m=+0.239538333 container remove fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:55:25 compute-0 systemd[1]: libpod-conmon-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope: Deactivated successfully.
Nov 22 08:55:25 compute-0 podman[256902]: 2025-11-22 08:55:25.474075271 +0000 UTC m=+0.045681518 container create 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 08:55:25 compute-0 systemd[1]: Started libpod-conmon-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope.
Nov 22 08:55:25 compute-0 podman[256902]: 2025-11-22 08:55:25.454955985 +0000 UTC m=+0.026562252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:25 compute-0 podman[256902]: 2025-11-22 08:55:25.578859189 +0000 UTC m=+0.150465456 container init 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:55:25 compute-0 podman[256902]: 2025-11-22 08:55:25.587646898 +0000 UTC m=+0.159253145 container start 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:55:25 compute-0 podman[256902]: 2025-11-22 08:55:25.592139689 +0000 UTC m=+0.163745936 container attach 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 08:55:26 compute-0 sleepy_williams[256919]: {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     "0": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "devices": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "/dev/loop3"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             ],
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_name": "ceph_lv0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_size": "21470642176",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "name": "ceph_lv0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "tags": {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_name": "ceph",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.crush_device_class": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.encrypted": "0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_id": "0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.vdo": "0"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             },
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "vg_name": "ceph_vg0"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         }
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     ],
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     "1": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "devices": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "/dev/loop4"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             ],
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_name": "ceph_lv1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_size": "21470642176",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "name": "ceph_lv1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "tags": {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_name": "ceph",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.crush_device_class": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.encrypted": "0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_id": "1",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.vdo": "0"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             },
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "vg_name": "ceph_vg1"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         }
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     ],
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     "2": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "devices": [
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "/dev/loop5"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             ],
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_name": "ceph_lv2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_size": "21470642176",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "name": "ceph_lv2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "tags": {
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.cluster_name": "ceph",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.crush_device_class": "",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.encrypted": "0",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osd_id": "2",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:                 "ceph.vdo": "0"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             },
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "type": "block",
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:             "vg_name": "ceph_vg2"
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:         }
Nov 22 08:55:26 compute-0 sleepy_williams[256919]:     ]
Nov 22 08:55:26 compute-0 sleepy_williams[256919]: }
Nov 22 08:55:26 compute-0 systemd[1]: libpod-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope: Deactivated successfully.
Nov 22 08:55:26 compute-0 podman[256902]: 2025-11-22 08:55:26.475363101 +0000 UTC m=+1.046969348 container died 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 08:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772-merged.mount: Deactivated successfully.
Nov 22 08:55:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 08:55:26 compute-0 podman[256902]: 2025-11-22 08:55:26.689246694 +0000 UTC m=+1.260852941 container remove 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 08:55:26 compute-0 systemd[1]: libpod-conmon-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope: Deactivated successfully.
Nov 22 08:55:26 compute-0 sudo[256796]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:26 compute-0 sudo[256940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:26 compute-0 sudo[256940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:26 compute-0 sudo[256940]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:26 compute-0 sudo[256965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:55:26 compute-0 sudo[256965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:26 compute-0 sudo[256965]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:26 compute-0 sudo[256990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:26 compute-0 sudo[256990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:26 compute-0 sudo[256990]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:27 compute-0 sudo[257015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:55:27 compute-0 sudo[257015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:27 compute-0 podman[257081]: 2025-11-22 08:55:27.33155364 +0000 UTC m=+0.023394944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:27 compute-0 podman[257081]: 2025-11-22 08:55:27.741983854 +0000 UTC m=+0.433825138 container create c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.937 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:28 compute-0 ceph-mon[75021]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 08:55:28 compute-0 systemd[1]: Started libpod-conmon-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope.
Nov 22 08:55:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:28 compute-0 podman[257081]: 2025-11-22 08:55:28.28344436 +0000 UTC m=+0.975285664 container init c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 08:55:28 compute-0 podman[257081]: 2025-11-22 08:55:28.291651104 +0000 UTC m=+0.983492408 container start c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:55:28 compute-0 beautiful_khayyam[257097]: 167 167
Nov 22 08:55:28 compute-0 systemd[1]: libpod-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope: Deactivated successfully.
Nov 22 08:55:28 compute-0 podman[257081]: 2025-11-22 08:55:28.319289882 +0000 UTC m=+1.011131166 container attach c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:55:28 compute-0 podman[257081]: 2025-11-22 08:55:28.320106262 +0000 UTC m=+1.011947546 container died c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 08:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f50b66920f706a270db322b34a73296796aa4ee6cdab5c29f0001e14657bb314-merged.mount: Deactivated successfully.
Nov 22 08:55:28 compute-0 podman[257081]: 2025-11-22 08:55:28.515117635 +0000 UTC m=+1.206958959 container remove c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 08:55:28 compute-0 systemd[1]: libpod-conmon-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope: Deactivated successfully.
Nov 22 08:55:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:28 compute-0 podman[257123]: 2025-11-22 08:55:28.732312801 +0000 UTC m=+0.082446363 container create cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:55:28 compute-0 podman[257123]: 2025-11-22 08:55:28.675919537 +0000 UTC m=+0.026053119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:55:28 compute-0 systemd[1]: Started libpod-conmon-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope.
Nov 22 08:55:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:55:28 compute-0 podman[257123]: 2025-11-22 08:55:28.836937255 +0000 UTC m=+0.187070847 container init cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 08:55:28 compute-0 podman[257123]: 2025-11-22 08:55:28.84639821 +0000 UTC m=+0.196531772 container start cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 08:55:28 compute-0 podman[257123]: 2025-11-22 08:55:28.884529189 +0000 UTC m=+0.234662771 container attach cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:55:29 compute-0 ceph-mon[75021]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:29 compute-0 sharp_liskov[257139]: {
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_id": 1,
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "type": "bluestore"
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     },
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_id": 0,
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "type": "bluestore"
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     },
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_id": 2,
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:         "type": "bluestore"
Nov 22 08:55:29 compute-0 sharp_liskov[257139]:     }
Nov 22 08:55:29 compute-0 sharp_liskov[257139]: }
Nov 22 08:55:29 compute-0 systemd[1]: libpod-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Deactivated successfully.
Nov 22 08:55:29 compute-0 podman[257123]: 2025-11-22 08:55:29.938042018 +0000 UTC m=+1.288175580 container died cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:55:29 compute-0 systemd[1]: libpod-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Consumed 1.091s CPU time.
Nov 22 08:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81-merged.mount: Deactivated successfully.
Nov 22 08:55:30 compute-0 podman[257123]: 2025-11-22 08:55:30.068931326 +0000 UTC m=+1.419064878 container remove cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 08:55:30 compute-0 systemd[1]: libpod-conmon-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Deactivated successfully.
Nov 22 08:55:30 compute-0 sudo[257015]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:55:30 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:55:30 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:30 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0e8817b0-31e0-49b3-acda-8ee8525806ca does not exist
Nov 22 08:55:30 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev cffcdb08-ecf3-4c42-a71a-dadb1761366e does not exist
Nov 22 08:55:30 compute-0 sudo[257184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:55:30 compute-0 sudo[257184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:30 compute-0 sudo[257184]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:30 compute-0 sudo[257209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:55:30 compute-0 sudo[257209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:55:30 compute-0 sudo[257209]: pam_unix(sudo:session): session closed for user root
Nov 22 08:55:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:55:31 compute-0 ceph-mon[75021]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:33 compute-0 ceph-mon[75021]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 08:55:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 22 08:55:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:35 compute-0 ceph-mon[75021]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 22 08:55:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 08:55:37 compute-0 ceph-mon[75021]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 08:55:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 08:55:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:39 compute-0 ceph-mon[75021]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 08:55:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:41 compute-0 ceph-mon[75021]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:43 compute-0 ceph-mon[75021]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:44 compute-0 ceph-mon[75021]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:47 compute-0 ceph-mon[75021]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:48 compute-0 podman[257234]: 2025-11-22 08:55:48.38140207 +0000 UTC m=+0.069657785 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:55:48 compute-0 podman[257235]: 2025-11-22 08:55:48.410332389 +0000 UTC m=+0.098343488 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:55:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:49 compute-0 ceph-mon[75021]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:50 compute-0 podman[257272]: 2025-11-22 08:55:50.420479547 +0000 UTC m=+0.108364927 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:55:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:50 compute-0 ceph-mon[75021]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:55:52
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms']
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:55:53 compute-0 ceph-mon[75021]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:55:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:55:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:55:54 compute-0 ceph-mon[75021]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:57 compute-0 ceph-mon[75021]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:59 compute-0 ceph-mon[75021]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:55:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:01 compute-0 ceph-mon[75021]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.413 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:56:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:56:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843863018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:56:02 compute-0 nova_compute[253661]: 2025-11-22 08:56:02.946 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.124 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.125 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5212MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.126 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.126 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:56:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:56:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838507600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.708 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.722 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.723 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:56:03 compute-0 nova_compute[253661]: 2025-11-22 08:56:03.724 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:03 compute-0 ceph-mon[75021]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1843863018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:56:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2838507600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:56:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.724 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.724 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.725 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.725 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.738 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.739 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:04 compute-0 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:56:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:05 compute-0 ceph-mon[75021]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:07 compute-0 ceph-mon[75021]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:08 compute-0 ceph-mon[75021]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:11 compute-0 ceph-mon[75021]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:56:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:56:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:56:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:56:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:56:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:56:13 compute-0 ceph-mon[75021]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:14 compute-0 ceph-mon[75021]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:17 compute-0 ceph-mon[75021]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:19 compute-0 podman[257342]: 2025-11-22 08:56:19.35594834 +0000 UTC m=+0.050365232 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:56:19 compute-0 podman[257343]: 2025-11-22 08:56:19.37041833 +0000 UTC m=+0.063207872 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:56:19 compute-0 ceph-mon[75021]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:21 compute-0 podman[257381]: 2025-11-22 08:56:21.40798755 +0000 UTC m=+0.090321375 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 08:56:21 compute-0 ceph-mon[75021]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:23 compute-0 ceph-mon[75021]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:25 compute-0 ceph-mon[75021]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:27 compute-0 ceph-mon[75021]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:29 compute-0 ceph-mon[75021]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:30 compute-0 sudo[257408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:30 compute-0 sudo[257408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:30 compute-0 sudo[257408]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:30 compute-0 sudo[257433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:56:30 compute-0 sudo[257433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:30 compute-0 sudo[257433]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:30 compute-0 sudo[257458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:30 compute-0 sudo[257458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:30 compute-0 sudo[257458]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:30 compute-0 sudo[257483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 08:56:30 compute-0 sudo[257483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:31 compute-0 podman[257577]: 2025-11-22 08:56:31.09358012 +0000 UTC m=+0.087032523 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 08:56:31 compute-0 podman[257577]: 2025-11-22 08:56:31.201089411 +0000 UTC m=+0.194541814 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:56:31 compute-0 sudo[257483]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:56:31 compute-0 ceph-mon[75021]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:31 compute-0 sudo[257735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:31 compute-0 sudo[257735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:31 compute-0 sudo[257735]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 sudo[257760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:56:32 compute-0 sudo[257760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257760]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 sudo[257785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:32 compute-0 sudo[257785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257785]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 sudo[257810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:56:32 compute-0 sudo[257810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257810]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 56d776df-58f0-4c66-80be-32ab058c23ca does not exist
Nov 22 08:56:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a22f662b-b53f-456f-84f4-44db2f324fc5 does not exist
Nov 22 08:56:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 462888b9-3332-4b28-96f7-ce8b678c4930 does not exist
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:56:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:32 compute-0 sudo[257865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:32 compute-0 sudo[257865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257865]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 sudo[257890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:56:32 compute-0 sudo[257890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257890]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 sudo[257915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:32 compute-0 sudo[257915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:32 compute-0 sudo[257915]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:56:32 compute-0 ceph-mon[75021]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:32 compute-0 sudo[257940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:56:32 compute-0 sudo[257940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.245156503 +0000 UTC m=+0.051180861 container create 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 08:56:33 compute-0 systemd[1]: Started libpod-conmon-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope.
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.2152036 +0000 UTC m=+0.021227988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.34283812 +0000 UTC m=+0.148862508 container init 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.351456114 +0000 UTC m=+0.157480472 container start 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 08:56:33 compute-0 friendly_sinoussi[258021]: 167 167
Nov 22 08:56:33 compute-0 systemd[1]: libpod-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope: Deactivated successfully.
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.361744499 +0000 UTC m=+0.167768857 container attach 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.362538669 +0000 UTC m=+0.168563027 container died 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83d58e934a501c19e17120ab4eb8241d38694d2adbe7fd2e4294f8a5fe4c51e-merged.mount: Deactivated successfully.
Nov 22 08:56:33 compute-0 podman[258006]: 2025-11-22 08:56:33.432440455 +0000 UTC m=+0.238464813 container remove 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:56:33 compute-0 systemd[1]: libpod-conmon-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope: Deactivated successfully.
Nov 22 08:56:33 compute-0 podman[258049]: 2025-11-22 08:56:33.61341415 +0000 UTC m=+0.054727390 container create 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 08:56:33 compute-0 systemd[1]: Started libpod-conmon-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope.
Nov 22 08:56:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:33 compute-0 podman[258049]: 2025-11-22 08:56:33.587447486 +0000 UTC m=+0.028760776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:33 compute-0 podman[258049]: 2025-11-22 08:56:33.699480769 +0000 UTC m=+0.140794029 container init 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 08:56:33 compute-0 podman[258049]: 2025-11-22 08:56:33.707350014 +0000 UTC m=+0.148663284 container start 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 08:56:33 compute-0 podman[258049]: 2025-11-22 08:56:33.714799389 +0000 UTC m=+0.156112629 container attach 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:56:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:34 compute-0 cool_lewin[258065]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:56:34 compute-0 cool_lewin[258065]: --> relative data size: 1.0
Nov 22 08:56:34 compute-0 cool_lewin[258065]: --> All data devices are unavailable
Nov 22 08:56:34 compute-0 systemd[1]: libpod-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Deactivated successfully.
Nov 22 08:56:34 compute-0 systemd[1]: libpod-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Consumed 1.065s CPU time.
Nov 22 08:56:34 compute-0 podman[258049]: 2025-11-22 08:56:34.822596295 +0000 UTC m=+1.263909535 container died 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:56:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119-merged.mount: Deactivated successfully.
Nov 22 08:56:34 compute-0 podman[258049]: 2025-11-22 08:56:34.889932488 +0000 UTC m=+1.331245738 container remove 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 08:56:34 compute-0 systemd[1]: libpod-conmon-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Deactivated successfully.
Nov 22 08:56:34 compute-0 sudo[257940]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:34 compute-0 sudo[258108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:35 compute-0 sudo[258108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:35 compute-0 sudo[258108]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:35 compute-0 sudo[258133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:56:35 compute-0 sudo[258133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:35 compute-0 sudo[258133]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:35 compute-0 sudo[258158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:35 compute-0 sudo[258158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:35 compute-0 sudo[258158]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:35 compute-0 sudo[258183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:56:35 compute-0 sudo[258183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.548106376 +0000 UTC m=+0.039017120 container create 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:35 compute-0 systemd[1]: Started libpod-conmon-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope.
Nov 22 08:56:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.62476743 +0000 UTC m=+0.115678194 container init 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.532179491 +0000 UTC m=+0.023090265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.631672312 +0000 UTC m=+0.122583056 container start 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.635509117 +0000 UTC m=+0.126419891 container attach 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:35 compute-0 serene_noether[258263]: 167 167
Nov 22 08:56:35 compute-0 systemd[1]: libpod-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope: Deactivated successfully.
Nov 22 08:56:35 compute-0 conmon[258263]: conmon 68393964448faac77352 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope/container/memory.events
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.63925647 +0000 UTC m=+0.130167214 container died 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 08:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df5f4fe58b11c8e4facf267e385189292a7edb48bc7cbfe876c2ce3cf5d45ab-merged.mount: Deactivated successfully.
Nov 22 08:56:35 compute-0 podman[258247]: 2025-11-22 08:56:35.680224897 +0000 UTC m=+0.171135641 container remove 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 08:56:35 compute-0 systemd[1]: libpod-conmon-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope: Deactivated successfully.
Nov 22 08:56:35 compute-0 ceph-mon[75021]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:35 compute-0 podman[258287]: 2025-11-22 08:56:35.848063346 +0000 UTC m=+0.048086185 container create 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 08:56:35 compute-0 systemd[1]: Started libpod-conmon-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope.
Nov 22 08:56:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:35 compute-0 podman[258287]: 2025-11-22 08:56:35.82928424 +0000 UTC m=+0.029307099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:35 compute-0 podman[258287]: 2025-11-22 08:56:35.952689085 +0000 UTC m=+0.152711944 container init 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 08:56:35 compute-0 podman[258287]: 2025-11-22 08:56:35.959600167 +0000 UTC m=+0.159623006 container start 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:56:35 compute-0 podman[258287]: 2025-11-22 08:56:35.966140369 +0000 UTC m=+0.166163208 container attach 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 08:56:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:36 compute-0 nervous_pascal[258303]: {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     "0": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "devices": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "/dev/loop3"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             ],
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_name": "ceph_lv0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_size": "21470642176",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "name": "ceph_lv0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "tags": {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_name": "ceph",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.crush_device_class": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.encrypted": "0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_id": "0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.vdo": "0"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             },
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "vg_name": "ceph_vg0"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         }
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     ],
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     "1": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "devices": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "/dev/loop4"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             ],
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_name": "ceph_lv1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_size": "21470642176",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "name": "ceph_lv1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "tags": {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_name": "ceph",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.crush_device_class": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.encrypted": "0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_id": "1",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.vdo": "0"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             },
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "vg_name": "ceph_vg1"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         }
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     ],
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     "2": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "devices": [
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "/dev/loop5"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             ],
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_name": "ceph_lv2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_size": "21470642176",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "name": "ceph_lv2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "tags": {
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.cluster_name": "ceph",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.crush_device_class": "",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.encrypted": "0",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osd_id": "2",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:                 "ceph.vdo": "0"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             },
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "type": "block",
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:             "vg_name": "ceph_vg2"
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:         }
Nov 22 08:56:36 compute-0 nervous_pascal[258303]:     ]
Nov 22 08:56:36 compute-0 nervous_pascal[258303]: }
Nov 22 08:56:36 compute-0 systemd[1]: libpod-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope: Deactivated successfully.
Nov 22 08:56:36 compute-0 podman[258287]: 2025-11-22 08:56:36.829494384 +0000 UTC m=+1.029517243 container died 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 08:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24-merged.mount: Deactivated successfully.
Nov 22 08:56:36 compute-0 podman[258287]: 2025-11-22 08:56:36.899247777 +0000 UTC m=+1.099270616 container remove 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 08:56:36 compute-0 systemd[1]: libpod-conmon-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope: Deactivated successfully.
Nov 22 08:56:36 compute-0 sudo[258183]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:37 compute-0 sudo[258326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:37 compute-0 sudo[258326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:37 compute-0 sudo[258326]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:37 compute-0 sudo[258351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:56:37 compute-0 sudo[258351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:37 compute-0 sudo[258351]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:37 compute-0 sudo[258376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:37 compute-0 sudo[258376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:37 compute-0 sudo[258376]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:37 compute-0 sudo[258401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:56:37 compute-0 sudo[258401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.57936416 +0000 UTC m=+0.060075113 container create f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.542888334 +0000 UTC m=+0.023599307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:37 compute-0 systemd[1]: Started libpod-conmon-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope.
Nov 22 08:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.763941404 +0000 UTC m=+0.244652387 container init f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.77139932 +0000 UTC m=+0.252110273 container start f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:37 compute-0 jolly_buck[258484]: 167 167
Nov 22 08:56:37 compute-0 systemd[1]: libpod-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope: Deactivated successfully.
Nov 22 08:56:37 compute-0 ceph-mon[75021]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.938663754 +0000 UTC m=+0.419374737 container attach f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:56:37 compute-0 podman[258467]: 2025-11-22 08:56:37.939429154 +0000 UTC m=+0.420140107 container died f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 08:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6972e95b520ff4260b80690f89571e9e5db8d7526dec5b1be591d93fec853ab6-merged.mount: Deactivated successfully.
Nov 22 08:56:38 compute-0 podman[258467]: 2025-11-22 08:56:38.142369355 +0000 UTC m=+0.623080308 container remove f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:38 compute-0 systemd[1]: libpod-conmon-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope: Deactivated successfully.
Nov 22 08:56:38 compute-0 podman[258509]: 2025-11-22 08:56:38.326808825 +0000 UTC m=+0.051393957 container create 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 08:56:38 compute-0 systemd[1]: Started libpod-conmon-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope.
Nov 22 08:56:38 compute-0 podman[258509]: 2025-11-22 08:56:38.302508612 +0000 UTC m=+0.027093774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:56:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:56:38 compute-0 podman[258509]: 2025-11-22 08:56:38.432485621 +0000 UTC m=+0.157070773 container init 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 08:56:38 compute-0 podman[258509]: 2025-11-22 08:56:38.441522645 +0000 UTC m=+0.166107777 container start 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:56:38 compute-0 podman[258509]: 2025-11-22 08:56:38.445024732 +0000 UTC m=+0.169609884 container attach 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 08:56:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:38 compute-0 ceph-mon[75021]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]: {
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_id": 1,
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "type": "bluestore"
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     },
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_id": 0,
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "type": "bluestore"
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     },
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_id": 2,
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:         "type": "bluestore"
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]:     }
Nov 22 08:56:39 compute-0 xenodochial_diffie[258526]: }
Nov 22 08:56:39 compute-0 systemd[1]: libpod-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Deactivated successfully.
Nov 22 08:56:39 compute-0 systemd[1]: libpod-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Consumed 1.102s CPU time.
Nov 22 08:56:39 compute-0 podman[258559]: 2025-11-22 08:56:39.571201745 +0000 UTC m=+0.023329080 container died 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4-merged.mount: Deactivated successfully.
Nov 22 08:56:39 compute-0 podman[258559]: 2025-11-22 08:56:39.621380961 +0000 UTC m=+0.073508286 container remove 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 08:56:39 compute-0 systemd[1]: libpod-conmon-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Deactivated successfully.
Nov 22 08:56:39 compute-0 sudo[258401]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:56:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:56:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f53eb80e-e3a1-4fdd-a3fc-b628d65c8657 does not exist
Nov 22 08:56:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 28f92e2d-12c1-452e-9e27-4c3ea6d94a58 does not exist
Nov 22 08:56:39 compute-0 sudo[258574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:56:39 compute-0 sudo[258574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:39 compute-0 sudo[258574]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:39 compute-0 sudo[258599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:56:39 compute-0 sudo[258599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:56:39 compute-0 sudo[258599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:56:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:56:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:41 compute-0 ceph-mon[75021]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:43 compute-0 ceph-mon[75021]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:46 compute-0 ceph-mon[75021]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:47 compute-0 ceph-mon[75021]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:49 compute-0 ceph-mon[75021]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:50 compute-0 podman[258624]: 2025-11-22 08:56:50.387444687 +0000 UTC m=+0.070557073 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:56:50 compute-0 podman[258625]: 2025-11-22 08:56:50.389773765 +0000 UTC m=+0.072933063 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:56:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:51 compute-0 ceph-mon[75021]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:56:52
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr']
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:56:52 compute-0 podman[258663]: 2025-11-22 08:56:52.4174693 +0000 UTC m=+0.112531956 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:56:52 compute-0 ceph-mon[75021]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:56:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:56:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:56:55 compute-0 ceph-mon[75021]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:57 compute-0 ceph-mon[75021]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:56:59 compute-0 ceph-mon[75021]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:01 compute-0 ceph-mon[75021]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:57:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:57:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955240106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.720 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.881 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.882 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.883 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.883 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:03 compute-0 ceph-mon[75021]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2955240106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.946 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.946 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:57:03 compute-0 nova_compute[253661]: 2025-11-22 08:57:03.964 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:57:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664952485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:57:04 compute-0 nova_compute[253661]: 2025-11-22 08:57:04.436 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:04 compute-0 nova_compute[253661]: 2025-11-22 08:57:04.443 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:57:04 compute-0 nova_compute[253661]: 2025-11-22 08:57:04.457 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:57:04 compute-0 nova_compute[253661]: 2025-11-22 08:57:04.459 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:57:04 compute-0 nova_compute[253661]: 2025-11-22 08:57:04.459 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3664952485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:57:04 compute-0 ceph-mon[75021]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.460 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.591 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:05 compute-0 nova_compute[253661]: 2025-11-22 08:57:05.593 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:57:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:06 compute-0 nova_compute[253661]: 2025-11-22 08:57:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:06 compute-0 nova_compute[253661]: 2025-11-22 08:57:06.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:07 compute-0 ceph-mon[75021]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:09 compute-0 ceph-mon[75021]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:11 compute-0 ceph-mon[75021]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:57:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:57:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:57:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:57:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:57:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:57:14 compute-0 ceph-mon[75021]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:15 compute-0 ceph-mon[75021]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:17 compute-0 ceph-mon[75021]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:19 compute-0 ceph-mon[75021]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:21 compute-0 ceph-mon[75021]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:21 compute-0 podman[258734]: 2025-11-22 08:57:21.392696593 +0000 UTC m=+0.062071984 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:57:21 compute-0 podman[258735]: 2025-11-22 08:57:21.392792895 +0000 UTC m=+0.070518563 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:23 compute-0 podman[258772]: 2025-11-22 08:57:23.415979799 +0000 UTC m=+0.109678506 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:57:23 compute-0 ceph-mon[75021]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:24 compute-0 ceph-mon[75021]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:27 compute-0 ceph-mon[75021]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:30 compute-0 ceph-mon[75021]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:31 compute-0 ceph-mon[75021]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:33 compute-0 ceph-mon[75021]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:35 compute-0 ceph-mon[75021]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:37 compute-0 ceph-mon[75021]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:39 compute-0 ceph-mon[75021]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:39 compute-0 sudo[258798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:39 compute-0 sudo[258798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:39 compute-0 sudo[258798]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:39 compute-0 sudo[258823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:57:39 compute-0 sudo[258823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:39 compute-0 sudo[258823]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 sudo[258848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:40 compute-0 sudo[258848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:40 compute-0 sudo[258848]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 sudo[258873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:57:40 compute-0 sudo[258873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:40 compute-0 sudo[258873]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 91c8444f-f7c5-49d4-8dba-cd838a390a1c does not exist
Nov 22 08:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ef3b1578-0128-4401-917b-a7d7cd991d38 does not exist
Nov 22 08:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 418813d1-564a-4815-a8d2-525ab7fb26a1 does not exist
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:40 compute-0 sudo[258928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:40 compute-0 sudo[258928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:40 compute-0 sudo[258928]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 sudo[258953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:57:40 compute-0 sudo[258953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:40 compute-0 sudo[258953]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 sudo[258978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:40 compute-0 sudo[258978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:40 compute-0 sudo[258978]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:57:40 compute-0 sudo[259003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:57:40 compute-0 sudo[259003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.363022044 +0000 UTC m=+0.109800499 container create cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.278920535 +0000 UTC m=+0.025699010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:41 compute-0 systemd[1]: Started libpod-conmon-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope.
Nov 22 08:57:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.545188139 +0000 UTC m=+0.291966624 container init cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.558018358 +0000 UTC m=+0.304796823 container start cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:57:41 compute-0 peaceful_rosalind[259085]: 167 167
Nov 22 08:57:41 compute-0 systemd[1]: libpod-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope: Deactivated successfully.
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.582840634 +0000 UTC m=+0.329619109 container attach cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:57:41 compute-0 podman[259069]: 2025-11-22 08:57:41.583916391 +0000 UTC m=+0.330694846 container died cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d958eaf352527c01141fe707a2386a0d304270b32c043fecf80c0836385ed37e-merged.mount: Deactivated successfully.
Nov 22 08:57:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:57:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:57:41 compute-0 ceph-mon[75021]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:42 compute-0 podman[259069]: 2025-11-22 08:57:42.045141087 +0000 UTC m=+0.791919542 container remove cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:57:42 compute-0 systemd[1]: libpod-conmon-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope: Deactivated successfully.
Nov 22 08:57:42 compute-0 podman[259108]: 2025-11-22 08:57:42.296583943 +0000 UTC m=+0.114185208 container create b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:57:42 compute-0 podman[259108]: 2025-11-22 08:57:42.212479543 +0000 UTC m=+0.030080828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:42 compute-0 systemd[1]: Started libpod-conmon-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope.
Nov 22 08:57:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:42 compute-0 podman[259108]: 2025-11-22 08:57:42.734215973 +0000 UTC m=+0.551817258 container init b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 08:57:42 compute-0 podman[259108]: 2025-11-22 08:57:42.74215469 +0000 UTC m=+0.559755955 container start b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 08:57:42 compute-0 podman[259108]: 2025-11-22 08:57:42.792754147 +0000 UTC m=+0.610355432 container attach b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 08:57:43 compute-0 ceph-mon[75021]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:43 compute-0 vigorous_dirac[259125]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:57:43 compute-0 vigorous_dirac[259125]: --> relative data size: 1.0
Nov 22 08:57:43 compute-0 vigorous_dirac[259125]: --> All data devices are unavailable
Nov 22 08:57:43 compute-0 systemd[1]: libpod-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Deactivated successfully.
Nov 22 08:57:43 compute-0 systemd[1]: libpod-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Consumed 1.061s CPU time.
Nov 22 08:57:43 compute-0 podman[259108]: 2025-11-22 08:57:43.850230983 +0000 UTC m=+1.667832258 container died b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 08:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac-merged.mount: Deactivated successfully.
Nov 22 08:57:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:45 compute-0 ceph-mon[75021]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:45 compute-0 podman[259108]: 2025-11-22 08:57:45.805042139 +0000 UTC m=+3.622643404 container remove b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:57:45 compute-0 systemd[1]: libpod-conmon-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Deactivated successfully.
Nov 22 08:57:45 compute-0 sudo[259003]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:45 compute-0 sudo[259166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:45 compute-0 sudo[259166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:45 compute-0 sudo[259166]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:45 compute-0 sudo[259191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:57:45 compute-0 sudo[259191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:45 compute-0 sudo[259191]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:46 compute-0 sudo[259216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:46 compute-0 sudo[259216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:46 compute-0 sudo[259216]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:46 compute-0 sudo[259241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:57:46 compute-0 sudo[259241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:46 compute-0 podman[259307]: 2025-11-22 08:57:46.504226616 +0000 UTC m=+0.024610902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:46 compute-0 podman[259307]: 2025-11-22 08:57:46.667274545 +0000 UTC m=+0.187658841 container create a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:57:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:46 compute-0 systemd[1]: Started libpod-conmon-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope.
Nov 22 08:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:47 compute-0 podman[259307]: 2025-11-22 08:57:47.180813802 +0000 UTC m=+0.701198138 container init a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 22 08:57:47 compute-0 podman[259307]: 2025-11-22 08:57:47.192562384 +0000 UTC m=+0.712946650 container start a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 08:57:47 compute-0 amazing_sinoussi[259324]: 167 167
Nov 22 08:57:47 compute-0 systemd[1]: libpod-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope: Deactivated successfully.
Nov 22 08:57:47 compute-0 ceph-mon[75021]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:47 compute-0 podman[259307]: 2025-11-22 08:57:47.532919918 +0000 UTC m=+1.053304264 container attach a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:57:47 compute-0 podman[259307]: 2025-11-22 08:57:47.535167435 +0000 UTC m=+1.055551771 container died a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 08:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec064df00836a9214e88fa63d82a6fbd45051631642a52b36d40e8d1a332afc-merged.mount: Deactivated successfully.
Nov 22 08:57:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:49 compute-0 ceph-mon[75021]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:49 compute-0 podman[259307]: 2025-11-22 08:57:49.50479914 +0000 UTC m=+3.025183416 container remove a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:57:49 compute-0 systemd[1]: libpod-conmon-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope: Deactivated successfully.
Nov 22 08:57:49 compute-0 podman[259348]: 2025-11-22 08:57:49.679842947 +0000 UTC m=+0.022431628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:49 compute-0 podman[259348]: 2025-11-22 08:57:49.888750396 +0000 UTC m=+0.231339077 container create f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 08:57:50 compute-0 systemd[1]: Started libpod-conmon-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope.
Nov 22 08:57:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:50 compute-0 podman[259348]: 2025-11-22 08:57:50.405351329 +0000 UTC m=+0.747940040 container init f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 08:57:50 compute-0 podman[259348]: 2025-11-22 08:57:50.414371512 +0000 UTC m=+0.756960213 container start f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:57:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:50 compute-0 podman[259348]: 2025-11-22 08:57:50.658402354 +0000 UTC m=+1.000991215 container attach f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:57:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:51 compute-0 ceph-mon[75021]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:51 compute-0 dreamy_cori[259364]: {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     "0": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "devices": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "/dev/loop3"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             ],
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_name": "ceph_lv0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_size": "21470642176",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "name": "ceph_lv0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "tags": {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_name": "ceph",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.crush_device_class": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.encrypted": "0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_id": "0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.vdo": "0"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             },
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "vg_name": "ceph_vg0"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         }
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     ],
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     "1": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "devices": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "/dev/loop4"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             ],
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_name": "ceph_lv1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_size": "21470642176",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "name": "ceph_lv1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "tags": {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_name": "ceph",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.crush_device_class": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.encrypted": "0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_id": "1",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.vdo": "0"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             },
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "vg_name": "ceph_vg1"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         }
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     ],
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     "2": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "devices": [
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "/dev/loop5"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             ],
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_name": "ceph_lv2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_size": "21470642176",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "name": "ceph_lv2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "tags": {
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.cluster_name": "ceph",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.crush_device_class": "",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.encrypted": "0",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osd_id": "2",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:                 "ceph.vdo": "0"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             },
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "type": "block",
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:             "vg_name": "ceph_vg2"
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:         }
Nov 22 08:57:51 compute-0 dreamy_cori[259364]:     ]
Nov 22 08:57:51 compute-0 dreamy_cori[259364]: }
Nov 22 08:57:51 compute-0 systemd[1]: libpod-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope: Deactivated successfully.
Nov 22 08:57:51 compute-0 podman[259348]: 2025-11-22 08:57:51.232580116 +0000 UTC m=+1.575168767 container died f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7-merged.mount: Deactivated successfully.
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:57:52
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:57:52 compute-0 podman[259348]: 2025-11-22 08:57:52.289126778 +0000 UTC m=+2.631715439 container remove f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:57:52 compute-0 systemd[1]: libpod-conmon-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope: Deactivated successfully.
Nov 22 08:57:52 compute-0 sudo[259241]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:52 compute-0 sudo[259406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:52 compute-0 sudo[259406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:52 compute-0 sudo[259406]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:52 compute-0 podman[259387]: 2025-11-22 08:57:52.432297095 +0000 UTC m=+0.917397889 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:57:52 compute-0 podman[259386]: 2025-11-22 08:57:52.448301082 +0000 UTC m=+0.929089469 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:57:52 compute-0 sudo[259449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:57:52 compute-0 sudo[259449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:52 compute-0 sudo[259449]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:52 compute-0 sudo[259474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:52 compute-0 sudo[259474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:52 compute-0 sudo[259474]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:52 compute-0 sudo[259499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:57:52 compute-0 sudo[259499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:57:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:52.962197897 +0000 UTC m=+0.023728741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:53.153784776 +0000 UTC m=+0.215315610 container create a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 08:57:53 compute-0 ceph-mon[75021]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:53 compute-0 systemd[1]: Started libpod-conmon-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope.
Nov 22 08:57:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:53.699507922 +0000 UTC m=+0.761038836 container init a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:53.710516495 +0000 UTC m=+0.772047319 container start a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 08:57:53 compute-0 zealous_mcnulty[259582]: 167 167
Nov 22 08:57:53 compute-0 systemd[1]: libpod-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope: Deactivated successfully.
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:53.769202002 +0000 UTC m=+0.830732886 container attach a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:57:53 compute-0 podman[259565]: 2025-11-22 08:57:53.772437523 +0000 UTC m=+0.833968387 container died a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:57:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb379e373f56ab1a6043f2b9e2aeb9ea0dd39c4295827abcaa71075a41028b4-merged.mount: Deactivated successfully.
Nov 22 08:57:54 compute-0 podman[259565]: 2025-11-22 08:57:54.373300838 +0000 UTC m=+1.434831702 container remove a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 08:57:54 compute-0 systemd[1]: libpod-conmon-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope: Deactivated successfully.
Nov 22 08:57:54 compute-0 podman[259584]: 2025-11-22 08:57:54.546530661 +0000 UTC m=+1.053464979 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:57:54 compute-0 podman[259636]: 2025-11-22 08:57:54.613806142 +0000 UTC m=+0.034031427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:57:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:55 compute-0 podman[259636]: 2025-11-22 08:57:55.122645511 +0000 UTC m=+0.542870696 container create 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 08:57:55 compute-0 ceph-mon[75021]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:55 compute-0 systemd[1]: Started libpod-conmon-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope.
Nov 22 08:57:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:57:55 compute-0 podman[259636]: 2025-11-22 08:57:55.43138553 +0000 UTC m=+0.851610815 container init 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 08:57:55 compute-0 podman[259636]: 2025-11-22 08:57:55.441578952 +0000 UTC m=+0.861804137 container start 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 08:57:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:57:55 compute-0 podman[259636]: 2025-11-22 08:57:55.659418403 +0000 UTC m=+1.079643698 container attach 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 08:57:56 compute-0 keen_northcutt[259653]: {
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_id": 1,
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "type": "bluestore"
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     },
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_id": 0,
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "type": "bluestore"
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     },
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_id": 2,
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:         "type": "bluestore"
Nov 22 08:57:56 compute-0 keen_northcutt[259653]:     }
Nov 22 08:57:56 compute-0 keen_northcutt[259653]: }
Nov 22 08:57:56 compute-0 systemd[1]: libpod-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Deactivated successfully.
Nov 22 08:57:56 compute-0 podman[259636]: 2025-11-22 08:57:56.579661381 +0000 UTC m=+1.999886566 container died 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 08:57:56 compute-0 systemd[1]: libpod-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Consumed 1.142s CPU time.
Nov 22 08:57:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e-merged.mount: Deactivated successfully.
Nov 22 08:57:57 compute-0 podman[259636]: 2025-11-22 08:57:57.154826637 +0000 UTC m=+2.575051822 container remove 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:57:57 compute-0 systemd[1]: libpod-conmon-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Deactivated successfully.
Nov 22 08:57:57 compute-0 sudo[259499]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:57:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:57:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4a1a04d0-d044-475f-9eb6-cd565a1c8229 does not exist
Nov 22 08:57:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2256e572-8609-44c2-a675-3ca3d0d5cd75 does not exist
Nov 22 08:57:57 compute-0 sudo[259700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:57:57 compute-0 sudo[259700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:57 compute-0 sudo[259700]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:57 compute-0 sudo[259725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:57:57 compute-0 sudo[259725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:57:57 compute-0 sudo[259725]: pam_unix(sudo:session): session closed for user root
Nov 22 08:57:57 compute-0 ceph-mon[75021]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:57:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:57:59 compute-0 ceph-mon[75021]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.659958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880660027, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3452623, "memory_usage": 3498744, "flush_reason": "Manual Compaction"}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880707728, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1959094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16533, "largest_seqno": 18577, "table_properties": {"data_size": 1952501, "index_size": 3411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16610, "raw_average_key_size": 20, "raw_value_size": 1937886, "raw_average_value_size": 2363, "num_data_blocks": 158, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801651, "oldest_key_time": 1763801651, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 47813 microseconds, and 6905 cpu microseconds.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.707780) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1959094 bytes OK
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.707803) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714041) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714077) EVENT_LOG_v1 {"time_micros": 1763801880714069, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3444057, prev total WAL file size 3455368, number of live WAL files 2.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.715207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1913KB)], [38(7811KB)]
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880715283, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9957731, "oldest_snapshot_seqno": -1}
Nov 22 08:58:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4512 keys, 8063844 bytes, temperature: kUnknown
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880804731, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8063844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8032410, "index_size": 19036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 108780, "raw_average_key_size": 24, "raw_value_size": 7949682, "raw_average_value_size": 1761, "num_data_blocks": 810, "num_entries": 4512, "num_filter_entries": 4512, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.805468) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8063844 bytes
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.809776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.2 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4918, records dropped: 406 output_compression: NoCompression
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.809830) EVENT_LOG_v1 {"time_micros": 1763801880809813, "job": 18, "event": "compaction_finished", "compaction_time_micros": 89559, "compaction_time_cpu_micros": 22731, "output_level": 6, "num_output_files": 1, "total_output_size": 8063844, "num_input_records": 4918, "num_output_records": 4512, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880810809, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880812716, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.715051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.813202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880813285, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19592, "flush_reason": "Manual Compaction"}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880827869, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 13302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18578, "largest_seqno": 18833, "table_properties": {"data_size": 11549, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801880, "oldest_key_time": 1763801880, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 14715 microseconds, and 1112 cpu microseconds.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.827922) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 13302 bytes OK
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.827956) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832037) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832060) EVENT_LOG_v1 {"time_micros": 1763801880832052, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832087) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(12KB)], [41(7874KB)]
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880832534, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8077146, "oldest_snapshot_seqno": -1}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4262 keys, 6306880 bytes, temperature: kUnknown
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880902779, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6306880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6278879, "index_size": 16244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104280, "raw_average_key_size": 24, "raw_value_size": 6202221, "raw_average_value_size": 1455, "num_data_blocks": 683, "num_entries": 4262, "num_filter_entries": 4262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.903175) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6306880 bytes
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.916713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 89.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.7 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(1081.3) write-amplify(474.1) OK, records in: 4768, records dropped: 506 output_compression: NoCompression
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.916769) EVENT_LOG_v1 {"time_micros": 1763801880916747, "job": 20, "event": "compaction_finished", "compaction_time_micros": 70348, "compaction_time_cpu_micros": 16273, "output_level": 6, "num_output_files": 1, "total_output_size": 6306880, "num_input_records": 4768, "num_output_records": 4262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880916996, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880920042, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:58:01 compute-0 nova_compute[253661]: 2025-11-22 08:58:01.261 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:01 compute-0 ceph-mon[75021]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:58:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:03 compute-0 ceph-mon[75021]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.272 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.273 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.273 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.285 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.285 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.311 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:58:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837107446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:58:04 compute-0 nova_compute[253661]: 2025-11-22 08:58:04.938 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:58:05 compute-0 ceph-mon[75021]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2837107446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.116 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.117 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.117 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.118 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.390 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.481 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.482 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.501 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.525 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:58:05 compute-0 nova_compute[253661]: 2025-11-22 08:58:05.543 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:58:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281208527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.013 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.020 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.034 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.036 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.036 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/281208527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:58:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.980 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.981 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.981 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:06 compute-0 nova_compute[253661]: 2025-11-22 08:58:06.983 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:58:07 compute-0 ceph-mon[75021]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:08 compute-0 nova_compute[253661]: 2025-11-22 08:58:08.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:09 compute-0 ceph-mon[75021]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:11 compute-0 ceph-mon[75021]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:58:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:58:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:58:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:58:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:58:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:58:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:13 compute-0 ceph-mon[75021]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:16 compute-0 ceph-mon[75021]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:17 compute-0 ceph-mon[75021]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:20 compute-0 ceph-mon[75021]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:21 compute-0 ceph-mon[75021]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:23 compute-0 ceph-mon[75021]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:23 compute-0 podman[259794]: 2025-11-22 08:58:23.370499588 +0000 UTC m=+0.061158187 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:58:23 compute-0 podman[259795]: 2025-11-22 08:58:23.385473213 +0000 UTC m=+0.074704408 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:58:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:25 compute-0 podman[259835]: 2025-11-22 08:58:25.441227266 +0000 UTC m=+0.128308996 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:58:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:26 compute-0 ceph-mon[75021]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:27 compute-0 ceph-mon[75021]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:29 compute-0 ceph-mon[75021]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:31 compute-0 ceph-mon[75021]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:33 compute-0 ceph-mon[75021]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:35 compute-0 ceph-mon[75021]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:37 compute-0 ceph-mon[75021]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:39 compute-0 ceph-mon[75021]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:41 compute-0 ceph-mon[75021]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:42 compute-0 ceph-mon[75021]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:45 compute-0 ceph-mon[75021]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:45 compute-0 sshd-session[259862]: Invalid user  from 134.199.207.24 port 43018
Nov 22 08:58:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:48 compute-0 ceph-mon[75021]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:49 compute-0 ceph-mon[75021]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:51 compute-0 ceph-mon[75021]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:58:52
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images']
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:58:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:53 compute-0 ceph-mon[75021]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:53 compute-0 sshd-session[259862]: Connection closed by invalid user  134.199.207.24 port 43018 [preauth]
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:58:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:58:54 compute-0 podman[259864]: 2025-11-22 08:58:54.383691754 +0000 UTC m=+0.064126218 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 22 08:58:54 compute-0 podman[259865]: 2025-11-22 08:58:54.388910848 +0000 UTC m=+0.069413873 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 08:58:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:58:55 compute-0 ceph-mon[75021]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:56 compute-0 podman[259901]: 2025-11-22 08:58:56.439207932 +0000 UTC m=+0.124030134 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:58:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:57 compute-0 ceph-mon[75021]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:58:57 compute-0 sudo[259928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:58:57 compute-0 sudo[259928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:58:57 compute-0 sudo[259928]: pam_unix(sudo:session): session closed for user root
Nov 22 08:58:57 compute-0 sudo[259953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:58:57 compute-0 sudo[259953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:58:57 compute-0 sudo[259953]: pam_unix(sudo:session): session closed for user root
Nov 22 08:58:57 compute-0 sudo[259978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:58:57 compute-0 sudo[259978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:58:57 compute-0 sudo[259978]: pam_unix(sudo:session): session closed for user root
Nov 22 08:58:58 compute-0 sudo[260003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 08:58:58 compute-0 sudo[260003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:58:58 compute-0 sudo[260003]: pam_unix(sudo:session): session closed for user root
Nov 22 08:58:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:58:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:58:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 08:58:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:58:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 08:58:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:02 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6c0dfd06-4a0b-4fff-a302-ea2bd7b472a1 does not exist
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 24b0c496-ffe9-4367-9f25-79e468ca6c42 does not exist
Nov 22 08:59:02 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9f5d8286-cc9b-47d3-99ad-8f5b15b006e5 does not exist
Nov 22 08:59:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 08:59:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:59:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 08:59:03 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:59:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 08:59:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:59:03 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:59:03 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 08:59:03 compute-0 ceph-mon[75021]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:03 compute-0 ceph-mon[75021]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:03 compute-0 sudo[260058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:03 compute-0 sudo[260058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:03 compute-0 sudo[260058]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:03 compute-0 sudo[260083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:59:03 compute-0 sudo[260083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:03 compute-0 sudo[260083]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:03 compute-0 sudo[260108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:03 compute-0 sudo[260108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:03 compute-0 sudo[260108]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:03 compute-0 sudo[260133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 08:59:03 compute-0 sudo[260133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:03 compute-0 podman[260196]: 2025-11-22 08:59:03.810155089 +0000 UTC m=+0.038224026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:04 compute-0 podman[260196]: 2025-11-22 08:59:04.152147896 +0000 UTC m=+0.380216803 container create 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:04 compute-0 ceph-mon[75021]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 08:59:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 08:59:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 08:59:04 compute-0 systemd[1]: Started libpod-conmon-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope.
Nov 22 08:59:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:04 compute-0 podman[260196]: 2025-11-22 08:59:04.619704392 +0000 UTC m=+0.847773339 container init 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:59:04 compute-0 podman[260196]: 2025-11-22 08:59:04.647358496 +0000 UTC m=+0.875427443 container start 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 08:59:04 compute-0 nostalgic_fermat[260232]: 167 167
Nov 22 08:59:04 compute-0 systemd[1]: libpod-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope: Deactivated successfully.
Nov 22 08:59:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:59:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742451565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:59:04 compute-0 podman[260196]: 2025-11-22 08:59:04.731277581 +0000 UTC m=+0.959346518 container attach 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 08:59:04 compute-0 podman[260196]: 2025-11-22 08:59:04.731788573 +0000 UTC m=+0.959857480 container died 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 08:59:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.892 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:59:04 compute-0 nova_compute[253661]: 2025-11-22 08:59:04.971 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af04a34783e3cc94e79fbe87f5da29e256ef7d6f53e2defe555ad0f427ced3c-merged.mount: Deactivated successfully.
Nov 22 08:59:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 08:59:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842877680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:59:05 compute-0 podman[260196]: 2025-11-22 08:59:05.429401229 +0000 UTC m=+1.657470136 container remove 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 08:59:05 compute-0 nova_compute[253661]: 2025-11-22 08:59:05.429 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:59:05 compute-0 nova_compute[253661]: 2025-11-22 08:59:05.435 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:59:05 compute-0 nova_compute[253661]: 2025-11-22 08:59:05.447 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:59:05 compute-0 nova_compute[253661]: 2025-11-22 08:59:05.449 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:59:05 compute-0 nova_compute[253661]: 2025-11-22 08:59:05.449 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:59:05 compute-0 systemd[1]: libpod-conmon-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope: Deactivated successfully.
Nov 22 08:59:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3742451565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:59:05 compute-0 ceph-mon[75021]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/842877680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 08:59:05 compute-0 podman[260279]: 2025-11-22 08:59:05.587008017 +0000 UTC m=+0.031796533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:05 compute-0 podman[260279]: 2025-11-22 08:59:05.745641758 +0000 UTC m=+0.190430264 container create 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:59:05 compute-0 systemd[1]: Started libpod-conmon-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope.
Nov 22 08:59:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:05 compute-0 podman[260279]: 2025-11-22 08:59:05.878389827 +0000 UTC m=+0.323178343 container init 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 08:59:05 compute-0 podman[260279]: 2025-11-22 08:59:05.887704248 +0000 UTC m=+0.332492754 container start 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:59:05 compute-0 podman[260279]: 2025-11-22 08:59:05.903104392 +0000 UTC m=+0.347892918 container attach 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.450 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.451 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.451 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.463 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.464 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:06 compute-0 nova_compute[253661]: 2025-11-22 08:59:06.466 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:59:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:06 compute-0 adoring_buck[260295]: --> passed data devices: 0 physical, 3 LVM
Nov 22 08:59:06 compute-0 adoring_buck[260295]: --> relative data size: 1.0
Nov 22 08:59:06 compute-0 adoring_buck[260295]: --> All data devices are unavailable
Nov 22 08:59:07 compute-0 systemd[1]: libpod-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Deactivated successfully.
Nov 22 08:59:07 compute-0 podman[260279]: 2025-11-22 08:59:07.026939097 +0000 UTC m=+1.471727603 container died 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 08:59:07 compute-0 systemd[1]: libpod-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Consumed 1.082s CPU time.
Nov 22 08:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f-merged.mount: Deactivated successfully.
Nov 22 08:59:07 compute-0 podman[260279]: 2025-11-22 08:59:07.102051584 +0000 UTC m=+1.546840090 container remove 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:59:07 compute-0 systemd[1]: libpod-conmon-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Deactivated successfully.
Nov 22 08:59:07 compute-0 sudo[260133]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:07 compute-0 sudo[260336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:07 compute-0 sudo[260336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:07 compute-0 sudo[260336]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:07 compute-0 nova_compute[253661]: 2025-11-22 08:59:07.235 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:07 compute-0 sudo[260361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:59:07 compute-0 sudo[260361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:07 compute-0 sudo[260361]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:07 compute-0 sudo[260386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:07 compute-0 sudo[260386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:07 compute-0 sudo[260386]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:07 compute-0 sudo[260411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 08:59:07 compute-0 sudo[260411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.784244716 +0000 UTC m=+0.047600427 container create 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 08:59:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:07 compute-0 systemd[1]: Started libpod-conmon-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope.
Nov 22 08:59:07 compute-0 ceph-mon[75021]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.761789525 +0000 UTC m=+0.025145046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.887850086 +0000 UTC m=+0.151205607 container init 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.897814701 +0000 UTC m=+0.161170242 container start 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 08:59:07 compute-0 thirsty_sutherland[260492]: 167 167
Nov 22 08:59:07 compute-0 systemd[1]: libpod-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope: Deactivated successfully.
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.907995232 +0000 UTC m=+0.171350763 container attach 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.909250051 +0000 UTC m=+0.172605562 container died 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 08:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b4c19aa0ef49b8e10e3c0161ab14aa28aa10aaf1ccfa7937f41ddf9d84ad568-merged.mount: Deactivated successfully.
Nov 22 08:59:07 compute-0 podman[260476]: 2025-11-22 08:59:07.990238857 +0000 UTC m=+0.253594358 container remove 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 08:59:08 compute-0 systemd[1]: libpod-conmon-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope: Deactivated successfully.
Nov 22 08:59:08 compute-0 podman[260517]: 2025-11-22 08:59:08.193508663 +0000 UTC m=+0.078315393 container create 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 08:59:08 compute-0 podman[260517]: 2025-11-22 08:59:08.141454023 +0000 UTC m=+0.026260793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:08 compute-0 systemd[1]: Started libpod-conmon-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope.
Nov 22 08:59:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:08 compute-0 podman[260517]: 2025-11-22 08:59:08.370006127 +0000 UTC m=+0.254812847 container init 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 22 08:59:08 compute-0 podman[260517]: 2025-11-22 08:59:08.380826203 +0000 UTC m=+0.265632923 container start 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 08:59:08 compute-0 podman[260517]: 2025-11-22 08:59:08.411474568 +0000 UTC m=+0.296281288 container attach 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 08:59:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]: {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     "0": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "devices": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "/dev/loop3"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             ],
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_name": "ceph_lv0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_size": "21470642176",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "name": "ceph_lv0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "tags": {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_name": "ceph",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.crush_device_class": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.encrypted": "0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_id": "0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.vdo": "0"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             },
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "vg_name": "ceph_vg0"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         }
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     ],
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     "1": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "devices": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "/dev/loop4"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             ],
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_name": "ceph_lv1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_size": "21470642176",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "name": "ceph_lv1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "tags": {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_name": "ceph",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.crush_device_class": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.encrypted": "0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_id": "1",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.vdo": "0"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             },
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "vg_name": "ceph_vg1"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         }
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     ],
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     "2": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "devices": [
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "/dev/loop5"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             ],
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_name": "ceph_lv2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_size": "21470642176",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "name": "ceph_lv2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "tags": {
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.cluster_name": "ceph",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.crush_device_class": "",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.encrypted": "0",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osd_id": "2",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:                 "ceph.vdo": "0"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             },
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "type": "block",
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:             "vg_name": "ceph_vg2"
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:         }
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]:     ]
Nov 22 08:59:09 compute-0 condescending_goldwasser[260533]: }
Nov 22 08:59:09 compute-0 systemd[1]: libpod-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope: Deactivated successfully.
Nov 22 08:59:09 compute-0 podman[260517]: 2025-11-22 08:59:09.215992073 +0000 UTC m=+1.100798803 container died 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 08:59:09 compute-0 nova_compute[253661]: 2025-11-22 08:59:09.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870-merged.mount: Deactivated successfully.
Nov 22 08:59:09 compute-0 podman[260517]: 2025-11-22 08:59:09.33680102 +0000 UTC m=+1.221607741 container remove 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 08:59:09 compute-0 systemd[1]: libpod-conmon-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope: Deactivated successfully.
Nov 22 08:59:09 compute-0 sudo[260411]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:09 compute-0 sudo[260557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:09 compute-0 sudo[260557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:09 compute-0 sudo[260557]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:09 compute-0 sudo[260582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 08:59:09 compute-0 sudo[260582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:09 compute-0 sudo[260582]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:09 compute-0 sudo[260607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:09 compute-0 sudo[260607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:09 compute-0 sudo[260607]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:09 compute-0 sudo[260632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 08:59:09 compute-0 sudo[260632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:09 compute-0 ceph-mon[75021]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.054972593 +0000 UTC m=+0.069157367 container create 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.014845384 +0000 UTC m=+0.029030188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:10 compute-0 systemd[1]: Started libpod-conmon-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope.
Nov 22 08:59:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.180386538 +0000 UTC m=+0.194571392 container init 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.189006683 +0000 UTC m=+0.203191467 container start 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 08:59:10 compute-0 nice_noether[260714]: 167 167
Nov 22 08:59:10 compute-0 systemd[1]: libpod-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope: Deactivated successfully.
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.213795718 +0000 UTC m=+0.227980502 container attach 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.215216172 +0000 UTC m=+0.229400926 container died 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d490eb1d2733c4f67dd0f3127f59dfca5eaa3e3b8bb6dab5f00033ff925a737-merged.mount: Deactivated successfully.
Nov 22 08:59:10 compute-0 podman[260698]: 2025-11-22 08:59:10.31030032 +0000 UTC m=+0.324485094 container remove 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 08:59:10 compute-0 systemd[1]: libpod-conmon-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope: Deactivated successfully.
Nov 22 08:59:10 compute-0 podman[260740]: 2025-11-22 08:59:10.511501439 +0000 UTC m=+0.070444627 container create 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:59:10 compute-0 podman[260740]: 2025-11-22 08:59:10.469368952 +0000 UTC m=+0.028312160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 08:59:10 compute-0 systemd[1]: Started libpod-conmon-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope.
Nov 22 08:59:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 08:59:10 compute-0 podman[260740]: 2025-11-22 08:59:10.673307945 +0000 UTC m=+0.232251173 container init 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 08:59:10 compute-0 podman[260740]: 2025-11-22 08:59:10.680915355 +0000 UTC m=+0.239858543 container start 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 08:59:10 compute-0 podman[260740]: 2025-11-22 08:59:10.688019453 +0000 UTC m=+0.246962661 container attach 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 22 08:59:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:11 compute-0 nova_compute[253661]: 2025-11-22 08:59:11.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]: {
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_id": 1,
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "type": "bluestore"
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     },
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_id": 0,
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "type": "bluestore"
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     },
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_id": 2,
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:         "type": "bluestore"
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]:     }
Nov 22 08:59:11 compute-0 friendly_hamilton[260757]: }
Nov 22 08:59:11 compute-0 systemd[1]: libpod-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Deactivated successfully.
Nov 22 08:59:11 compute-0 systemd[1]: libpod-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Consumed 1.108s CPU time.
Nov 22 08:59:11 compute-0 podman[260740]: 2025-11-22 08:59:11.783109319 +0000 UTC m=+1.342052507 container died 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962-merged.mount: Deactivated successfully.
Nov 22 08:59:11 compute-0 ceph-mon[75021]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:12 compute-0 podman[260740]: 2025-11-22 08:59:12.009918262 +0000 UTC m=+1.568861470 container remove 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 08:59:12 compute-0 systemd[1]: libpod-conmon-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Deactivated successfully.
Nov 22 08:59:12 compute-0 sudo[260632]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 08:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 08:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:12 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 22fb056c-8545-4376-ac08-3129cdbd53f4 does not exist
Nov 22 08:59:12 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 56d86b35-cf2d-4004-9006-36d09e2134dc does not exist
Nov 22 08:59:12 compute-0 sudo[260805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 08:59:12 compute-0 sudo[260805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:12 compute-0 sudo[260805]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 08:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 08:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:59:12 compute-0 sudo[260830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 08:59:12 compute-0 sudo[260830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 08:59:12 compute-0 sudo[260830]: pam_unix(sudo:session): session closed for user root
Nov 22 08:59:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.925720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952925855, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 788, "num_deletes": 256, "total_data_size": 1016885, "memory_usage": 1031336, "flush_reason": "Manual Compaction"}
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952953691, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1007885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18834, "largest_seqno": 19621, "table_properties": {"data_size": 1003838, "index_size": 1763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8516, "raw_average_key_size": 18, "raw_value_size": 995731, "raw_average_value_size": 2141, "num_data_blocks": 80, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801881, "oldest_key_time": 1763801881, "file_creation_time": 1763801952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 28061 microseconds, and 6065 cpu microseconds.
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.953790) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1007885 bytes OK
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.953836) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981386) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981447) EVENT_LOG_v1 {"time_micros": 1763801952981432, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981487) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1012889, prev total WAL file size 1012889, number of live WAL files 2.
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.982604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(984KB)], [44(6159KB)]
Nov 22 08:59:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952982785, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7314765, "oldest_snapshot_seqno": -1}
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4203 keys, 7164255 bytes, temperature: kUnknown
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953097390, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7164255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7135234, "index_size": 17391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104164, "raw_average_key_size": 24, "raw_value_size": 7058196, "raw_average_value_size": 1679, "num_data_blocks": 727, "num_entries": 4203, "num_filter_entries": 4203, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.097812) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7164255 bytes
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.119649) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.8 rd, 62.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(14.4) write-amplify(7.1) OK, records in: 4727, records dropped: 524 output_compression: NoCompression
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.119699) EVENT_LOG_v1 {"time_micros": 1763801953119682, "job": 22, "event": "compaction_finished", "compaction_time_micros": 114714, "compaction_time_cpu_micros": 37335, "output_level": 6, "num_output_files": 1, "total_output_size": 7164255, "num_input_records": 4727, "num_output_records": 4203, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953120105, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953121289, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.982376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 08:59:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 08:59:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 08:59:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 08:59:13 compute-0 ceph-mon[75021]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:15 compute-0 ceph-mon[75021]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:17 compute-0 ceph-mon[75021]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:19 compute-0 ceph-mon[75021]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:21 compute-0 ceph-mon[75021]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:23 compute-0 ceph-mon[75021]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:24 compute-0 ceph-mon[75021]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:25 compute-0 podman[260856]: 2025-11-22 08:59:25.401170806 +0000 UTC m=+0.081048107 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:59:25 compute-0 podman[260855]: 2025-11-22 08:59:25.419468099 +0000 UTC m=+0.101431229 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 08:59:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:27 compute-0 podman[260892]: 2025-11-22 08:59:27.41835477 +0000 UTC m=+0.111422627 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:59:27 compute-0 ceph-mon[75021]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.941 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.941 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:59:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:28 compute-0 ceph-mon[75021]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:31 compute-0 ceph-mon[75021]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:33 compute-0 ceph-mon[75021]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:36 compute-0 ceph-mon[75021]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:37 compute-0 ceph-mon[75021]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:39 compute-0 ceph-mon[75021]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:41 compute-0 ceph-mon[75021]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:43 compute-0 ceph-mon[75021]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:45 compute-0 ceph-mon[75021]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:47 compute-0 ceph-mon[75021]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:49 compute-0 ceph-mon[75021]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:51 compute-0 ceph-mon[75021]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:59:52
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'vms']
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 08:59:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 08:59:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 08:59:54 compute-0 ceph-mon[75021]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:55 compute-0 ceph-mon[75021]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:56 compute-0 podman[260920]: 2025-11-22 08:59:56.40618534 +0000 UTC m=+0.084073940 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:59:56 compute-0 podman[260919]: 2025-11-22 08:59:56.424548274 +0000 UTC m=+0.105121337 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 22 08:59:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:57 compute-0 ceph-mon[75021]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 08:59:58 compute-0 podman[260958]: 2025-11-22 08:59:58.415279417 +0000 UTC m=+0.102376691 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:59:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 08:59:59 compute-0 ceph-mon[75021]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:01 compute-0 ceph-mon[75021]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:00:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:03 compute-0 ceph-mon[75021]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:03 compute-0 nova_compute[253661]: 2025-11-22 09:00:03.267 253665 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.76 sec
Nov 22 09:00:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:05 compute-0 ceph-mon[75021]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:00:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134528289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.746 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.930 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:05 compute-0 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.270 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1134528289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:00:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:00:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160554198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.948 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.956 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.969 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.970 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:00:06 compute-0 nova_compute[253661]: 2025-11-22 09:00:06.970 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:07 compute-0 ceph-mon[75021]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3160554198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:00:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.972 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.990 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:08 compute-0 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:00:09 compute-0 nova_compute[253661]: 2025-11-22 09:00:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:09 compute-0 ceph-mon[75021]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:12 compute-0 sudo[261027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:12 compute-0 sudo[261027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:12 compute-0 sudo[261027]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:12 compute-0 sudo[261052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:00:12 compute-0 sudo[261052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:12 compute-0 sudo[261052]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:12 compute-0 sudo[261077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:12 compute-0 sudo[261077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:12 compute-0 sudo[261077]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:12 compute-0 sudo[261102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:00:12 compute-0 sudo[261102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:00:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:00:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:00:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:00:12 compute-0 ceph-mon[75021]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:00:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:00:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:13 compute-0 sudo[261102]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1f8fdf84-5ce7-4078-a4fd-63a2e8c739c8 does not exist
Nov 22 09:00:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 052239fe-87dd-4d39-acd9-1b9320dda08f does not exist
Nov 22 09:00:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 164c87a6-9209-4e92-af0b-7069362472fd does not exist
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:00:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:00:13 compute-0 sudo[261158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:13 compute-0 sudo[261158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:13 compute-0 sudo[261158]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:13 compute-0 sudo[261183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:00:13 compute-0 sudo[261183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:13 compute-0 sudo[261183]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:13 compute-0 sudo[261208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:13 compute-0 sudo[261208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:13 compute-0 sudo[261208]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:13 compute-0 sudo[261233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:00:13 compute-0 sudo[261233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:13 compute-0 ceph-mon[75021]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:00:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:13.914001468 +0000 UTC m=+0.025397312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.099545065 +0000 UTC m=+0.210940899 container create 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:00:14 compute-0 systemd[1]: Started libpod-conmon-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope.
Nov 22 09:00:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.454491829 +0000 UTC m=+0.565887703 container init 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.465874458 +0000 UTC m=+0.577270292 container start 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:00:14 compute-0 systemd[1]: libpod-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope: Deactivated successfully.
Nov 22 09:00:14 compute-0 crazy_tu[261315]: 167 167
Nov 22 09:00:14 compute-0 conmon[261315]: conmon 5cc45acd968f6699c6fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope/container/memory.events
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.504594324 +0000 UTC m=+0.615990158 container attach 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.505225049 +0000 UTC m=+0.616620893 container died 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-466be050fa72a8ecec9ca44da0e889d777e0b29f9107a2b40ec91d6e103f384f-merged.mount: Deactivated successfully.
Nov 22 09:00:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:14 compute-0 podman[261298]: 2025-11-22 09:00:14.962763039 +0000 UTC m=+1.074158903 container remove 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:00:15 compute-0 systemd[1]: libpod-conmon-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope: Deactivated successfully.
Nov 22 09:00:15 compute-0 podman[261339]: 2025-11-22 09:00:15.129743767 +0000 UTC m=+0.029478938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:15 compute-0 podman[261339]: 2025-11-22 09:00:15.279270114 +0000 UTC m=+0.179005275 container create deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:00:15 compute-0 ceph-mon[75021]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:15 compute-0 systemd[1]: Started libpod-conmon-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope.
Nov 22 09:00:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:15 compute-0 podman[261339]: 2025-11-22 09:00:15.689570738 +0000 UTC m=+0.589305969 container init deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:15 compute-0 podman[261339]: 2025-11-22 09:00:15.703605066 +0000 UTC m=+0.603340247 container start deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:00:15 compute-0 podman[261339]: 2025-11-22 09:00:15.78796213 +0000 UTC m=+0.687697281 container attach deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:00:16 compute-0 clever_grothendieck[261355]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:00:16 compute-0 clever_grothendieck[261355]: --> relative data size: 1.0
Nov 22 09:00:16 compute-0 clever_grothendieck[261355]: --> All data devices are unavailable
Nov 22 09:00:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:16 compute-0 systemd[1]: libpod-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Deactivated successfully.
Nov 22 09:00:16 compute-0 podman[261339]: 2025-11-22 09:00:16.833266674 +0000 UTC m=+1.733001845 container died deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:00:16 compute-0 systemd[1]: libpod-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Consumed 1.088s CPU time.
Nov 22 09:00:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:21 compute-0 ceph-mon[75021]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:21 compute-0 ceph-mon[75021]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:21 compute-0 ceph-mon[75021]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827-merged.mount: Deactivated successfully.
Nov 22 09:00:22 compute-0 podman[261339]: 2025-11-22 09:00:22.643715513 +0000 UTC m=+7.543450664 container remove deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:00:22 compute-0 sudo[261233]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:22 compute-0 sudo[261398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:22 compute-0 systemd[1]: libpod-conmon-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Deactivated successfully.
Nov 22 09:00:22 compute-0 sudo[261398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:22 compute-0 sudo[261398]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:22 compute-0 sudo[261423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:00:22 compute-0 sudo[261423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:22 compute-0 sudo[261423]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:22 compute-0 sudo[261448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:22 compute-0 sudo[261448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:22 compute-0 sudo[261448]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:22 compute-0 sudo[261473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:00:22 compute-0 sudo[261473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:23 compute-0 ceph-mon[75021]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:23 compute-0 podman[261537]: 2025-11-22 09:00:23.295032627 +0000 UTC m=+0.025739599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:23 compute-0 podman[261537]: 2025-11-22 09:00:23.546601338 +0000 UTC m=+0.277308310 container create fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:00:23 compute-0 systemd[1]: Started libpod-conmon-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope.
Nov 22 09:00:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:23 compute-0 podman[261537]: 2025-11-22 09:00:23.962176656 +0000 UTC m=+0.692883628 container init fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:23 compute-0 podman[261537]: 2025-11-22 09:00:23.970952574 +0000 UTC m=+0.701659526 container start fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:00:23 compute-0 practical_chaplygin[261553]: 167 167
Nov 22 09:00:23 compute-0 systemd[1]: libpod-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope: Deactivated successfully.
Nov 22 09:00:24 compute-0 podman[261537]: 2025-11-22 09:00:24.205543203 +0000 UTC m=+0.936250175 container attach fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:00:24 compute-0 podman[261537]: 2025-11-22 09:00:24.207172238 +0000 UTC m=+0.937879270 container died fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:00:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f258035da3f721c681202928e2f0ad5ba1464c29ef660589ddf3ee13b6057a-merged.mount: Deactivated successfully.
Nov 22 09:00:25 compute-0 podman[261537]: 2025-11-22 09:00:25.405555357 +0000 UTC m=+2.136262309 container remove fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 09:00:25 compute-0 systemd[1]: libpod-conmon-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope: Deactivated successfully.
Nov 22 09:00:25 compute-0 podman[261576]: 2025-11-22 09:00:25.614536773 +0000 UTC m=+0.028692982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:26 compute-0 podman[261576]: 2025-11-22 09:00:26.32292271 +0000 UTC m=+0.737078869 container create a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:00:26 compute-0 ceph-mon[75021]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:26 compute-0 systemd[1]: Started libpod-conmon-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope.
Nov 22 09:00:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:26 compute-0 podman[261576]: 2025-11-22 09:00:26.930191767 +0000 UTC m=+1.344347916 container init a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:00:26 compute-0 podman[261576]: 2025-11-22 09:00:26.940048346 +0000 UTC m=+1.354204465 container start a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:00:27 compute-0 podman[261576]: 2025-11-22 09:00:27.095213767 +0000 UTC m=+1.509369916 container attach a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:00:27 compute-0 podman[261595]: 2025-11-22 09:00:27.217642172 +0000 UTC m=+0.607521264 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 09:00:27 compute-0 podman[261596]: 2025-11-22 09:00:27.322608274 +0000 UTC m=+0.712113118 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:27 compute-0 ceph-mon[75021]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]: {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     "0": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "devices": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "/dev/loop3"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             ],
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_name": "ceph_lv0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_size": "21470642176",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "name": "ceph_lv0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "tags": {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_name": "ceph",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.crush_device_class": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.encrypted": "0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_id": "0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.vdo": "0"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             },
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "vg_name": "ceph_vg0"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         }
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     ],
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     "1": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "devices": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "/dev/loop4"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             ],
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_name": "ceph_lv1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_size": "21470642176",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "name": "ceph_lv1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "tags": {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_name": "ceph",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.crush_device_class": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.encrypted": "0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_id": "1",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.vdo": "0"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             },
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "vg_name": "ceph_vg1"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         }
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     ],
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     "2": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "devices": [
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "/dev/loop5"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             ],
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_name": "ceph_lv2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_size": "21470642176",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "name": "ceph_lv2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "tags": {
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.cluster_name": "ceph",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.crush_device_class": "",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.encrypted": "0",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osd_id": "2",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:                 "ceph.vdo": "0"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             },
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "type": "block",
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:             "vg_name": "ceph_vg2"
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:         }
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]:     ]
Nov 22 09:00:27 compute-0 eloquent_poincare[261593]: }
Nov 22 09:00:27 compute-0 systemd[1]: libpod-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope: Deactivated successfully.
Nov 22 09:00:27 compute-0 podman[261641]: 2025-11-22 09:00:27.838857194 +0000 UTC m=+0.026742189 container died a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:00:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01-merged.mount: Deactivated successfully.
Nov 22 09:00:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:29 compute-0 ceph-mon[75021]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:30 compute-0 podman[261641]: 2025-11-22 09:00:30.040342812 +0000 UTC m=+2.228227777 container remove a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:00:30 compute-0 systemd[1]: libpod-conmon-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope: Deactivated successfully.
Nov 22 09:00:30 compute-0 sudo[261473]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:30 compute-0 podman[261656]: 2025-11-22 09:00:30.123431829 +0000 UTC m=+1.562286503 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:00:30 compute-0 sudo[261680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:30 compute-0 sudo[261680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:30 compute-0 sudo[261680]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:30 compute-0 sudo[261708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:00:30 compute-0 sudo[261708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:30 compute-0 sudo[261708]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:30 compute-0 sudo[261733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:30 compute-0 sudo[261733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:30 compute-0 sudo[261733]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:30 compute-0 sudo[261758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:00:30 compute-0 sudo[261758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:30 compute-0 podman[261822]: 2025-11-22 09:00:30.693037334 +0000 UTC m=+0.024664686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:30 compute-0 podman[261822]: 2025-11-22 09:00:30.821753603 +0000 UTC m=+0.153380925 container create 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:00:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:30 compute-0 systemd[1]: Started libpod-conmon-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope.
Nov 22 09:00:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:31 compute-0 podman[261822]: 2025-11-22 09:00:31.207578508 +0000 UTC m=+0.539205850 container init 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:00:31 compute-0 podman[261822]: 2025-11-22 09:00:31.222101548 +0000 UTC m=+0.553728890 container start 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:00:31 compute-0 zealous_khorana[261838]: 167 167
Nov 22 09:00:31 compute-0 systemd[1]: libpod-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope: Deactivated successfully.
Nov 22 09:00:31 compute-0 podman[261822]: 2025-11-22 09:00:31.362586386 +0000 UTC m=+0.694213738 container attach 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:31 compute-0 podman[261822]: 2025-11-22 09:00:31.364038717 +0000 UTC m=+0.695666039 container died 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:00:31 compute-0 ceph-mon[75021]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-65640d35b213629dc4315c12a0196083d9f97c4a47460a66f02bcdd4f27586c6-merged.mount: Deactivated successfully.
Nov 22 09:00:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:31 compute-0 podman[261822]: 2025-11-22 09:00:31.954215559 +0000 UTC m=+1.285842881 container remove 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:00:31 compute-0 systemd[1]: libpod-conmon-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope: Deactivated successfully.
Nov 22 09:00:32 compute-0 podman[261862]: 2025-11-22 09:00:32.114252754 +0000 UTC m=+0.023669175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:00:32 compute-0 podman[261862]: 2025-11-22 09:00:32.491060468 +0000 UTC m=+0.400476869 container create 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:32 compute-0 systemd[1]: Started libpod-conmon-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope.
Nov 22 09:00:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:00:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:32 compute-0 podman[261862]: 2025-11-22 09:00:32.896986952 +0000 UTC m=+0.806403393 container init 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:00:32 compute-0 podman[261862]: 2025-11-22 09:00:32.905212207 +0000 UTC m=+0.814628608 container start 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:00:33 compute-0 podman[261862]: 2025-11-22 09:00:33.300907913 +0000 UTC m=+1.210324344 container attach 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:00:33 compute-0 ceph-mon[75021]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]: {
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_id": 1,
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "type": "bluestore"
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     },
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_id": 0,
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "type": "bluestore"
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     },
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_id": 2,
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:         "type": "bluestore"
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]:     }
Nov 22 09:00:33 compute-0 laughing_northcutt[261878]: }
Nov 22 09:00:34 compute-0 systemd[1]: libpod-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Deactivated successfully.
Nov 22 09:00:34 compute-0 podman[261862]: 2025-11-22 09:00:34.02005469 +0000 UTC m=+1.929471091 container died 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:00:34 compute-0 systemd[1]: libpod-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Consumed 1.118s CPU time.
Nov 22 09:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5-merged.mount: Deactivated successfully.
Nov 22 09:00:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:35 compute-0 podman[261862]: 2025-11-22 09:00:35.516174172 +0000 UTC m=+3.425590613 container remove 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:00:35 compute-0 sudo[261758]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:00:35 compute-0 systemd[1]: libpod-conmon-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Deactivated successfully.
Nov 22 09:00:35 compute-0 ceph-mon[75021]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:00:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d80e1d42-7273-4dbb-b3d0-a13c192abef6 does not exist
Nov 22 09:00:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c0ce25ce-50c8-4018-b080-4d4fea127c0f does not exist
Nov 22 09:00:36 compute-0 sudo[261923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:00:36 compute-0 sudo[261923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:36 compute-0 sudo[261923]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:36 compute-0 sudo[261948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:00:36 compute-0 sudo[261948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:00:36 compute-0 sudo[261948]: pam_unix(sudo:session): session closed for user root
Nov 22 09:00:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:00:37 compute-0 ceph-mon[75021]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:39 compute-0 ceph-mon[75021]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:41 compute-0 ceph-mon[75021]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:43 compute-0 ceph-mon[75021]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:45 compute-0 ceph-mon[75021]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:47 compute-0 ceph-mon[75021]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:50 compute-0 ceph-mon[75021]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:51 compute-0 ceph-mon[75021]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:00:52
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms']
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:00:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:53 compute-0 ceph-mon[75021]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:00:53 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:00:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:00:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:56 compute-0 ceph-mon[75021]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:00:57 compute-0 ceph-mon[75021]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:57 compute-0 podman[261974]: 2025-11-22 09:00:57.416034974 +0000 UTC m=+0.102185484 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:00:57 compute-0 podman[261993]: 2025-11-22 09:00:57.539278625 +0000 UTC m=+0.086697285 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:00:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:00:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4506 writes, 20K keys, 4506 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4506 writes, 4506 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1277 writes, 5838 keys, 1277 commit groups, 1.0 writes per commit group, ingest: 8.41 MB, 0.01 MB/s
                                           Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.5      0.36              0.07        11    0.032       0      0       0.0       0.0
                                             L6      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    103.8     86.5      0.81              0.21        10    0.081     43K   5161       0.0       0.0
                                            Sum      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     72.1     78.0      1.16              0.27        21    0.055     43K   5161       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2     67.1     66.5      0.63              0.14        10    0.063     23K   2975       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    103.8     86.5      0.81              0.21        10    0.081     43K   5161       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     58.7      0.35              0.07        10    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 6.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000146 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(409,5.97 MB,1.96262%) FilterBlock(22,131.48 KB,0.0422377%) IndexBlock(22,239.73 KB,0.0770117%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:00:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:00:59 compute-0 ceph-mon[75021]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:00 compute-0 podman[262015]: 2025-11-22 09:01:00.40878786 +0000 UTC m=+0.107023147 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 09:01:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:01 compute-0 ceph-mon[75021]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:01 compute-0 CROND[262042]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 09:01:01 compute-0 run-parts[262045]: (/etc/cron.hourly) starting 0anacron
Nov 22 09:01:01 compute-0 run-parts[262051]: (/etc/cron.hourly) finished 0anacron
Nov 22 09:01:01 compute-0 CROND[262041]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:01:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:03 compute-0 ceph-mon[75021]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:05 compute-0 ceph-mon[75021]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:07 compute-0 ceph-mon[75021]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:01:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832247781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:01:07 compute-0 nova_compute[253661]: 2025-11-22 09:01:07.827 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.014 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.016 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5192MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.017 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.073 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.074 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.090 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:01:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102044728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.640 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.646 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.662 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.666 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:01:08 compute-0 nova_compute[253661]: 2025-11-22 09:01:08.666 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1832247781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:01:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.668 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.669 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.670 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.670 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.690 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.690 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 nova_compute[253661]: 2025-11-22 09:01:09.691 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4102044728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:01:09 compute-0 ceph-mon[75021]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:11 compute-0 ceph-mon[75021]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:11 compute-0 nova_compute[253661]: 2025-11-22 09:01:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:11 compute-0 nova_compute[253661]: 2025-11-22 09:01:11.249 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:01:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:01:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:01:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:01:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:01:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:01:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:14 compute-0 ceph-mon[75021]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:15 compute-0 ceph-mon[75021]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:17 compute-0 ceph-mon[75021]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:19 compute-0 ceph-mon[75021]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:21 compute-0 ceph-mon[75021]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:23 compute-0 ceph-mon[75021]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:25 compute-0 ceph-mon[75021]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:27 compute-0 ceph-mon[75021]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.944 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.944 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:28 compute-0 podman[262097]: 2025-11-22 09:01:28.371513938 +0000 UTC m=+0.062054392 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:01:28 compute-0 podman[262096]: 2025-11-22 09:01:28.390736266 +0000 UTC m=+0.083031587 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:01:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:29 compute-0 ceph-mon[75021]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:31 compute-0 podman[262134]: 2025-11-22 09:01:31.405650264 +0000 UTC m=+0.092983070 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:01:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:31 compute-0 ceph-mon[75021]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:32 compute-0 ceph-mon[75021]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:36 compute-0 ceph-mon[75021]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:36 compute-0 sudo[262160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:36 compute-0 sudo[262160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:36 compute-0 sudo[262160]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:36 compute-0 sudo[262185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:01:36 compute-0 sudo[262185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:36 compute-0 sudo[262185]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:36 compute-0 sudo[262210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:36 compute-0 sudo[262210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:36 compute-0 sudo[262210]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:36 compute-0 sudo[262235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:01:36 compute-0 sudo[262235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:37 compute-0 ceph-mon[75021]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:37 compute-0 sudo[262235]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 32c5d87c-d364-42ce-9fcb-9276d09dc427 does not exist
Nov 22 09:01:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 370a6ea7-0ad8-4efd-a943-2063f934b86d does not exist
Nov 22 09:01:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d2eb7618-43c9-4f46-ae9d-c4df8214a684 does not exist
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:01:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:01:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:01:37 compute-0 sudo[262290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:37 compute-0 sudo[262290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:37 compute-0 sudo[262290]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:37 compute-0 sudo[262315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:01:37 compute-0 sudo[262315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:37 compute-0 sudo[262315]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:37 compute-0 sudo[262340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:37 compute-0 sudo[262340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:37 compute-0 sudo[262340]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:37 compute-0 sudo[262365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:01:37 compute-0 sudo[262365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:38 compute-0 podman[262432]: 2025-11-22 09:01:38.160540221 +0000 UTC m=+0.020789414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:38 compute-0 podman[262432]: 2025-11-22 09:01:38.633170234 +0000 UTC m=+0.493419397 container create 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:01:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:01:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:38 compute-0 systemd[1]: Started libpod-conmon-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope.
Nov 22 09:01:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:39 compute-0 podman[262432]: 2025-11-22 09:01:39.198197921 +0000 UTC m=+1.058447114 container init 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:01:39 compute-0 podman[262432]: 2025-11-22 09:01:39.210463322 +0000 UTC m=+1.070712485 container start 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:01:39 compute-0 systemd[1]: libpod-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope: Deactivated successfully.
Nov 22 09:01:39 compute-0 nervous_wu[262449]: 167 167
Nov 22 09:01:39 compute-0 conmon[262449]: conmon 98e4a48852128f8990e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope/container/memory.events
Nov 22 09:01:39 compute-0 podman[262432]: 2025-11-22 09:01:39.337194327 +0000 UTC m=+1.197443500 container attach 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:01:39 compute-0 podman[262432]: 2025-11-22 09:01:39.338433834 +0000 UTC m=+1.198682997 container died 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-292b8aa43a20628a9365107f1211ecd7e18123d03dd8f1000f9d0ae19079d264-merged.mount: Deactivated successfully.
Nov 22 09:01:39 compute-0 podman[262432]: 2025-11-22 09:01:39.650258617 +0000 UTC m=+1.510507770 container remove 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:01:39 compute-0 systemd[1]: libpod-conmon-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope: Deactivated successfully.
Nov 22 09:01:39 compute-0 ceph-mon[75021]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:39 compute-0 podman[262473]: 2025-11-22 09:01:39.79756362 +0000 UTC m=+0.025323069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:39 compute-0 podman[262473]: 2025-11-22 09:01:39.904078326 +0000 UTC m=+0.131837755 container create 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:01:40 compute-0 systemd[1]: Started libpod-conmon-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope.
Nov 22 09:01:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:40 compute-0 podman[262473]: 2025-11-22 09:01:40.129701366 +0000 UTC m=+0.357460805 container init 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:01:40 compute-0 podman[262473]: 2025-11-22 09:01:40.137722996 +0000 UTC m=+0.365482425 container start 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:01:40 compute-0 podman[262473]: 2025-11-22 09:01:40.162688077 +0000 UTC m=+0.390447526 container attach 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:01:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:41 compute-0 ceph-mon[75021]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:41 compute-0 gracious_tharp[262490]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:01:41 compute-0 gracious_tharp[262490]: --> relative data size: 1.0
Nov 22 09:01:41 compute-0 gracious_tharp[262490]: --> All data devices are unavailable
Nov 22 09:01:41 compute-0 systemd[1]: libpod-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Deactivated successfully.
Nov 22 09:01:41 compute-0 systemd[1]: libpod-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Consumed 1.184s CPU time.
Nov 22 09:01:41 compute-0 podman[262473]: 2025-11-22 09:01:41.368540945 +0000 UTC m=+1.596300374 container died 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694-merged.mount: Deactivated successfully.
Nov 22 09:01:41 compute-0 podman[262473]: 2025-11-22 09:01:41.687609822 +0000 UTC m=+1.915369271 container remove 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:01:41 compute-0 systemd[1]: libpod-conmon-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Deactivated successfully.
Nov 22 09:01:41 compute-0 sudo[262365]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:41 compute-0 sudo[262531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:41 compute-0 sudo[262531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:41 compute-0 sudo[262531]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:41 compute-0 sudo[262556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:01:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:41 compute-0 sudo[262556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:41 compute-0 sudo[262556]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:41 compute-0 sudo[262581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:41 compute-0 sudo[262581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:41 compute-0 sudo[262581]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:42 compute-0 sudo[262606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:01:42 compute-0 sudo[262606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.463881794 +0000 UTC m=+0.107131410 container create 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.387266714 +0000 UTC m=+0.030516420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:42 compute-0 systemd[1]: Started libpod-conmon-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope.
Nov 22 09:01:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.670283174 +0000 UTC m=+0.313532820 container init 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.67857301 +0000 UTC m=+0.321822626 container start 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:01:42 compute-0 upbeat_villani[262688]: 167 167
Nov 22 09:01:42 compute-0 systemd[1]: libpod-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope: Deactivated successfully.
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.741437507 +0000 UTC m=+0.384687173 container attach 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:01:42 compute-0 podman[262672]: 2025-11-22 09:01:42.742949979 +0000 UTC m=+0.386199635 container died 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 09:01:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf976185d1b1f7502c506be3d827a161805ebe46ec7bbc12e3f7456e62cc312-merged.mount: Deactivated successfully.
Nov 22 09:01:43 compute-0 ceph-mon[75021]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:43 compute-0 podman[262672]: 2025-11-22 09:01:43.151601881 +0000 UTC m=+0.794851497 container remove 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:01:43 compute-0 systemd[1]: libpod-conmon-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope: Deactivated successfully.
Nov 22 09:01:43 compute-0 podman[262712]: 2025-11-22 09:01:43.413531093 +0000 UTC m=+0.103943192 container create cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:01:43 compute-0 podman[262712]: 2025-11-22 09:01:43.339780353 +0000 UTC m=+0.030192522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:43 compute-0 systemd[1]: Started libpod-conmon-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope.
Nov 22 09:01:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:43 compute-0 podman[262712]: 2025-11-22 09:01:43.548093185 +0000 UTC m=+0.238505314 container init cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:01:43 compute-0 podman[262712]: 2025-11-22 09:01:43.554818888 +0000 UTC m=+0.245230987 container start cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:01:43 compute-0 podman[262712]: 2025-11-22 09:01:43.581034846 +0000 UTC m=+0.271446955 container attach cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]: {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     "0": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "devices": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "/dev/loop3"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             ],
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_name": "ceph_lv0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_size": "21470642176",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "name": "ceph_lv0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "tags": {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_name": "ceph",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.crush_device_class": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.encrypted": "0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_id": "0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.vdo": "0"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             },
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "vg_name": "ceph_vg0"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         }
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     ],
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     "1": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "devices": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "/dev/loop4"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             ],
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_name": "ceph_lv1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_size": "21470642176",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "name": "ceph_lv1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "tags": {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_name": "ceph",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.crush_device_class": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.encrypted": "0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_id": "1",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.vdo": "0"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             },
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "vg_name": "ceph_vg1"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         }
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     ],
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     "2": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "devices": [
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "/dev/loop5"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             ],
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_name": "ceph_lv2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_size": "21470642176",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "name": "ceph_lv2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "tags": {
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.cluster_name": "ceph",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.crush_device_class": "",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.encrypted": "0",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osd_id": "2",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:                 "ceph.vdo": "0"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             },
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "type": "block",
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:             "vg_name": "ceph_vg2"
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:         }
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]:     ]
Nov 22 09:01:44 compute-0 unruffled_satoshi[262728]: }
Nov 22 09:01:44 compute-0 systemd[1]: libpod-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope: Deactivated successfully.
Nov 22 09:01:44 compute-0 podman[262712]: 2025-11-22 09:01:44.386680832 +0000 UTC m=+1.077092951 container died cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 09:01:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0-merged.mount: Deactivated successfully.
Nov 22 09:01:45 compute-0 ceph-mon[75021]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:45 compute-0 podman[262712]: 2025-11-22 09:01:45.629030957 +0000 UTC m=+2.319443056 container remove cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:01:45 compute-0 systemd[1]: libpod-conmon-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope: Deactivated successfully.
Nov 22 09:01:45 compute-0 sudo[262606]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:45 compute-0 sudo[262751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:45 compute-0 sudo[262751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:45 compute-0 sudo[262751]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:45 compute-0 sudo[262776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:01:45 compute-0 sudo[262776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:45 compute-0 sudo[262776]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:45 compute-0 sudo[262801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:45 compute-0 sudo[262801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:45 compute-0 sudo[262801]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:45 compute-0 sudo[262826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:01:45 compute-0 sudo[262826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.333274876 +0000 UTC m=+0.028565348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.595518725 +0000 UTC m=+0.290809197 container create 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:01:46 compute-0 systemd[1]: Started libpod-conmon-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope.
Nov 22 09:01:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.834598049 +0000 UTC m=+0.529888501 container init 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.842450117 +0000 UTC m=+0.537740549 container start 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:01:46 compute-0 mystifying_edison[262909]: 167 167
Nov 22 09:01:46 compute-0 systemd[1]: libpod-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope: Deactivated successfully.
Nov 22 09:01:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.878926032 +0000 UTC m=+0.574216464 container attach 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:01:46 compute-0 podman[262892]: 2025-11-22 09:01:46.880140138 +0000 UTC m=+0.575430570 container died 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:01:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:47 compute-0 ceph-mon[75021]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a90cbefe5fd8b2fd0ca2e100c77dd25ef23334f2a79ebba63b888b42a122f131-merged.mount: Deactivated successfully.
Nov 22 09:01:47 compute-0 podman[262892]: 2025-11-22 09:01:47.31039954 +0000 UTC m=+1.005689992 container remove 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:01:47 compute-0 systemd[1]: libpod-conmon-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope: Deactivated successfully.
Nov 22 09:01:47 compute-0 podman[262931]: 2025-11-22 09:01:47.516218838 +0000 UTC m=+0.062586893 container create 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:01:47 compute-0 systemd[1]: Started libpod-conmon-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope.
Nov 22 09:01:47 compute-0 podman[262931]: 2025-11-22 09:01:47.482599333 +0000 UTC m=+0.028967438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:01:47 compute-0 podman[262931]: 2025-11-22 09:01:47.613655721 +0000 UTC m=+0.160023806 container init 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:01:47 compute-0 podman[262931]: 2025-11-22 09:01:47.623157702 +0000 UTC m=+0.169525767 container start 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:01:47 compute-0 podman[262931]: 2025-11-22 09:01:47.638782685 +0000 UTC m=+0.185150780 container attach 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:01:48 compute-0 nifty_gauss[262948]: {
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_id": 1,
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "type": "bluestore"
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     },
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_id": 0,
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "type": "bluestore"
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     },
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_id": 2,
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:         "type": "bluestore"
Nov 22 09:01:48 compute-0 nifty_gauss[262948]:     }
Nov 22 09:01:48 compute-0 nifty_gauss[262948]: }
Nov 22 09:01:48 compute-0 systemd[1]: libpod-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Deactivated successfully.
Nov 22 09:01:48 compute-0 systemd[1]: libpod-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Consumed 1.064s CPU time.
Nov 22 09:01:48 compute-0 podman[262931]: 2025-11-22 09:01:48.682031515 +0000 UTC m=+1.228399600 container died 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21-merged.mount: Deactivated successfully.
Nov 22 09:01:48 compute-0 podman[262931]: 2025-11-22 09:01:48.799738559 +0000 UTC m=+1.346106624 container remove 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:01:48 compute-0 systemd[1]: libpod-conmon-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Deactivated successfully.
Nov 22 09:01:48 compute-0 sudo[262826]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:01:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:01:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8755f0d7-7072-465b-bff1-f15f25ebe547 does not exist
Nov 22 09:01:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a0969d3e-570e-4444-b1f4-2529fd6c925a does not exist
Nov 22 09:01:48 compute-0 sudo[262995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:01:48 compute-0 sudo[262995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:48 compute-0 sudo[262995]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:49 compute-0 sudo[263020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:01:49 compute-0 sudo[263020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:01:49 compute-0 sudo[263020]: pam_unix(sudo:session): session closed for user root
Nov 22 09:01:49 compute-0 ceph-mon[75021]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:01:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:51 compute-0 ceph-mon[75021]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:01:52
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'vms', 'volumes']
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:01:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:53 compute-0 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 09:01:53 compute-0 ceph-mon[75021]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:01:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:55 compute-0 ceph-mon[75021]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:01:57 compute-0 ceph-mon[75021]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:01:59 compute-0 podman[263046]: 2025-11-22 09:01:59.422144976 +0000 UTC m=+0.090836665 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 09:01:59 compute-0 podman[263045]: 2025-11-22 09:01:59.440734208 +0000 UTC m=+0.117627657 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:01:59 compute-0 ceph-mon[75021]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:00 compute-0 ceph-mon[75021]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:00 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 22 09:02:00 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:00.989269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:02:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 22 09:02:00 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802120989382, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 251, "total_data_size": 2465243, "memory_usage": 2504544, "flush_reason": "Manual Compaction"}
Nov 22 09:02:00 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121021561, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2431358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19622, "largest_seqno": 21142, "table_properties": {"data_size": 2424235, "index_size": 4194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14519, "raw_average_key_size": 19, "raw_value_size": 2410050, "raw_average_value_size": 3292, "num_data_blocks": 191, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801953, "oldest_key_time": 1763801953, "file_creation_time": 1763802120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 32419 microseconds, and 11362 cpu microseconds.
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.021684) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2431358 bytes OK
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.021723) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024120) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024151) EVENT_LOG_v1 {"time_micros": 1763802121024141, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024183) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2458593, prev total WAL file size 2458593, number of live WAL files 2.
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.025607) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2374KB)], [47(6996KB)]
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121025700, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9595613, "oldest_snapshot_seqno": -1}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4421 keys, 7827837 bytes, temperature: kUnknown
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121108021, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7827837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7796815, "index_size": 18880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109256, "raw_average_key_size": 24, "raw_value_size": 7715314, "raw_average_value_size": 1745, "num_data_blocks": 790, "num_entries": 4421, "num_filter_entries": 4421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.108387) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7827837 bytes
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.112686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.4 rd, 95.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4935, records dropped: 514 output_compression: NoCompression
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.112709) EVENT_LOG_v1 {"time_micros": 1763802121112698, "job": 24, "event": "compaction_finished", "compaction_time_micros": 82422, "compaction_time_cpu_micros": 33866, "output_level": 6, "num_output_files": 1, "total_output_size": 7827837, "num_input_records": 4935, "num_output_records": 4421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121113337, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121114997, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.025416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:02:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:02 compute-0 podman[263084]: 2025-11-22 09:02:02.41429009 +0000 UTC m=+0.099165688 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:02:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:03 compute-0 ceph-mon[75021]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:05 compute-0 ceph-mon[75021]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:07 compute-0 ceph-mon[75021]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:02:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240995301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.740 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:02:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.897 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.898 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5193MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.898 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.899 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.954 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.955 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:02:08 compute-0 nova_compute[253661]: 2025-11-22 09:02:08.971 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2240995301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:02:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:02:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879418390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:02:09 compute-0 nova_compute[253661]: 2025-11-22 09:02:09.545 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:02:09 compute-0 nova_compute[253661]: 2025-11-22 09:02:09.552 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:02:09 compute-0 nova_compute[253661]: 2025-11-22 09:02:09.563 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:02:09 compute-0 nova_compute[253661]: 2025-11-22 09:02:09.565 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:02:09 compute-0 nova_compute[253661]: 2025-11-22 09:02:09.565 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:02:10 compute-0 ceph-mon[75021]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3879418390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.566 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.566 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.567 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.567 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.580 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.581 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:11 compute-0 ceph-mon[75021]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:02:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:02:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:02:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:02:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:02:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:02:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:13 compute-0 nova_compute[253661]: 2025-11-22 09:02:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:13 compute-0 ceph-mon[75021]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:15 compute-0 ceph-mon[75021]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:17 compute-0 ceph-mon[75021]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:19 compute-0 ceph-mon[75021]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:20 compute-0 ceph-mon[75021]: pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:23 compute-0 ceph-mon[75021]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:25 compute-0 ceph-mon[75021]: pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:02:27 compute-0 ceph-mon[75021]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 22 09:02:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 22 09:02:29 compute-0 ceph-mon[75021]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 22 09:02:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 22 09:02:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 22 09:02:30 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 22 09:02:30 compute-0 podman[263155]: 2025-11-22 09:02:30.360484835 +0000 UTC m=+0.056541722 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:02:30 compute-0 podman[263156]: 2025-11-22 09:02:30.362471333 +0000 UTC m=+0.054691056 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:02:30 compute-0 ceph-mon[75021]: osdmap e144: 3 total, 3 up, 3 in
Nov 22 09:02:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:31 compute-0 ceph-mon[75021]: osdmap e145: 3 total, 3 up, 3 in
Nov 22 09:02:31 compute-0 ceph-mon[75021]: pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 22 09:02:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 22 09:02:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 22 09:02:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 8.4 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 MiB/s wr, 6 op/s
Nov 22 09:02:33 compute-0 podman[263190]: 2025-11-22 09:02:33.407377034 +0000 UTC m=+0.094790553 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 09:02:33 compute-0 ceph-mon[75021]: osdmap e146: 3 total, 3 up, 3 in
Nov 22 09:02:33 compute-0 ceph-mon[75021]: pgmap v1057: 305 pgs: 305 active+clean; 8.4 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 MiB/s wr, 6 op/s
Nov 22 09:02:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 22 09:02:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 22 09:02:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 22 09:02:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 16 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Nov 22 09:02:35 compute-0 ceph-mon[75021]: osdmap e147: 3 total, 3 up, 3 in
Nov 22 09:02:35 compute-0 ceph-mon[75021]: pgmap v1059: 305 pgs: 305 active+clean; 16 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Nov 22 09:02:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 33 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.0 MiB/s wr, 49 op/s
Nov 22 09:02:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 22 09:02:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 22 09:02:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 22 09:02:37 compute-0 ceph-mon[75021]: pgmap v1060: 305 pgs: 305 active+clean; 33 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.0 MiB/s wr, 49 op/s
Nov 22 09:02:38 compute-0 ceph-mon[75021]: osdmap e148: 3 total, 3 up, 3 in
Nov 22 09:02:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.2 MiB/s wr, 54 op/s
Nov 22 09:02:39 compute-0 ceph-mon[75021]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.2 MiB/s wr, 54 op/s
Nov 22 09:02:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 42 op/s
Nov 22 09:02:41 compute-0 ceph-mon[75021]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 42 op/s
Nov 22 09:02:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 22 09:02:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 22 09:02:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 22 09:02:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 26 op/s
Nov 22 09:02:43 compute-0 ceph-mon[75021]: osdmap e149: 3 total, 3 up, 3 in
Nov 22 09:02:43 compute-0 ceph-mon[75021]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 26 op/s
Nov 22 09:02:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 30 op/s
Nov 22 09:02:45 compute-0 ceph-mon[75021]: pgmap v1066: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 30 op/s
Nov 22 09:02:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 22 09:02:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 22 09:02:46 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 22 09:02:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 16 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Nov 22 09:02:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:47 compute-0 ceph-mon[75021]: osdmap e150: 3 total, 3 up, 3 in
Nov 22 09:02:47 compute-0 ceph-mon[75021]: pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 16 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Nov 22 09:02:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 09:02:49 compute-0 sudo[263219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:49 compute-0 sudo[263219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263219]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:02:49 compute-0 sudo[263244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263244]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:49 compute-0 sudo[263269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263269]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:02:49 compute-0 sudo[263294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263294]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:49 compute-0 sudo[263350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263350]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:02:49 compute-0 sudo[263375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263375]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:49 compute-0 sudo[263400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:49 compute-0 sudo[263400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:49 compute-0 sudo[263400]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:50 compute-0 sudo[263425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 09:02:50 compute-0 sudo[263425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:50 compute-0 ceph-mon[75021]: pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 09:02:50 compute-0 sudo[263425]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:50 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ecd74cb3-2e5c-400a-bdf9-b28326b504b0 does not exist
Nov 22 09:02:50 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 99342604-3675-45d5-87f1-05b07bbdebae does not exist
Nov 22 09:02:50 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 61b2b36c-e5d9-48eb-bee7-06f40dce4204 does not exist
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:02:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:02:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:02:50 compute-0 sudo[263469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:50 compute-0 sudo[263469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:50 compute-0 sudo[263469]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:50 compute-0 sudo[263494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:02:50 compute-0 sudo[263494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:50 compute-0 sudo[263494]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:50 compute-0 sudo[263519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:50 compute-0 sudo[263519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:50 compute-0 sudo[263519]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:50 compute-0 sudo[263544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:02:50 compute-0 sudo[263544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 09:02:50 compute-0 podman[263609]: 2025-11-22 09:02:50.940667187 +0000 UTC m=+0.066931407 container create 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:02:50 compute-0 systemd[1]: Started libpod-conmon-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope.
Nov 22 09:02:50 compute-0 podman[263609]: 2025-11-22 09:02:50.899690289 +0000 UTC m=+0.025954519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:02:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:02:51 compute-0 podman[263609]: 2025-11-22 09:02:51.054457606 +0000 UTC m=+0.180721846 container init 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:02:51 compute-0 podman[263609]: 2025-11-22 09:02:51.062247418 +0000 UTC m=+0.188511638 container start 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:02:51 compute-0 relaxed_bouman[263625]: 167 167
Nov 22 09:02:51 compute-0 systemd[1]: libpod-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope: Deactivated successfully.
Nov 22 09:02:51 compute-0 podman[263609]: 2025-11-22 09:02:51.07127135 +0000 UTC m=+0.197535580 container attach 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:02:51 compute-0 podman[263609]: 2025-11-22 09:02:51.071795622 +0000 UTC m=+0.198059842 container died 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e82f89e3b6418bd8685ecfd663cd59ee973645796f39fc37a2116afd0e7b0df-merged.mount: Deactivated successfully.
Nov 22 09:02:51 compute-0 podman[263609]: 2025-11-22 09:02:51.152821795 +0000 UTC m=+0.279086005 container remove 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:02:51 compute-0 systemd[1]: libpod-conmon-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope: Deactivated successfully.
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:02:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:02:51 compute-0 ceph-mon[75021]: pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 09:02:51 compute-0 podman[263651]: 2025-11-22 09:02:51.321862264 +0000 UTC m=+0.052175274 container create 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:02:51 compute-0 systemd[1]: Started libpod-conmon-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope.
Nov 22 09:02:51 compute-0 podman[263651]: 2025-11-22 09:02:51.298289534 +0000 UTC m=+0.028602574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:02:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:51 compute-0 podman[263651]: 2025-11-22 09:02:51.415801275 +0000 UTC m=+0.146114315 container init 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:02:51 compute-0 podman[263651]: 2025-11-22 09:02:51.4245795 +0000 UTC m=+0.154892520 container start 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:02:51 compute-0 podman[263651]: 2025-11-22 09:02:51.428363183 +0000 UTC m=+0.158676233 container attach 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:02:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 22 09:02:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 22 09:02:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:02:52
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups']
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:02:52 compute-0 eloquent_fermat[263667]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:02:52 compute-0 eloquent_fermat[263667]: --> relative data size: 1.0
Nov 22 09:02:52 compute-0 eloquent_fermat[263667]: --> All data devices are unavailable
Nov 22 09:02:52 compute-0 systemd[1]: libpod-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Deactivated successfully.
Nov 22 09:02:52 compute-0 podman[263651]: 2025-11-22 09:02:52.602647159 +0000 UTC m=+1.332960179 container died 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:02:52 compute-0 systemd[1]: libpod-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Consumed 1.122s CPU time.
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea-merged.mount: Deactivated successfully.
Nov 22 09:02:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Nov 22 09:02:53 compute-0 podman[263651]: 2025-11-22 09:02:53.024008694 +0000 UTC m=+1.754321724 container remove 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:02:53 compute-0 systemd[1]: libpod-conmon-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Deactivated successfully.
Nov 22 09:02:53 compute-0 sudo[263544]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:54 compute-0 sudo[263710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:54 compute-0 sudo[263710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:54 compute-0 sudo[263710]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:54 compute-0 sudo[263735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:02:54 compute-0 sudo[263735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:54 compute-0 sudo[263735]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:54 compute-0 sudo[263760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:54 compute-0 sudo[263760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:54 compute-0 sudo[263760]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:02:54 compute-0 sudo[263785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:02:54 compute-0 sudo[263785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:02:54 compute-0 ceph-mon[75021]: osdmap e151: 3 total, 3 up, 3 in
Nov 22 09:02:54 compute-0 ceph-mon[75021]: pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Nov 22 09:02:54 compute-0 podman[263852]: 2025-11-22 09:02:54.689151023 +0000 UTC m=+0.040945648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:02:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 22 09:02:55 compute-0 podman[263852]: 2025-11-22 09:02:55.073857335 +0000 UTC m=+0.425651950 container create 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:02:55 compute-0 systemd[1]: Started libpod-conmon-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope.
Nov 22 09:02:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:02:55 compute-0 ceph-mon[75021]: pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 22 09:02:55 compute-0 podman[263852]: 2025-11-22 09:02:55.774526171 +0000 UTC m=+1.126320876 container init 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:02:55 compute-0 podman[263852]: 2025-11-22 09:02:55.78709654 +0000 UTC m=+1.138891175 container start 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:02:55 compute-0 frosty_darwin[263869]: 167 167
Nov 22 09:02:55 compute-0 systemd[1]: libpod-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope: Deactivated successfully.
Nov 22 09:02:55 compute-0 podman[263852]: 2025-11-22 09:02:55.911852739 +0000 UTC m=+1.263647434 container attach 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:02:55 compute-0 podman[263852]: 2025-11-22 09:02:55.912544656 +0000 UTC m=+1.264339321 container died 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-258a28ccb2fe3bdf5f7b78b51f857af8e399660ef49f0b2a08969ac1f2cdde0a-merged.mount: Deactivated successfully.
Nov 22 09:02:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 7 op/s
Nov 22 09:02:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:02:56 compute-0 podman[263852]: 2025-11-22 09:02:56.984674378 +0000 UTC m=+2.336469003 container remove 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:02:56 compute-0 systemd[1]: libpod-conmon-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope: Deactivated successfully.
Nov 22 09:02:57 compute-0 podman[263892]: 2025-11-22 09:02:57.168223064 +0000 UTC m=+0.028564384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:02:57 compute-0 podman[263892]: 2025-11-22 09:02:57.272508329 +0000 UTC m=+0.132849629 container create bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:02:57 compute-0 systemd[1]: Started libpod-conmon-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope.
Nov 22 09:02:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:02:57 compute-0 podman[263892]: 2025-11-22 09:02:57.47750856 +0000 UTC m=+0.337849900 container init bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 09:02:57 compute-0 podman[263892]: 2025-11-22 09:02:57.486612735 +0000 UTC m=+0.346954035 container start bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:02:57 compute-0 podman[263892]: 2025-11-22 09:02:57.500915006 +0000 UTC m=+0.361256376 container attach bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:02:58 compute-0 ceph-mon[75021]: pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 7 op/s
Nov 22 09:02:58 compute-0 quirky_faraday[263909]: {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     "0": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "devices": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "/dev/loop3"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             ],
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_name": "ceph_lv0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_size": "21470642176",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "name": "ceph_lv0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "tags": {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_name": "ceph",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.crush_device_class": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.encrypted": "0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_id": "0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.vdo": "0"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             },
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "vg_name": "ceph_vg0"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         }
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     ],
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     "1": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "devices": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "/dev/loop4"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             ],
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_name": "ceph_lv1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_size": "21470642176",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "name": "ceph_lv1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "tags": {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_name": "ceph",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.crush_device_class": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.encrypted": "0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_id": "1",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.vdo": "0"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             },
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "vg_name": "ceph_vg1"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         }
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     ],
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     "2": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "devices": [
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "/dev/loop5"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             ],
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_name": "ceph_lv2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_size": "21470642176",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "name": "ceph_lv2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "tags": {
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.cluster_name": "ceph",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.crush_device_class": "",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.encrypted": "0",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osd_id": "2",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:                 "ceph.vdo": "0"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             },
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "type": "block",
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:             "vg_name": "ceph_vg2"
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:         }
Nov 22 09:02:58 compute-0 quirky_faraday[263909]:     ]
Nov 22 09:02:58 compute-0 quirky_faraday[263909]: }
Nov 22 09:02:58 compute-0 systemd[1]: libpod-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope: Deactivated successfully.
Nov 22 09:02:58 compute-0 podman[263892]: 2025-11-22 09:02:58.286133221 +0000 UTC m=+1.146474521 container died bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0-merged.mount: Deactivated successfully.
Nov 22 09:02:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:59 compute-0 podman[263892]: 2025-11-22 09:02:59.245866698 +0000 UTC m=+2.106207998 container remove bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:02:59 compute-0 systemd[1]: libpod-conmon-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope: Deactivated successfully.
Nov 22 09:02:59 compute-0 sudo[263785]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:59 compute-0 sudo[263930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:59 compute-0 sudo[263930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:59 compute-0 sudo[263930]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:59 compute-0 sudo[263955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:02:59 compute-0 sudo[263955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:59 compute-0 sudo[263955]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:59 compute-0 sudo[263980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:02:59 compute-0 sudo[263980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:02:59 compute-0 sudo[263980]: pam_unix(sudo:session): session closed for user root
Nov 22 09:02:59 compute-0 ceph-mon[75021]: pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:02:59 compute-0 sudo[264005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:02:59 compute-0 sudo[264005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.059702548 +0000 UTC m=+0.033071725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.221166579 +0000 UTC m=+0.194535706 container create 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:03:00 compute-0 systemd[1]: Started libpod-conmon-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope.
Nov 22 09:03:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:03:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.629086583 +0000 UTC m=+0.602455700 container init 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.638218898 +0000 UTC m=+0.611588025 container start 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 09:03:00 compute-0 peaceful_hamilton[264087]: 167 167
Nov 22 09:03:00 compute-0 systemd[1]: libpod-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope: Deactivated successfully.
Nov 22 09:03:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 22 09:03:00 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.708805494 +0000 UTC m=+0.682174641 container attach 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:03:00 compute-0 podman[264070]: 2025-11-22 09:03:00.710493415 +0000 UTC m=+0.683862562 container died 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:03:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6eb6c1eb075c38c3633f2957ef78384e5147aef9c1b46d548f6224ee29da16-merged.mount: Deactivated successfully.
Nov 22 09:03:01 compute-0 podman[264070]: 2025-11-22 09:03:01.590924961 +0000 UTC m=+1.564294088 container remove 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:03:01 compute-0 systemd[1]: libpod-conmon-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope: Deactivated successfully.
Nov 22 09:03:01 compute-0 podman[264092]: 2025-11-22 09:03:01.680254389 +0000 UTC m=+0.999321352 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:03:01 compute-0 podman[264100]: 2025-11-22 09:03:01.685703303 +0000 UTC m=+1.004311274 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:03:01 compute-0 podman[264149]: 2025-11-22 09:03:01.809990021 +0000 UTC m=+0.044866475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:03:01 compute-0 podman[264149]: 2025-11-22 09:03:01.964607513 +0000 UTC m=+0.199483947 container create 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:03:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:02 compute-0 ceph-mon[75021]: osdmap e152: 3 total, 3 up, 3 in
Nov 22 09:03:02 compute-0 ceph-mon[75021]: pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:02 compute-0 systemd[1]: Started libpod-conmon-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope.
Nov 22 09:03:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:03:02 compute-0 podman[264149]: 2025-11-22 09:03:02.444061747 +0000 UTC m=+0.678938201 container init 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:03:02 compute-0 podman[264149]: 2025-11-22 09:03:02.453904859 +0000 UTC m=+0.688781283 container start 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:03:02 compute-0 podman[264149]: 2025-11-22 09:03:02.549869979 +0000 UTC m=+0.784746413 container attach 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:03:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 1 op/s
Nov 22 09:03:03 compute-0 ceph-mon[75021]: pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 1 op/s
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]: {
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_id": 1,
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "type": "bluestore"
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     },
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_id": 0,
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "type": "bluestore"
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     },
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_id": 2,
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:         "type": "bluestore"
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]:     }
Nov 22 09:03:03 compute-0 xenodochial_galois[264165]: }
Nov 22 09:03:03 compute-0 systemd[1]: libpod-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Deactivated successfully.
Nov 22 09:03:03 compute-0 podman[264149]: 2025-11-22 09:03:03.508121691 +0000 UTC m=+1.742998115 container died 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:03:03 compute-0 systemd[1]: libpod-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Consumed 1.059s CPU time.
Nov 22 09:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433-merged.mount: Deactivated successfully.
Nov 22 09:03:04 compute-0 podman[264149]: 2025-11-22 09:03:04.564723282 +0000 UTC m=+2.799599756 container remove 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:03:04 compute-0 systemd[1]: libpod-conmon-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Deactivated successfully.
Nov 22 09:03:04 compute-0 sudo[264005]: pam_unix(sudo:session): session closed for user root
Nov 22 09:03:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:03:04 compute-0 podman[264199]: 2025-11-22 09:03:04.699500457 +0000 UTC m=+1.157573366 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:03:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:03:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:03:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:03:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bd27353b-f202-475c-8e2f-5b3e6abb360e does not exist
Nov 22 09:03:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b73bfafe-61b7-4d82-9300-ba5707a9a16a does not exist
Nov 22 09:03:05 compute-0 sudo[264235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:03:05 compute-0 sudo[264235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:03:05 compute-0 sudo[264235]: pam_unix(sudo:session): session closed for user root
Nov 22 09:03:05 compute-0 sudo[264260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:03:05 compute-0 sudo[264260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:03:05 compute-0 sudo[264260]: pam_unix(sudo:session): session closed for user root
Nov 22 09:03:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:03:06 compute-0 ceph-mon[75021]: pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:03:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:07 compute-0 nova_compute[253661]: 2025-11-22 09:03:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:07 compute-0 nova_compute[253661]: 2025-11-22 09:03:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:03:07 compute-0 ceph-mon[75021]: pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:03:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234246451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.677 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:03:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2234246451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.896 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:08 compute-0 nova_compute[253661]: 2025-11-22 09:03:08.897 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.025 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.025 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.099 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.187 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.187 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.202 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.221 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.238 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:03:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833152661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.656 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.663 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.681 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.683 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.683 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.684 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.684 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.695 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:03:09 compute-0 nova_compute[253661]: 2025-11-22 09:03:09.695 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:09 compute-0 ceph-mon[75021]: pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2833152661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:03:10 compute-0 nova_compute[253661]: 2025-11-22 09:03:10.702 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:10 compute-0 nova_compute[253661]: 2025-11-22 09:03:10.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:10 compute-0 nova_compute[253661]: 2025-11-22 09:03:10.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:10 compute-0 nova_compute[253661]: 2025-11-22 09:03:10.704 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:03:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:11 compute-0 ceph-mon[75021]: pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:11 compute-0 nova_compute[253661]: 2025-11-22 09:03:11.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:03:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:03:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:03:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:03:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:03:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:03:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 22 09:03:13 compute-0 ceph-mon[75021]: pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 22 09:03:14 compute-0 nova_compute[253661]: 2025-11-22 09:03:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:14 compute-0 nova_compute[253661]: 2025-11-22 09:03:14.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Nov 22 09:03:15 compute-0 ceph-mon[75021]: pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Nov 22 09:03:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:17 compute-0 ceph-mon[75021]: pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:20 compute-0 ceph-mon[75021]: pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:21 compute-0 ceph-mon[75021]: pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:23 compute-0 ceph-mon[75021]: pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:25 compute-0 ceph-mon[75021]: pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.946 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:03:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:29 compute-0 ceph-mon[75021]: pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:30 compute-0 ceph-mon[75021]: pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:31 compute-0 ceph-mon[75021]: pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:32 compute-0 podman[264330]: 2025-11-22 09:03:32.399225758 +0000 UTC m=+0.081260100 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:03:32 compute-0 podman[264329]: 2025-11-22 09:03:32.421695961 +0000 UTC m=+0.103674242 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:03:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:33 compute-0 ceph-mon[75021]: pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:35 compute-0 podman[264369]: 2025-11-22 09:03:35.413441633 +0000 UTC m=+0.106394388 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:03:35 compute-0 ceph-mon[75021]: pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:37 compute-0 ceph-mon[75021]: pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:39 compute-0 nova_compute[253661]: 2025-11-22 09:03:39.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:40 compute-0 ceph-mon[75021]: pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:41 compute-0 ceph-mon[75021]: pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:44 compute-0 ceph-mon[75021]: pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:45 compute-0 ceph-mon[75021]: pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:47 compute-0 ceph-mon[75021]: pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:03:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6139 writes, 25K keys, 6139 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6139 writes, 1122 syncs, 5.47 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 386 writes, 857 keys, 386 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 386 writes, 178 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:03:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:49 compute-0 ceph-mon[75021]: pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:51 compute-0 ceph-mon[75021]: pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:03:52
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes']
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:03:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:53 compute-0 ceph-mon[75021]: pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:03:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:03:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.2 total, 600.0 interval
                                           Cumulative writes: 7208 writes, 29K keys, 7208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7208 writes, 1408 syncs, 5.12 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 431 writes, 1187 keys, 431 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 431 writes, 189 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:03:55 compute-0 ceph-mon[75021]: pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:57 compute-0 ceph-mon[75021]: pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:03:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:03:59 compute-0 ceph-mon[75021]: pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:04:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6060 writes, 25K keys, 6060 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6060 writes, 1047 syncs, 5.79 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 389 writes, 934 keys, 389 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s
                                           Interval WAL: 389 writes, 173 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:04:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:01 compute-0 anacron[8155]: Job `cron.weekly' started
Nov 22 09:04:01 compute-0 anacron[8155]: Job `cron.weekly' terminated
Nov 22 09:04:01 compute-0 ceph-mon[75021]: pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:04:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:03 compute-0 podman[264398]: 2025-11-22 09:04:03.365295351 +0000 UTC m=+0.059336330 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 09:04:03 compute-0 podman[264399]: 2025-11-22 09:04:03.370208763 +0000 UTC m=+0.061155016 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:04:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:04 compute-0 ceph-mon[75021]: pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:05 compute-0 ceph-mon[75021]: pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:05 compute-0 sudo[264436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:05 compute-0 sudo[264436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:05 compute-0 sudo[264436]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:05 compute-0 sudo[264461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:04:05 compute-0 sudo[264461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:05 compute-0 sudo[264461]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:05 compute-0 sudo[264486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:05 compute-0 sudo[264486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:05 compute-0 sudo[264486]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:05 compute-0 sudo[264511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:04:05 compute-0 sudo[264511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:05 compute-0 sudo[264511]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:04:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:04:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:04:05 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:04:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:04:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4c00fc96-2d58-4330-8354-99c58b966397 does not exist
Nov 22 09:04:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ffa2018d-50ab-4eaf-99db-f933e757bc75 does not exist
Nov 22 09:04:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3bb63b79-a43d-4e95-9a5f-731bbbb8bf53 does not exist
Nov 22 09:04:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:04:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:04:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:04:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:04:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:04:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:04:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:04:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:04:06 compute-0 sudo[264569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:06 compute-0 sudo[264569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:06 compute-0 sudo[264569]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:06 compute-0 sudo[264595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:04:06 compute-0 sudo[264595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:06 compute-0 sudo[264595]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:06 compute-0 podman[264593]: 2025-11-22 09:04:06.261593145 +0000 UTC m=+0.102229506 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:04:06 compute-0 sudo[264640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:06 compute-0 sudo[264640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:06 compute-0 sudo[264640]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:06 compute-0 sudo[264671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:04:06 compute-0 sudo[264671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.751968076 +0000 UTC m=+0.081128296 container create f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.696349548 +0000 UTC m=+0.025509768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:06 compute-0 systemd[1]: Started libpod-conmon-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope.
Nov 22 09:04:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.868175315 +0000 UTC m=+0.197335605 container init f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.876851448 +0000 UTC m=+0.206011668 container start f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:04:06 compute-0 quirky_jones[264752]: 167 167
Nov 22 09:04:06 compute-0 systemd[1]: libpod-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope: Deactivated successfully.
Nov 22 09:04:06 compute-0 conmon[264752]: conmon f47c9f54da1a9cdd6c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope/container/memory.events
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.89316572 +0000 UTC m=+0.222326010 container attach f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.895046876 +0000 UTC m=+0.224207096 container died f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:04:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-79d1139afc8682eeed2ef129b18506066f8ae10f0ffa2792befcb115492eb419-merged.mount: Deactivated successfully.
Nov 22 09:04:06 compute-0 podman[264735]: 2025-11-22 09:04:06.994553053 +0000 UTC m=+0.323713253 container remove f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:04:07 compute-0 systemd[1]: libpod-conmon-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope: Deactivated successfully.
Nov 22 09:04:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:04:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:04:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:04:07 compute-0 ceph-mon[75021]: pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:07 compute-0 podman[264776]: 2025-11-22 09:04:07.234568478 +0000 UTC m=+0.067459380 container create 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:04:07 compute-0 systemd[1]: Started libpod-conmon-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope.
Nov 22 09:04:07 compute-0 podman[264776]: 2025-11-22 09:04:07.207589784 +0000 UTC m=+0.040480706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:07 compute-0 podman[264776]: 2025-11-22 09:04:07.388643048 +0000 UTC m=+0.221533980 container init 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:04:07 compute-0 podman[264776]: 2025-11-22 09:04:07.397440134 +0000 UTC m=+0.230331036 container start 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:04:07 compute-0 podman[264776]: 2025-11-22 09:04:07.403736599 +0000 UTC m=+0.236627501 container attach 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:08 compute-0 keen_shaw[264793]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:04:08 compute-0 keen_shaw[264793]: --> relative data size: 1.0
Nov 22 09:04:08 compute-0 keen_shaw[264793]: --> All data devices are unavailable
Nov 22 09:04:08 compute-0 systemd[1]: libpod-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Deactivated successfully.
Nov 22 09:04:08 compute-0 systemd[1]: libpod-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Consumed 1.107s CPU time.
Nov 22 09:04:08 compute-0 podman[264842]: 2025-11-22 09:04:08.612440111 +0000 UTC m=+0.034554442 container died 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 09:04:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:04:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717246583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.697 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467-merged.mount: Deactivated successfully.
Nov 22 09:04:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1717246583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:04:08 compute-0 podman[264842]: 2025-11-22 09:04:08.790276795 +0000 UTC m=+0.212391086 container remove 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:04:08 compute-0 systemd[1]: libpod-conmon-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Deactivated successfully.
Nov 22 09:04:08 compute-0 sudo[264671]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.887 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.888 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.889 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.889 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:08 compute-0 sudo[264859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:08 compute-0 sudo[264859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:08 compute-0 sudo[264859]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.947 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.948 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:04:08 compute-0 nova_compute[253661]: 2025-11-22 09:04:08.975 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:09 compute-0 sudo[264884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:04:09 compute-0 sudo[264884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:09 compute-0 sudo[264884]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:09 compute-0 sudo[264910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:09 compute-0 sudo[264910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:09 compute-0 sudo[264910]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:09 compute-0 sudo[264935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:04:09 compute-0 sudo[264935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:04:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087432625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:04:09 compute-0 nova_compute[253661]: 2025-11-22 09:04:09.430 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:04:09 compute-0 nova_compute[253661]: 2025-11-22 09:04:09.436 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:04:09 compute-0 nova_compute[253661]: 2025-11-22 09:04:09.449 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:04:09 compute-0 nova_compute[253661]: 2025-11-22 09:04:09.450 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:04:09 compute-0 nova_compute[253661]: 2025-11-22 09:04:09.450 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.505641212 +0000 UTC m=+0.053525038 container create e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:04:09 compute-0 systemd[1]: Started libpod-conmon-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope.
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.47550316 +0000 UTC m=+0.023387006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.610682376 +0000 UTC m=+0.158566232 container init e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.619209255 +0000 UTC m=+0.167093101 container start e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:04:09 compute-0 systemd[1]: libpod-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope: Deactivated successfully.
Nov 22 09:04:09 compute-0 unruffled_leakey[265037]: 167 167
Nov 22 09:04:09 compute-0 conmon[265037]: conmon e989b969a4695c7195b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope/container/memory.events
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.629653622 +0000 UTC m=+0.177537478 container attach e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.630565994 +0000 UTC m=+0.178449820 container died e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee82271193e99c5d0512d533e6fc8f98bf13359bafb287e2d63f9f123b93a49-merged.mount: Deactivated successfully.
Nov 22 09:04:09 compute-0 podman[265020]: 2025-11-22 09:04:09.720733853 +0000 UTC m=+0.268617679 container remove e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:04:09 compute-0 systemd[1]: libpod-conmon-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope: Deactivated successfully.
Nov 22 09:04:09 compute-0 ceph-mon[75021]: pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1087432625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:04:09 compute-0 podman[265063]: 2025-11-22 09:04:09.902153935 +0000 UTC m=+0.048509945 container create b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:04:09 compute-0 systemd[1]: Started libpod-conmon-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope.
Nov 22 09:04:09 compute-0 podman[265063]: 2025-11-22 09:04:09.880277947 +0000 UTC m=+0.026633987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:10 compute-0 podman[265063]: 2025-11-22 09:04:10.021941792 +0000 UTC m=+0.168297822 container init b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:04:10 compute-0 podman[265063]: 2025-11-22 09:04:10.029845616 +0000 UTC m=+0.176201626 container start b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:04:10 compute-0 podman[265063]: 2025-11-22 09:04:10.041616526 +0000 UTC m=+0.187972536 container attach b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:04:10 compute-0 nova_compute[253661]: 2025-11-22 09:04:10.451 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]: {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     "0": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "devices": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "/dev/loop3"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             ],
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_name": "ceph_lv0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_size": "21470642176",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "name": "ceph_lv0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "tags": {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_name": "ceph",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.crush_device_class": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.encrypted": "0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_id": "0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.vdo": "0"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             },
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "vg_name": "ceph_vg0"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         }
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     ],
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     "1": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "devices": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "/dev/loop4"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             ],
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_name": "ceph_lv1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_size": "21470642176",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "name": "ceph_lv1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "tags": {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_name": "ceph",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.crush_device_class": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.encrypted": "0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_id": "1",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.vdo": "0"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             },
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "vg_name": "ceph_vg1"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         }
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     ],
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     "2": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "devices": [
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "/dev/loop5"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             ],
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_name": "ceph_lv2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_size": "21470642176",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "name": "ceph_lv2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "tags": {
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.cluster_name": "ceph",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.crush_device_class": "",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.encrypted": "0",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osd_id": "2",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:                 "ceph.vdo": "0"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             },
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "type": "block",
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:             "vg_name": "ceph_vg2"
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:         }
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]:     ]
Nov 22 09:04:10 compute-0 nostalgic_robinson[265079]: }
Nov 22 09:04:10 compute-0 systemd[1]: libpod-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope: Deactivated successfully.
Nov 22 09:04:10 compute-0 podman[265063]: 2025-11-22 09:04:10.836786266 +0000 UTC m=+0.983142276 container died b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827-merged.mount: Deactivated successfully.
Nov 22 09:04:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:10 compute-0 podman[265063]: 2025-11-22 09:04:10.951359743 +0000 UTC m=+1.097715773 container remove b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:04:10 compute-0 systemd[1]: libpod-conmon-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope: Deactivated successfully.
Nov 22 09:04:10 compute-0 sudo[264935]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:11 compute-0 sudo[265099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:11 compute-0 sudo[265099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:11 compute-0 sudo[265099]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:11 compute-0 sudo[265124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:04:11 compute-0 sudo[265124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:11 compute-0 sudo[265124]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:11 compute-0 sudo[265149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:11 compute-0 sudo[265149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:11 compute-0 sudo[265149]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:11 compute-0 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:11 compute-0 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:11 compute-0 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:04:11 compute-0 sudo[265174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:04:11 compute-0 sudo[265174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.678073259 +0000 UTC m=+0.063968404 container create 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:04:11 compute-0 systemd[1]: Started libpod-conmon-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope.
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.638898205 +0000 UTC m=+0.024793350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.78174969 +0000 UTC m=+0.167644915 container init 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.788757752 +0000 UTC m=+0.174652887 container start 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:04:11 compute-0 infallible_thompson[265254]: 167 167
Nov 22 09:04:11 compute-0 systemd[1]: libpod-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope: Deactivated successfully.
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.796117053 +0000 UTC m=+0.182012178 container attach 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.796924662 +0000 UTC m=+0.182819797 container died 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce63d6cfa057926c1c06e58d59b09a43b77e75d783d3eaf7cccf6c043b029d7-merged.mount: Deactivated successfully.
Nov 22 09:04:11 compute-0 podman[265238]: 2025-11-22 09:04:11.847497316 +0000 UTC m=+0.233392431 container remove 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:04:11 compute-0 systemd[1]: libpod-conmon-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope: Deactivated successfully.
Nov 22 09:04:11 compute-0 ceph-mon[75021]: pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:12 compute-0 podman[265276]: 2025-11-22 09:04:12.047783664 +0000 UTC m=+0.060949281 container create 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:04:12 compute-0 systemd[1]: Started libpod-conmon-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope.
Nov 22 09:04:12 compute-0 podman[265276]: 2025-11-22 09:04:12.021089546 +0000 UTC m=+0.034255243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:04:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:04:12 compute-0 podman[265276]: 2025-11-22 09:04:12.143641701 +0000 UTC m=+0.156807368 container init 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:04:12 compute-0 podman[265276]: 2025-11-22 09:04:12.152228352 +0000 UTC m=+0.165393969 container start 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:04:12 compute-0 podman[265276]: 2025-11-22 09:04:12.156717113 +0000 UTC m=+0.169882750 container attach 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:04:12 compute-0 nova_compute[253661]: 2025-11-22 09:04:12.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:12 compute-0 nova_compute[253661]: 2025-11-22 09:04:12.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:12 compute-0 nova_compute[253661]: 2025-11-22 09:04:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:04:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:04:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:04:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:04:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:04:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:04:13 compute-0 nova_compute[253661]: 2025-11-22 09:04:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:13 compute-0 nova_compute[253661]: 2025-11-22 09:04:13.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:04:13 compute-0 nova_compute[253661]: 2025-11-22 09:04:13.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:04:13 compute-0 nova_compute[253661]: 2025-11-22 09:04:13.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:04:13 compute-0 trusting_davinci[265293]: {
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_id": 1,
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "type": "bluestore"
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     },
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_id": 0,
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "type": "bluestore"
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     },
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_id": 2,
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:         "type": "bluestore"
Nov 22 09:04:13 compute-0 trusting_davinci[265293]:     }
Nov 22 09:04:13 compute-0 trusting_davinci[265293]: }
Nov 22 09:04:13 compute-0 systemd[1]: libpod-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Deactivated successfully.
Nov 22 09:04:13 compute-0 systemd[1]: libpod-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Consumed 1.208s CPU time.
Nov 22 09:04:13 compute-0 podman[265276]: 2025-11-22 09:04:13.356433433 +0000 UTC m=+1.369599070 container died 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9-merged.mount: Deactivated successfully.
Nov 22 09:04:13 compute-0 podman[265276]: 2025-11-22 09:04:13.522875227 +0000 UTC m=+1.536040844 container remove 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:04:13 compute-0 systemd[1]: libpod-conmon-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Deactivated successfully.
Nov 22 09:04:13 compute-0 sudo[265174]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:04:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:04:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 11fe2974-7366-4abd-8a75-b58d9e7f2fb5 does not exist
Nov 22 09:04:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d086aa3c-7c7a-422f-959d-26ba794eb759 does not exist
Nov 22 09:04:13 compute-0 sudo[265340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:04:13 compute-0 sudo[265340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:13 compute-0 sudo[265340]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:13 compute-0 sudo[265365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:04:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:13 compute-0 sudo[265365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:04:13 compute-0 sudo[265365]: pam_unix(sudo:session): session closed for user root
Nov 22 09:04:14 compute-0 ceph-mon[75021]: pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:04:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:15 compute-0 ceph-mon[75021]: pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:16 compute-0 nova_compute[253661]: 2025-11-22 09:04:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:18 compute-0 ceph-mon[75021]: pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:19 compute-0 ceph-mon[75021]: pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:21 compute-0 ceph-mon[75021]: pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:24 compute-0 ceph-mon[75021]: pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:25 compute-0 ceph-mon[75021]: pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.948 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.948 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:28 compute-0 ceph-mon[75021]: pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:29 compute-0 ceph-mon[75021]: pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:32 compute-0 ceph-mon[75021]: pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:33 compute-0 ceph-mon[75021]: pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:34 compute-0 podman[265390]: 2025-11-22 09:04:34.400705928 +0000 UTC m=+0.077312982 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:04:34 compute-0 podman[265391]: 2025-11-22 09:04:34.412478248 +0000 UTC m=+0.084405738 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:04:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:35 compute-0 ceph-mon[75021]: pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:36 compute-0 podman[265427]: 2025-11-22 09:04:36.414171182 +0000 UTC m=+0.098007208 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 09:04:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:37 compute-0 ceph-mon[75021]: pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:39 compute-0 ceph-mon[75021]: pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:41 compute-0 ceph-mon[75021]: pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:43 compute-0 ceph-mon[75021]: pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:45 compute-0 ceph-mon[75021]: pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:47 compute-0 ceph-mon[75021]: pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:49 compute-0 ceph-mon[75021]: pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:51 compute-0 ceph-mon[75021]: pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:04:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 22 09:04:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 22 09:04:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:04:52
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'volumes']
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:04:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 819 KiB/s wr, 1 op/s
Nov 22 09:04:53 compute-0 ceph-mon[75021]: osdmap e153: 3 total, 3 up, 3 in
Nov 22 09:04:53 compute-0 ceph-mon[75021]: pgmap v1134: 305 pgs: 305 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 819 KiB/s wr, 1 op/s
Nov 22 09:04:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:04:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 22 09:04:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 22 09:04:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 22 09:04:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 22 09:04:55 compute-0 ceph-mon[75021]: osdmap e154: 3 total, 3 up, 3 in
Nov 22 09:04:55 compute-0 ceph-mon[75021]: pgmap v1136: 305 pgs: 305 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 22 09:04:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 29 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 24 op/s
Nov 22 09:04:57 compute-0 ceph-mon[75021]: pgmap v1137: 305 pgs: 305 active+clean; 29 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 24 op/s
Nov 22 09:04:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:04:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 09:04:59 compute-0 ceph-mon[75021]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 09:05:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.7 MiB/s wr, 42 op/s
Nov 22 09:05:01 compute-0 ceph-mon[75021]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.7 MiB/s wr, 42 op/s
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:05:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 36 op/s
Nov 22 09:05:03 compute-0 ceph-mon[75021]: pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 36 op/s
Nov 22 09:05:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 22 09:05:05 compute-0 ceph-mon[75021]: pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 22 09:05:05 compute-0 podman[265454]: 2025-11-22 09:05:05.384874836 +0000 UTC m=+0.068072866 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:05:05 compute-0 podman[265453]: 2025-11-22 09:05:05.387210622 +0000 UTC m=+0.071361735 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:05:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 22 09:05:07 compute-0 ceph-mon[75021]: pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 22 09:05:07 compute-0 podman[265491]: 2025-11-22 09:05:07.459525361 +0000 UTC m=+0.142923423 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:05:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Nov 22 09:05:09 compute-0 ceph-mon[75021]: pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042158996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3042158996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.964 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.965 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:05:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:10 compute-0 nova_compute[253661]: 2025-11-22 09:05:10.979 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446078130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:11 compute-0 nova_compute[253661]: 2025-11-22 09:05:11.437 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:11 compute-0 nova_compute[253661]: 2025-11-22 09:05:11.447 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:11 compute-0 nova_compute[253661]: 2025-11-22 09:05:11.461 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:11 compute-0 nova_compute[253661]: 2025-11-22 09:05:11.464 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:05:11 compute-0 nova_compute[253661]: 2025-11-22 09:05:11.464 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:12 compute-0 ceph-mon[75021]: pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3446078130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:05:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:05:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:05:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:05:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:05:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:05:13 compute-0 ceph-mon[75021]: pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:13 compute-0 nova_compute[253661]: 2025-11-22 09:05:13.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:13 compute-0 nova_compute[253661]: 2025-11-22 09:05:13.466 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:13 compute-0 nova_compute[253661]: 2025-11-22 09:05:13.466 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:13 compute-0 nova_compute[253661]: 2025-11-22 09:05:13.467 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:13 compute-0 nova_compute[253661]: 2025-11-22 09:05:13.467 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:05:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:13 compute-0 sudo[265561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:13 compute-0 sudo[265561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:13 compute-0 sudo[265561]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:13 compute-0 sudo[265586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:13 compute-0 sudo[265586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:13 compute-0 sudo[265586]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:13 compute-0 sudo[265611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:13 compute-0 sudo[265611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:13 compute-0 sudo[265611]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:14 compute-0 sudo[265636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:05:14 compute-0 sudo[265636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:14 compute-0 nova_compute[253661]: 2025-11-22 09:05:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:14 compute-0 sudo[265636]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:05:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:05:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:14 compute-0 sudo[265681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:14 compute-0 sudo[265681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:14 compute-0 sudo[265681]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:14 compute-0 sudo[265706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:14 compute-0 sudo[265706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:14 compute-0 sudo[265706]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:14 compute-0 sudo[265731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:14 compute-0 sudo[265731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:14 compute-0 sudo[265731]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:14 compute-0 sudo[265756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:05:14 compute-0 sudo[265756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:15 compute-0 nova_compute[253661]: 2025-11-22 09:05:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:15 compute-0 nova_compute[253661]: 2025-11-22 09:05:15.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:05:15 compute-0 nova_compute[253661]: 2025-11-22 09:05:15.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:05:15 compute-0 nova_compute[253661]: 2025-11-22 09:05:15.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:05:15 compute-0 sudo[265756]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:15 compute-0 sudo[265813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:15 compute-0 sudo[265813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:15 compute-0 sudo[265813]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:15 compute-0 sudo[265838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:15 compute-0 sudo[265838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:15 compute-0 sudo[265838]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:15 compute-0 ceph-mon[75021]: pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:15 compute-0 sudo[265863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:15 compute-0 sudo[265863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:15 compute-0 sudo[265863]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:15 compute-0 sudo[265888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- inventory --format=json-pretty --filter-for-batch
Nov 22 09:05:15 compute-0 sudo[265888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:15.977943659 +0000 UTC m=+0.032049845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:16 compute-0 nova_compute[253661]: 2025-11-22 09:05:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:16.278586172 +0000 UTC m=+0.332692368 container create 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 09:05:16 compute-0 systemd[1]: Started libpod-conmon-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope.
Nov 22 09:05:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:16.740410968 +0000 UTC m=+0.794517204 container init 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:16.749544128 +0000 UTC m=+0.803650294 container start 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:05:16 compute-0 adoring_faraday[265970]: 167 167
Nov 22 09:05:16 compute-0 systemd[1]: libpod-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope: Deactivated successfully.
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:16.840087676 +0000 UTC m=+0.894193862 container attach 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:05:16 compute-0 podman[265954]: 2025-11-22 09:05:16.843777625 +0000 UTC m=+0.897883811 container died 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:05:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:17 compute-0 nova_compute[253661]: 2025-11-22 09:05:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe2b1981e40899537cb820ff39d72900fed9ccf200b68ec7155270b6a626cf7e-merged.mount: Deactivated successfully.
Nov 22 09:05:17 compute-0 ceph-mon[75021]: pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:18 compute-0 podman[265954]: 2025-11-22 09:05:18.533628325 +0000 UTC m=+2.587734521 container remove 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:05:18 compute-0 systemd[1]: libpod-conmon-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope: Deactivated successfully.
Nov 22 09:05:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:18 compute-0 podman[265994]: 2025-11-22 09:05:18.799667482 +0000 UTC m=+0.073746083 container create 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:05:18 compute-0 podman[265994]: 2025-11-22 09:05:18.753297662 +0000 UTC m=+0.027376283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:18 compute-0 systemd[1]: Started libpod-conmon-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope.
Nov 22 09:05:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:18 compute-0 podman[265994]: 2025-11-22 09:05:18.945392762 +0000 UTC m=+0.219471343 container init 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:05:18 compute-0 podman[265994]: 2025-11-22 09:05:18.953467406 +0000 UTC m=+0.227545987 container start 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 09:05:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:18 compute-0 podman[265994]: 2025-11-22 09:05:18.981191896 +0000 UTC m=+0.255270477 container attach 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:05:19 compute-0 ceph-mon[75021]: pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]: [
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:     {
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "available": false,
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "ceph_device": false,
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "lsm_data": {},
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "lvs": [],
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "path": "/dev/sr0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "rejected_reasons": [
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "Insufficient space (<5GB)",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "Has a FileSystem"
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         ],
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         "sys_api": {
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "actuators": null,
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "device_nodes": "sr0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "devname": "sr0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "human_readable_size": "482.00 KB",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "id_bus": "ata",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "model": "QEMU DVD-ROM",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "nr_requests": "2",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "parent": "/dev/sr0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "partitions": {},
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "path": "/dev/sr0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "removable": "1",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "rev": "2.5+",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "ro": "0",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "rotational": "1",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "sas_address": "",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "sas_device_handle": "",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "scheduler_mode": "mq-deadline",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "sectors": 0,
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "sectorsize": "2048",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "size": 493568.0,
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "support_discard": "2048",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "type": "disk",
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:             "vendor": "QEMU"
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:         }
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]:     }
Nov 22 09:05:20 compute-0 intelligent_diffie[266011]: ]
Nov 22 09:05:20 compute-0 systemd[1]: libpod-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Deactivated successfully.
Nov 22 09:05:20 compute-0 systemd[1]: libpod-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Consumed 1.682s CPU time.
Nov 22 09:05:20 compute-0 podman[265994]: 2025-11-22 09:05:20.601102126 +0000 UTC m=+1.875180747 container died 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:05:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9-merged.mount: Deactivated successfully.
Nov 22 09:05:21 compute-0 ceph-mon[75021]: pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.339 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.340 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.341 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.341 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.364 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.409 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.493 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.494 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.502 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.502 253665 INFO nova.compute.claims [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.508 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:21 compute-0 podman[265994]: 2025-11-22 09:05:21.565923312 +0000 UTC m=+2.840001893 container remove 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:05:21 compute-0 sudo[265888]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:05:21 compute-0 nova_compute[253661]: 2025-11-22 09:05:21.616 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:21 compute-0 systemd[1]: libpod-conmon-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Deactivated successfully.
Nov 22 09:05:21 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475159984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.082 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.094 253665 DEBUG nova.compute.provider_tree [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.118 253665 DEBUG nova.scheduler.client.report [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.141 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.141 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.143 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.151 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.151 253665 INFO nova.compute.claims [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.273 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3bf7f732-2ecb-41d8-baef-74fb5a007a7f does not exist
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 433fa37c-fcd9-4915-adfb-b2e7b1cde7d8 does not exist
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ca6f116e-adea-4ba0-8828-50b83944e2a6 does not exist
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.302 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.325 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.348 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:05:22 compute-0 sudo[268214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:22 compute-0 sudo[268214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:22 compute-0 sudo[268214]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:22.384 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:05:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:22.386 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:05:22 compute-0 sudo[268240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:22 compute-0 sudo[268240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:22 compute-0 sudo[268240]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.474 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.475 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.475 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating image(s)
Nov 22 09:05:22 compute-0 sudo[268282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:22 compute-0 sudo[268282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:22 compute-0 sudo[268282]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.512 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.546 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.575 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.579 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.580 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:22 compute-0 sudo[268327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:05:22 compute-0 sudo[268327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592572438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.767 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.773 253665 DEBUG nova.compute.provider_tree [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.794 253665 DEBUG nova.scheduler.client.report [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.817 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.818 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3475159984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:05:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/592572438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.866 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.867 253665 DEBUG nova.network.neutron [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.891 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:05:22 compute-0 nova_compute[253661]: 2025-11-22 09:05:22.910 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:05:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 22 09:05:22 compute-0 podman[268430]: 2025-11-22 09:05:22.976069385 +0000 UTC m=+0.072509162 container create f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.011 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.013 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.013 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating image(s)
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:22.928667461 +0000 UTC m=+0.025107258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.031 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.055 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.086 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.092 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:23 compute-0 systemd[1]: Started libpod-conmon-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope.
Nov 22 09:05:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:23.283509432 +0000 UTC m=+0.379949229 container init f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:23.291629138 +0000 UTC m=+0.388068915 container start f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:05:23 compute-0 clever_raman[268500]: 167 167
Nov 22 09:05:23 compute-0 systemd[1]: libpod-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope: Deactivated successfully.
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:23.437559113 +0000 UTC m=+0.533998910 container attach f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:23.438294981 +0000 UTC m=+0.534734778 container died f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee34a853407c3588020d59faa8a1970aea21c0ff2f04c8f9d4455d84b587e927-merged.mount: Deactivated successfully.
Nov 22 09:05:23 compute-0 podman[268430]: 2025-11-22 09:05:23.634572722 +0000 UTC m=+0.731012499 container remove f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.650 253665 DEBUG nova.virt.libvirt.imagebackend [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/878156d4-57f6-4a8b-8f4c-cbde182bb832/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/878156d4-57f6-4a8b-8f4c-cbde182bb832/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:05:23 compute-0 systemd[1]: libpod-conmon-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope: Deactivated successfully.
Nov 22 09:05:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.814 253665 DEBUG nova.network.neutron [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:05:23 compute-0 nova_compute[253661]: 2025-11-22 09:05:23.815 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:05:23 compute-0 podman[268524]: 2025-11-22 09:05:23.824594913 +0000 UTC m=+0.053170676 container create 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:05:23 compute-0 systemd[1]: Started libpod-conmon-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope.
Nov 22 09:05:23 compute-0 ceph-mon[75021]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 22 09:05:23 compute-0 podman[268524]: 2025-11-22 09:05:23.794860014 +0000 UTC m=+0.023435827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:23 compute-0 podman[268524]: 2025-11-22 09:05:23.924324241 +0000 UTC m=+0.152900024 container init 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:05:23 compute-0 podman[268524]: 2025-11-22 09:05:23.934599129 +0000 UTC m=+0.163174892 container start 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:05:23 compute-0 podman[268524]: 2025-11-22 09:05:23.940340038 +0000 UTC m=+0.168915801 container attach 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:05:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 09:05:25 compute-0 ceph-mon[75021]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 09:05:25 compute-0 charming_wozniak[268540]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:05:25 compute-0 charming_wozniak[268540]: --> relative data size: 1.0
Nov 22 09:05:25 compute-0 charming_wozniak[268540]: --> All data devices are unavailable
Nov 22 09:05:25 compute-0 systemd[1]: libpod-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Deactivated successfully.
Nov 22 09:05:25 compute-0 systemd[1]: libpod-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Consumed 1.126s CPU time.
Nov 22 09:05:25 compute-0 podman[268569]: 2025-11-22 09:05:25.265124279 +0000 UTC m=+0.025952217 container died 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.302 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.366 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.367 253665 DEBUG nova.virt.images [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] 878156d4-57f6-4a8b-8f4c-cbde182bb832 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.370 253665 DEBUG nova.privsep.utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.371 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f-merged.mount: Deactivated successfully.
Nov 22 09:05:25 compute-0 podman[268569]: 2025-11-22 09:05:25.670595074 +0000 UTC m=+0.431422982 container remove 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:05:25 compute-0 systemd[1]: libpod-conmon-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Deactivated successfully.
Nov 22 09:05:25 compute-0 sudo[268327]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:25 compute-0 sudo[268597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:25 compute-0 sudo[268597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:25 compute-0 sudo[268597]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:25 compute-0 sudo[268622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:25 compute-0 sudo[268622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:25 compute-0 sudo[268622]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.976 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:25 compute-0 sudo[268647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:25 compute-0 sudo[268647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:25 compute-0 nova_compute[253661]: 2025-11-22 09:05:25.983 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:25 compute-0 sudo[268647]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:26 compute-0 sudo[268673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:05:26 compute-0 sudo[268673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.108 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.110 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.139 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.146 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.172 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 3.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.174 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.212 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:26 compute-0 nova_compute[253661]: 2025-11-22 09:05:26.217 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b9672702-d5e5-407f-bc86-ee9e64f90a01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 22 09:05:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 22 09:05:26 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 22 09:05:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:26.388 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.456540219 +0000 UTC m=+0.052453328 container create 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:05:26 compute-0 systemd[1]: Started libpod-conmon-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope.
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.432193941 +0000 UTC m=+0.028107030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.543290684 +0000 UTC m=+0.139203763 container init 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.550761745 +0000 UTC m=+0.146674814 container start 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:05:26 compute-0 systemd[1]: libpod-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope: Deactivated successfully.
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.557417895 +0000 UTC m=+0.153330984 container attach 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:05:26 compute-0 adoring_hugle[268823]: 167 167
Nov 22 09:05:26 compute-0 conmon[268823]: conmon 412db7bb520d6aa228be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope/container/memory.events
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.559061916 +0000 UTC m=+0.154974985 container died 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5870e6cd5a0441a557a1fe368b2848b9f34ee78043574fa8661c7f35a6172de-merged.mount: Deactivated successfully.
Nov 22 09:05:26 compute-0 podman[268807]: 2025-11-22 09:05:26.619300801 +0000 UTC m=+0.215213870 container remove 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:05:26 compute-0 systemd[1]: libpod-conmon-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope: Deactivated successfully.
Nov 22 09:05:26 compute-0 podman[268847]: 2025-11-22 09:05:26.790536197 +0000 UTC m=+0.048542224 container create 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:05:26 compute-0 systemd[1]: Started libpod-conmon-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope.
Nov 22 09:05:26 compute-0 podman[268847]: 2025-11-22 09:05:26.76995477 +0000 UTC m=+0.027960587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:26 compute-0 podman[268847]: 2025-11-22 09:05:26.882785086 +0000 UTC m=+0.140790883 container init 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 09:05:26 compute-0 podman[268847]: 2025-11-22 09:05:26.893017122 +0000 UTC m=+0.151022919 container start 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:05:26 compute-0 podman[268847]: 2025-11-22 09:05:26.895967904 +0000 UTC m=+0.153973701 container attach 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:05:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 838 KiB/s rd, 0 B/s wr, 31 op/s
Nov 22 09:05:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 22 09:05:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 22 09:05:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 22 09:05:27 compute-0 ceph-mon[75021]: osdmap e155: 3 total, 3 up, 3 in
Nov 22 09:05:27 compute-0 ceph-mon[75021]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 838 KiB/s rd, 0 B/s wr, 31 op/s
Nov 22 09:05:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.949 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:28 compute-0 adoring_shirley[268863]: {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     "0": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "devices": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "/dev/loop3"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             ],
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_name": "ceph_lv0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_size": "21470642176",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "name": "ceph_lv0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "tags": {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_name": "ceph",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.crush_device_class": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.encrypted": "0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_id": "0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.vdo": "0"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             },
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "vg_name": "ceph_vg0"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         }
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     ],
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     "1": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "devices": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "/dev/loop4"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             ],
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_name": "ceph_lv1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_size": "21470642176",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "name": "ceph_lv1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "tags": {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_name": "ceph",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.crush_device_class": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.encrypted": "0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_id": "1",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.vdo": "0"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             },
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "vg_name": "ceph_vg1"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         }
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     ],
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     "2": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "devices": [
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "/dev/loop5"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             ],
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_name": "ceph_lv2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_size": "21470642176",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "name": "ceph_lv2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "tags": {
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.cluster_name": "ceph",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.crush_device_class": "",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.encrypted": "0",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osd_id": "2",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:                 "ceph.vdo": "0"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             },
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "type": "block",
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:             "vg_name": "ceph_vg2"
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:         }
Nov 22 09:05:28 compute-0 adoring_shirley[268863]:     ]
Nov 22 09:05:28 compute-0 adoring_shirley[268863]: }
Nov 22 09:05:28 compute-0 systemd[1]: libpod-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope: Deactivated successfully.
Nov 22 09:05:28 compute-0 podman[268847]: 2025-11-22 09:05:28.152934837 +0000 UTC m=+1.410940634 container died 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:05:28 compute-0 ceph-mon[75021]: osdmap e156: 3 total, 3 up, 3 in
Nov 22 09:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6-merged.mount: Deactivated successfully.
Nov 22 09:05:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 130 op/s
Nov 22 09:05:29 compute-0 podman[268847]: 2025-11-22 09:05:29.608547939 +0000 UTC m=+2.866553736 container remove 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:05:29 compute-0 systemd[1]: libpod-conmon-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope: Deactivated successfully.
Nov 22 09:05:29 compute-0 sudo[268673]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:29 compute-0 sudo[268896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:29 compute-0 sudo[268896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:29 compute-0 sudo[268896]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:29 compute-0 sudo[268921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:05:29 compute-0 sudo[268921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:29 compute-0 sudo[268921]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:29 compute-0 ceph-mon[75021]: pgmap v1155: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 130 op/s
Nov 22 09:05:29 compute-0 sudo[268946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:29 compute-0 sudo[268946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:29 compute-0 sudo[268946]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:29 compute-0 sudo[268971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:05:29 compute-0 sudo[268971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:30 compute-0 podman[269036]: 2025-11-22 09:05:30.338644915 +0000 UTC m=+0.028922710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:30 compute-0 nova_compute[253661]: 2025-11-22 09:05:30.461 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:30 compute-0 podman[269036]: 2025-11-22 09:05:30.769186524 +0000 UTC m=+0.459464279 container create 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:05:30 compute-0 nova_compute[253661]: 2025-11-22 09:05:30.855 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b9672702-d5e5-407f-bc86-ee9e64f90a01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:30 compute-0 nova_compute[253661]: 2025-11-22 09:05:30.907 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] resizing rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:05:30 compute-0 systemd[1]: Started libpod-conmon-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope.
Nov 22 09:05:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 123 op/s
Nov 22 09:05:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:31 compute-0 nova_compute[253661]: 2025-11-22 09:05:31.093 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] resizing rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:05:31 compute-0 podman[269036]: 2025-11-22 09:05:31.132926071 +0000 UTC m=+0.823203806 container init 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:05:31 compute-0 podman[269036]: 2025-11-22 09:05:31.141489918 +0000 UTC m=+0.831767643 container start 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:05:31 compute-0 modest_volhard[269125]: 167 167
Nov 22 09:05:31 compute-0 systemd[1]: libpod-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope: Deactivated successfully.
Nov 22 09:05:31 compute-0 podman[269036]: 2025-11-22 09:05:31.21192848 +0000 UTC m=+0.902206225 container attach 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:05:31 compute-0 podman[269036]: 2025-11-22 09:05:31.212947844 +0000 UTC m=+0.903225569 container died 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-505a6935600b7158cbd31234e95e9e9a8dddd0358ce42ba5c5e663bea0b389b2-merged.mount: Deactivated successfully.
Nov 22 09:05:31 compute-0 ceph-mon[75021]: pgmap v1156: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 123 op/s
Nov 22 09:05:32 compute-0 podman[269036]: 2025-11-22 09:05:32.034695445 +0000 UTC m=+1.724973190 container remove 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:05:32 compute-0 systemd[1]: libpod-conmon-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope: Deactivated successfully.
Nov 22 09:05:32 compute-0 podman[269184]: 2025-11-22 09:05:32.225534926 +0000 UTC m=+0.031016232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:05:32 compute-0 podman[269184]: 2025-11-22 09:05:32.349903469 +0000 UTC m=+0.155384725 container create 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.377 253665 DEBUG nova.objects.instance [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.396 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.396 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Ensure instance console log exists: /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.399 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.405 253665 WARNING nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.411 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.412 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.414 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.421 253665 DEBUG nova.privsep.utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.422 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:32 compute-0 systemd[1]: Started libpod-conmon-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope.
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.504 253665 DEBUG nova.objects.instance [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'migration_context' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.518 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.519 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Ensure instance console log exists: /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.519 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.520 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.520 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.521 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:05:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.527 253665 WARNING nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.531 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.532 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.537 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.537 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.538 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.538 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.539 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.539 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.542 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.545 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:32 compute-0 podman[269184]: 2025-11-22 09:05:32.589449286 +0000 UTC m=+0.394930582 container init 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:05:32 compute-0 podman[269184]: 2025-11-22 09:05:32.59917459 +0000 UTC m=+0.404655846 container start 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:05:32 compute-0 podman[269184]: 2025-11-22 09:05:32.715717896 +0000 UTC m=+0.521199162 container attach 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:05:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869589781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.863 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.894 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.901 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3170161559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 91 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 22 09:05:32 compute-0 nova_compute[253661]: 2025-11-22 09:05:32.982 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.009 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.013 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759623925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.376 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.380 253665 DEBUG nova.objects.instance [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.398 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <uuid>6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</uuid>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <name>instance-00000002</name>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:name>tempest-AutoAllocateNetworkTest-server-2058029404</nova:name>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:05:32</nova:creationTime>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:user uuid="3a894a86983342d3bdece6fcf23fe1a9">tempest-AutoAllocateNetworkTest-135593428-project-member</nova:user>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:project uuid="4c55d55859ed4fb9adf33d6c40da9051">tempest-AutoAllocateNetworkTest-135593428</nova:project>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <system>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="serial">6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="uuid">6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </system>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <os>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </os>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <features>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </features>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/console.log" append="off"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <video>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </video>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:05:33 compute-0 nova_compute[253661]: </domain>
Nov 22 09:05:33 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:05:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970409928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.512 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.514 253665 DEBUG nova.objects.instance [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.538 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <uuid>b9672702-d5e5-407f-bc86-ee9e64f90a01</uuid>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <name>instance-00000001</name>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1912718424</nova:name>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:05:32</nova:creationTime>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:user uuid="786abc4100494e9a9e9977b0d6534f9d">tempest-DeleteServersAdminTestJSON-1856447615-project-member</nova:user>
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <nova:project uuid="ae421771a97c45f7a0288a4b8cfd48c5">tempest-DeleteServersAdminTestJSON-1856447615</nova:project>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <system>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="serial">b9672702-d5e5-407f-bc86-ee9e64f90a01</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="uuid">b9672702-d5e5-407f-bc86-ee9e64f90a01</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </system>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <os>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </os>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <features>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </features>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b9672702-d5e5-407f-bc86-ee9e64f90a01_disk">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/console.log" append="off"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <video>
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </video>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:05:33 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:05:33 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:05:33 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:05:33 compute-0 nova_compute[253661]: </domain>
Nov 22 09:05:33 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:05:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3869589781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3170161559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:33 compute-0 ceph-mon[75021]: pgmap v1157: 305 pgs: 305 active+clean; 91 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 22 09:05:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3759623925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.557 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.559 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.560 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Using config drive
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]: {
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_id": 1,
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "type": "bluestore"
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     },
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_id": 0,
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "type": "bluestore"
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     },
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_id": 2,
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:         "type": "bluestore"
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]:     }
Nov 22 09:05:33 compute-0 wizardly_mestorf[269237]: }
Nov 22 09:05:33 compute-0 systemd[1]: libpod-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Deactivated successfully.
Nov 22 09:05:33 compute-0 systemd[1]: libpod-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Consumed 1.053s CPU time.
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.669 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:33 compute-0 podman[269410]: 2025-11-22 09:05:33.715887506 +0000 UTC m=+0.033153842 container died 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.720 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.720 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.721 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Using config drive
Nov 22 09:05:33 compute-0 nova_compute[253661]: 2025-11-22 09:05:33.742 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 22 09:05:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 22 09:05:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 22 09:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8-merged.mount: Deactivated successfully.
Nov 22 09:05:34 compute-0 podman[269410]: 2025-11-22 09:05:34.075900322 +0000 UTC m=+0.393166628 container remove 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:05:34 compute-0 systemd[1]: libpod-conmon-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Deactivated successfully.
Nov 22 09:05:34 compute-0 sudo[268971]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:05:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.309442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334309526, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 253, "total_data_size": 3466718, "memory_usage": 3517312, "flush_reason": "Manual Compaction"}
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 22 09:05:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev aaf08777-6fa5-42b0-bf74-bcc0d509eab6 does not exist
Nov 22 09:05:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0aadcae5-8185-454d-9ee9-a315203b2fe0 does not exist
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334363795, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3386323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21143, "largest_seqno": 23264, "table_properties": {"data_size": 3376609, "index_size": 6208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19519, "raw_average_key_size": 20, "raw_value_size": 3357234, "raw_average_value_size": 3486, "num_data_blocks": 279, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802122, "oldest_key_time": 1763802122, "file_creation_time": 1763802334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 54414 microseconds, and 10622 cpu microseconds.
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:05:34 compute-0 sudo[269444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:05:34 compute-0 sudo[269444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:34 compute-0 sudo[269444]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.363865) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3386323 bytes OK
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.363899) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475507) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475559) EVENT_LOG_v1 {"time_micros": 1763802334475547, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3457803, prev total WAL file size 3498451, number of live WAL files 2.
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.478250) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3306KB)], [50(7644KB)]
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334478292, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11214160, "oldest_snapshot_seqno": -1}
Nov 22 09:05:34 compute-0 sudo[269469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:05:34 compute-0 sudo[269469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:05:34 compute-0 sudo[269469]: pam_unix(sudo:session): session closed for user root
Nov 22 09:05:34 compute-0 nova_compute[253661]: 2025-11-22 09:05:34.601 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating config drive at /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config
Nov 22 09:05:34 compute-0 nova_compute[253661]: 2025-11-22 09:05:34.606 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyrvsnfg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:34 compute-0 nova_compute[253661]: 2025-11-22 09:05:34.665 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating config drive at /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config
Nov 22 09:05:34 compute-0 nova_compute[253661]: 2025-11-22 09:05:34.669 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68dq5ztu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:34 compute-0 nova_compute[253661]: 2025-11-22 09:05:34.963 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68dq5ztu" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4864 keys, 9460474 bytes, temperature: kUnknown
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334973167, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9460474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9424818, "index_size": 22426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 119146, "raw_average_key_size": 24, "raw_value_size": 9333726, "raw_average_value_size": 1918, "num_data_blocks": 941, "num_entries": 4864, "num_filter_entries": 4864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:05:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:05:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 137 op/s
Nov 22 09:05:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/970409928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:34 compute-0 ceph-mon[75021]: osdmap e157: 3 total, 3 up, 3 in
Nov 22 09:05:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.973638) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9460474 bytes
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.023473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.7 rd, 19.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5384, records dropped: 520 output_compression: NoCompression
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.023508) EVENT_LOG_v1 {"time_micros": 1763802335023492, "job": 26, "event": "compaction_finished", "compaction_time_micros": 494971, "compaction_time_cpu_micros": 28970, "output_level": 6, "num_output_files": 1, "total_output_size": 9460474, "num_input_records": 5384, "num_output_records": 4864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802335024879, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802335027046, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.478132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:05:35 compute-0 nova_compute[253661]: 2025-11-22 09:05:35.028 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:35 compute-0 nova_compute[253661]: 2025-11-22 09:05:35.033 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:35 compute-0 nova_compute[253661]: 2025-11-22 09:05:35.053 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyrvsnfg" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:35 compute-0 nova_compute[253661]: 2025-11-22 09:05:35.089 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:35 compute-0 nova_compute[253661]: 2025-11-22 09:05:35.093 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:36 compute-0 ceph-mon[75021]: pgmap v1159: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 137 op/s
Nov 22 09:05:36 compute-0 nova_compute[253661]: 2025-11-22 09:05:36.227 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:36 compute-0 nova_compute[253661]: 2025-11-22 09:05:36.228 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deleting local config drive /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config because it was imported into RBD.
Nov 22 09:05:36 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 09:05:36 compute-0 nova_compute[253661]: 2025-11-22 09:05:36.267 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:36 compute-0 nova_compute[253661]: 2025-11-22 09:05:36.267 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deleting local config drive /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config because it was imported into RBD.
Nov 22 09:05:36 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 09:05:36 compute-0 podman[269575]: 2025-11-22 09:05:36.350042625 +0000 UTC m=+0.070452242 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 09:05:36 compute-0 podman[269576]: 2025-11-22 09:05:36.359421682 +0000 UTC m=+0.079812949 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:05:36 compute-0 systemd-machined[215941]: New machine qemu-1-instance-00000002.
Nov 22 09:05:36 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 22 09:05:36 compute-0 systemd-machined[215941]: New machine qemu-2-instance-00000001.
Nov 22 09:05:36 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Nov 22 09:05:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 120 op/s
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.151 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.1502492, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.151 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Resumed (Lifecycle Event)
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.155 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.155 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.160 253665 INFO nova.virt.libvirt.driver [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance spawned successfully.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.160 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.199 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.205 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.210 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.211 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.211 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.212 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.212 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.213 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 ceph-mon[75021]: pgmap v1160: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 120 op/s
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.234 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.1506598, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.235 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Started (Lifecycle Event)
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.267 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.271 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.303 253665 INFO nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 14.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.305 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.307 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.306414, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.308 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Resumed (Lifecycle Event)
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.311 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.311 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.320 253665 INFO nova.virt.libvirt.driver [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance spawned successfully.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.322 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.325 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.331 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.370 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.371 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.371 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.372 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.401 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.402 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.3065295, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.402 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Started (Lifecycle Event)
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.419 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.423 253665 INFO nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 15.96 seconds to build instance.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.426 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.475 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.512 253665 INFO nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 14.50 seconds to spawn the instance on the hypervisor.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.514 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.686 253665 INFO nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 16.20 seconds to build instance.
Nov 22 09:05:37 compute-0 nova_compute[253661]: 2025-11-22 09:05:37.727 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:38 compute-0 podman[269743]: 2025-11-22 09:05:38.47101908 +0000 UTC m=+0.144142443 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:05:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 09:05:39 compute-0 ceph-mon[75021]: pgmap v1161: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.853 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.854 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.854 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.855 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.855 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.856 253665 INFO nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Terminating instance
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.857 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.858 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquired lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:05:39 compute-0 nova_compute[253661]: 2025-11-22 09:05:39.858 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.061 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.337 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.338 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.338 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.339 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.339 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.340 253665 INFO nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Terminating instance
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquired lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.547 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Releasing lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.571 253665 DEBUG nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:05:40 compute-0 nova_compute[253661]: 2025-11-22 09:05:40.659 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:05:40 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 22 09:05:40 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 3.784s CPU time.
Nov 22 09:05:40 compute-0 systemd-machined[215941]: Machine qemu-2-instance-00000001 terminated.
Nov 22 09:05:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.002 253665 INFO nova.virt.libvirt.driver [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance destroyed successfully.
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.002 253665 DEBUG nova.objects.instance [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lazy-loading 'resources' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.071 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.085 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Releasing lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.086 253665 DEBUG nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:05:41 compute-0 ceph-mon[75021]: pgmap v1162: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 09:05:41 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 22 09:05:41 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4.468s CPU time.
Nov 22 09:05:41 compute-0 systemd-machined[215941]: Machine qemu-1-instance-00000002 terminated.
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.509 253665 INFO nova.virt.libvirt.driver [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance destroyed successfully.
Nov 22 09:05:41 compute-0 nova_compute[253661]: 2025-11-22 09:05:41.510 253665 DEBUG nova.objects.instance [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'resources' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 09:05:43 compute-0 ceph-mon[75021]: pgmap v1163: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 09:05:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 27 KiB/s wr, 173 op/s
Nov 22 09:05:45 compute-0 ceph-mon[75021]: pgmap v1164: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 27 KiB/s wr, 173 op/s
Nov 22 09:05:46 compute-0 nova_compute[253661]: 2025-11-22 09:05:46.710 253665 INFO nova.virt.libvirt.driver [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deleting instance files /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01_del
Nov 22 09:05:46 compute-0 nova_compute[253661]: 2025-11-22 09:05:46.711 253665 INFO nova.virt.libvirt.driver [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deletion of /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01_del complete
Nov 22 09:05:46 compute-0 nova_compute[253661]: 2025-11-22 09:05:46.718 253665 INFO nova.virt.libvirt.driver [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deleting instance files /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_del
Nov 22 09:05:46 compute-0 nova_compute[253661]: 2025-11-22 09:05:46.719 253665 INFO nova.virt.libvirt.driver [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deletion of /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_del complete
Nov 22 09:05:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 97 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 178 op/s
Nov 22 09:05:47 compute-0 ceph-mon[75021]: pgmap v1165: 305 pgs: 305 active+clean; 97 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 178 op/s
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.147 253665 DEBUG nova.virt.libvirt.host [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.147 253665 INFO nova.virt.libvirt.host [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] UEFI support detected
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.149 253665 INFO nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 6.58 seconds to destroy the instance on the hypervisor.
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.150 253665 DEBUG oslo.service.loopingcall [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.151 253665 DEBUG nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.151 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.219 253665 INFO nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 6.13 seconds to destroy the instance on the hypervisor.
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.220 253665 DEBUG oslo.service.loopingcall [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.221 253665 DEBUG nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.221 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.670 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.682 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.693 253665 INFO nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 0.54 seconds to deallocate network for instance.
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.767 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.768 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.772 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.783 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.796 253665 INFO nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 0.57 seconds to deallocate network for instance.
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.836 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:47 compute-0 nova_compute[253661]: 2025-11-22 09:05:47.946 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223022577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.338 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.346 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:05:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2223022577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.377 253665 ERROR nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [req-2eeba460-000b-4da4-af91-9b68da56f9d6] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID f0c5987a-d277-4022-aba2-19e7fecb4518.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-2eeba460-000b-4da4-af91-9b68da56f9d6"}]}
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.397 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.413 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.413 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.426 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.446 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.490 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997063578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.971 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:48 compute-0 nova_compute[253661]: 2025-11-22 09:05:48.979 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:05:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 196 op/s
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.030 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updated inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.031 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.031 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.066 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.069 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.102 253665 INFO nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Deleted allocations for instance b9672702-d5e5-407f-bc86-ee9e64f90a01
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.124 253665 DEBUG oslo_concurrency.processutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.185 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/997063578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:49 compute-0 ceph-mon[75021]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 196 op/s
Nov 22 09:05:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190372886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.562 253665 DEBUG oslo_concurrency.processutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.570 253665 DEBUG nova.compute.provider_tree [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.584 253665 DEBUG nova.scheduler.client.report [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.644 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.687 253665 INFO nova.scheduler.client.report [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Deleted allocations for instance 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb
Nov 22 09:05:49 compute-0 nova_compute[253661]: 2025-11-22 09:05:49.776 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3190372886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 149 op/s
Nov 22 09:05:51 compute-0 ceph-mon[75021]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 149 op/s
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:05:52
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'backups', 'images', 'cephfs.cephfs.meta']
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.724 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.724 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.766 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.945 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.946 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.954 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:05:52 compute-0 nova_compute[253661]: 2025-11-22 09:05:52.954 253665 INFO nova.compute.claims [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:05:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 150 op/s
Nov 22 09:05:53 compute-0 ceph-mon[75021]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 150 op/s
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.092 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:05:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970738560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.523 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.530 253665 DEBUG nova.compute.provider_tree [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.544 253665 DEBUG nova.scheduler.client.report [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.603 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.604 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:05:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.797 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.798 253665 DEBUG nova.network.neutron [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:05:53 compute-0 nova_compute[253661]: 2025-11-22 09:05:53.851 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.035 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:05:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3970738560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.303 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.305 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.305 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating image(s)
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.330 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.354 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.382 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.387 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.423 253665 DEBUG nova.network.neutron [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.424 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.455 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.456 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.457 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.458 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.483 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.487 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:54 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.817 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.877 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] resizing rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:05:54 compute-0 nova_compute[253661]: 2025-11-22 09:05:54.988 253665 DEBUG nova.objects.instance [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.003 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.004 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Ensure instance console log exists: /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.004 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.005 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.005 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.007 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.013 253665 WARNING nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.020 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.020 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.030 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.031 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.031 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.032 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.032 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.035 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.035 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.039 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:55 compute-0 ceph-mon[75021]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Nov 22 09:05:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985298784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.500 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.532 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.537 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:05:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123296519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.977 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.979 253665 DEBUG nova.objects.instance [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:05:55 compute-0 nova_compute[253661]: 2025-11-22 09:05:55.992 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <uuid>77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</uuid>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <name>instance-00000003</name>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-949701683</nova:name>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:05:55</nova:creationTime>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:user uuid="786abc4100494e9a9e9977b0d6534f9d">tempest-DeleteServersAdminTestJSON-1856447615-project-member</nova:user>
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <nova:project uuid="ae421771a97c45f7a0288a4b8cfd48c5">tempest-DeleteServersAdminTestJSON-1856447615</nova:project>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <system>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="serial">77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="uuid">77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </system>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <os>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </os>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <features>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </features>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk">
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config">
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </source>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:05:55 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/console.log" append="off"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <video>
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </video>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:05:55 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:05:55 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:05:55 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:05:55 compute-0 nova_compute[253661]: </domain>
Nov 22 09:05:55 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.000 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802340.9994485, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.000 253665 INFO nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Stopped (Lifecycle Event)
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.015 253665 DEBUG nova.compute.manager [None req-f219254b-530d-4d32-9b32-8bcc639701f7 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.239 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.240 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.241 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Using config drive
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.377 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2985298784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4123296519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.507 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802341.50647, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.508 253665 INFO nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Stopped (Lifecycle Event)
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.528 253665 DEBUG nova.compute.manager [None req-d28ca804-7a35-4e3d-843f-1d01dbdb0e1a - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.553 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating config drive at /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.558 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtv5v5xl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.686 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtv5v5xl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.716 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.721 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.919 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:56 compute-0 nova_compute[253661]: 2025-11-22 09:05:56.920 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deleting local config drive /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config because it was imported into RBD.
Nov 22 09:05:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 56 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 521 KiB/s wr, 54 op/s
Nov 22 09:05:57 compute-0 systemd-machined[215941]: New machine qemu-3-instance-00000003.
Nov 22 09:05:57 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.403 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802357.4028168, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.405 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Resumed (Lifecycle Event)
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.408 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.409 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.414 253665 INFO nova.virt.libvirt.driver [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance spawned successfully.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.414 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.424 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.430 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.433 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.433 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.434 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.434 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.435 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.435 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:05:57 compute-0 ceph-mon[75021]: pgmap v1170: 305 pgs: 305 active+clean; 56 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 521 KiB/s wr, 54 op/s
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.459 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.460 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802357.403159, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.461 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Started (Lifecycle Event)
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.479 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.510 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.532 253665 INFO nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 3.23 seconds to spawn the instance on the hypervisor.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.532 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.629 253665 INFO nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 4.71 seconds to build instance.
Nov 22 09:05:57 compute-0 nova_compute[253661]: 2025-11-22 09:05:57.655 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:05:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 22 09:05:59 compute-0 ceph-mon[75021]: pgmap v1171: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.951 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.953 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.954 253665 INFO nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Terminating instance
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquired lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:06:00 compute-0 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:06:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 22 09:06:01 compute-0 ceph-mon[75021]: pgmap v1172: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.121 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.385 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.402 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Releasing lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.403 253665 DEBUG nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:06:01 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 22 09:06:01 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.532s CPU time.
Nov 22 09:06:01 compute-0 systemd-machined[215941]: Machine qemu-3-instance-00000003 terminated.
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.622 253665 INFO nova.virt.libvirt.driver [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance destroyed successfully.
Nov 22 09:06:01 compute-0 nova_compute[253661]: 2025-11-22 09:06:01.623 253665 DEBUG nova.objects.instance [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'resources' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.172 253665 INFO nova.virt.libvirt.driver [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deleting instance files /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_del
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.174 253665 INFO nova.virt.libvirt.driver [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deletion of /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_del complete
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.251 253665 INFO nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.252 253665 DEBUG oslo.service.loopingcall [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.252 253665 DEBUG nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.253 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003482863067330859 of space, bias 1.0, pg target 0.10448589201992577 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.616 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.631 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.647 253665 INFO nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 0.39 seconds to deallocate network for instance.
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.740 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.741 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:02 compute-0 nova_compute[253661]: 2025-11-22 09:06:02.783 253665 DEBUG oslo_concurrency.processutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 805 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 09:06:03 compute-0 ceph-mon[75021]: pgmap v1173: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 805 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 09:06:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928697489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.216 253665 DEBUG oslo_concurrency.processutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.222 253665 DEBUG nova.compute.provider_tree [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.236 253665 DEBUG nova.scheduler.client.report [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.266 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.336 253665 INFO nova.scheduler.client.report [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Deleted allocations for instance 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1
Nov 22 09:06:03 compute-0 nova_compute[253661]: 2025-11-22 09:06:03.396 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1928697489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 56 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Nov 22 09:06:05 compute-0 ceph-mon[75021]: pgmap v1174: 305 pgs: 305 active+clean; 56 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Nov 22 09:06:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 09:06:07 compute-0 ceph-mon[75021]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 09:06:07 compute-0 podman[270292]: 2025-11-22 09:06:07.38423125 +0000 UTC m=+0.062540911 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd)
Nov 22 09:06:07 compute-0 podman[270291]: 2025-11-22 09:06:07.406240502 +0000 UTC m=+0.087477724 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:06:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Nov 22 09:06:09 compute-0 ceph-mon[75021]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Nov 22 09:06:09 compute-0 podman[270328]: 2025-11-22 09:06:09.436851793 +0000 UTC m=+0.127858889 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:06:10 compute-0 nova_compute[253661]: 2025-11-22 09:06:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 09:06:11 compute-0 ceph-mon[75021]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.248 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196221149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.684 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.889 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.891 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5085MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.892 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.892 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.950 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:06:11 compute-0 nova_compute[253661]: 2025-11-22 09:06:11.974 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2196221149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694971963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:12 compute-0 nova_compute[253661]: 2025-11-22 09:06:12.403 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:12 compute-0 nova_compute[253661]: 2025-11-22 09:06:12.411 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:12 compute-0 nova_compute[253661]: 2025-11-22 09:06:12.435 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:12 compute-0 nova_compute[253661]: 2025-11-22 09:06:12.472 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:06:12 compute-0 nova_compute[253661]: 2025-11-22 09:06:12.473 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 09:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3694971963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:13 compute-0 ceph-mon[75021]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 09:06:13 compute-0 nova_compute[253661]: 2025-11-22 09:06:13.473 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:13 compute-0 nova_compute[253661]: 2025-11-22 09:06:13.474 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:13 compute-0 nova_compute[253661]: 2025-11-22 09:06:13.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:14 compute-0 nova_compute[253661]: 2025-11-22 09:06:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:14 compute-0 nova_compute[253661]: 2025-11-22 09:06:14.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:06:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 63 op/s
Nov 22 09:06:15 compute-0 ceph-mon[75021]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 63 op/s
Nov 22 09:06:15 compute-0 nova_compute[253661]: 2025-11-22 09:06:15.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.245 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.621 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802361.6194797, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.622 253665 INFO nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Stopped (Lifecycle Event)
Nov 22 09:06:16 compute-0 nova_compute[253661]: 2025-11-22 09:06:16.743 253665 DEBUG nova.compute.manager [None req-ff40e480-f139-46f0-88fc-e7c0f61fc6f7 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Nov 22 09:06:17 compute-0 ceph-mon[75021]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Nov 22 09:06:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:18 compute-0 nova_compute[253661]: 2025-11-22 09:06:18.994 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:18 compute-0 nova_compute[253661]: 2025-11-22 09:06:18.994 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.030 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.155 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.156 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:19 compute-0 ceph-mon[75021]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.165 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.166 253665 INFO nova.compute.claims [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.298 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410429959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.756 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.764 253665 DEBUG nova.compute.provider_tree [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.789 253665 DEBUG nova.scheduler.client.report [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.825 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.826 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.892 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.893 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.925 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:06:19 compute-0 nova_compute[253661]: 2025-11-22 09:06:19.983 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.105 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.106 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.106 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating image(s)
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.126 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.157 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3410429959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.191 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.195 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.257 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.258 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.259 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.259 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.281 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.286 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.459 253665 WARNING oslo_policy.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.459 253665 WARNING oslo_policy.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 09:06:20 compute-0 nova_compute[253661]: 2025-11-22 09:06:20.462 253665 DEBUG nova.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fabb775e44cc437680ea15de97d50877', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:06:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:06:21 compute-0 ceph-mon[75021]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.307 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.376 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] resizing rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.504 253665 DEBUG nova.objects.instance [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.524 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.524 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Ensure instance console log exists: /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:21 compute-0 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:22 compute-0 nova_compute[253661]: 2025-11-22 09:06:22.675 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Successfully created port: f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 59 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Nov 22 09:06:23 compute-0 ceph-mon[75021]: pgmap v1183: 305 pgs: 305 active+clean; 59 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Nov 22 09:06:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:23.653 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:06:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:23.655 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:06:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:23 compute-0 nova_compute[253661]: 2025-11-22 09:06:23.930 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Successfully updated port: f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:06:23 compute-0 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:06:23 compute-0 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:06:23 compute-0 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:06:24 compute-0 nova_compute[253661]: 2025-11-22 09:06:24.140 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:06:24 compute-0 nova_compute[253661]: 2025-11-22 09:06:24.515 253665 DEBUG nova.compute.manager [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:24 compute-0 nova_compute[253661]: 2025-11-22 09:06:24.516 253665 DEBUG nova.compute.manager [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing instance network info cache due to event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:06:24 compute-0 nova_compute[253661]: 2025-11-22 09:06:24.516 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:06:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:06:25 compute-0 ceph-mon[75021]: pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.898 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.929 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.930 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance network_info: |[{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.931 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.932 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.937 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start _get_guest_xml network_info=[{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.945 253665 WARNING nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.951 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.952 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.958 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.958 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.959 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:06:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1309919375',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-681809065',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:06:25 compute-0 nova_compute[253661]: 2025-11-22 09:06:25.965 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873309407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.451 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.479 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1873309407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.484 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648644769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.947 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.949 253665 DEBUG nova.virt.libvirt.vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.950 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.951 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.953 253665 DEBUG nova.objects.instance [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.966 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <uuid>4e90ab44-2028-4ef8-ab7a-3c603be3e750</uuid>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <name>instance-00000004</name>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-382486397</nova:name>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:06:25</nova:creationTime>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-681809065">
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:user uuid="fabb775e44cc437680ea15de97d50877">tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member</nova:user>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:project uuid="4e0153a0f27f4c68ad2f7910dc78a992">tempest-ServersWithSpecificFlavorTestJSON-1107415015</nova:project>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <nova:port uuid="f5fa33e1-ab24-4daa-9790-5e0dbcbf4907">
Nov 22 09:06:26 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="serial">4e90ab44-2028-4ef8-ab7a-3c603be3e750</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="uuid">4e90ab44-2028-4ef8-ab7a-3c603be3e750</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk">
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config">
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:78:b6:44"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <target dev="tapf5fa33e1-ab"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/console.log" append="off"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:06:26 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:06:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:06:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:06:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:06:26 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.968 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Preparing to wait for external event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.970 253665 DEBUG nova.virt.libvirt.vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.970 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.971 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:06:26 compute-0 nova_compute[253661]: 2025-11-22 09:06:26.971 253665 DEBUG os_vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:06:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.057 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.058 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.058 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.060 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.080 253665 INFO oslo.privsep.daemon [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2j1fng5x/privsep.sock']
Nov 22 09:06:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1648644769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:27 compute-0 ceph-mon[75021]: pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.824 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updated VIF entry in instance network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.825 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.839 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:06:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.951 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.992 253665 INFO oslo.privsep.daemon [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Spawned new privsep daemon via rootwrap
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.778 270653 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.784 270653 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.787 270653 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 22 09:06:27 compute-0 nova_compute[253661]: 2025-11-22 09:06:27.787 270653 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270653
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.465 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5fa33e1-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.466 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5fa33e1-ab, col_values=(('external_ids', {'iface-id': 'f5fa33e1-ab24-4daa-9790-5e0dbcbf4907', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:b6:44', 'vm-uuid': '4e90ab44-2028-4ef8-ab7a-3c603be3e750'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:28 compute-0 NetworkManager[48920]: <info>  [1763802388.4703] manager: (tapf5fa33e1-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.479 253665 INFO os_vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab')
Nov 22 09:06:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 22 09:06:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 22 09:06:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.555 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.556 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.556 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No VIF found with MAC fa:16:3e:78:b6:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.558 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Using config drive
Nov 22 09:06:28 compute-0 nova_compute[253661]: 2025-11-22 09:06:28.585 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:28.657 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.234 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating config drive at /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.241 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o3zhabs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.371 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o3zhabs" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.398 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.403 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:29 compute-0 ceph-mon[75021]: osdmap e158: 3 total, 3 up, 3 in
Nov 22 09:06:29 compute-0 ceph-mon[75021]: pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.602 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.603 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deleting local config drive /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config because it was imported into RBD.
Nov 22 09:06:29 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 22 09:06:29 compute-0 kernel: tapf5fa33e1-ab: entered promiscuous mode
Nov 22 09:06:29 compute-0 NetworkManager[48920]: <info>  [1763802389.6711] manager: (tapf5fa33e1-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 22 09:06:29 compute-0 ovn_controller[152872]: 2025-11-22T09:06:29Z|00027|binding|INFO|Claiming lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for this chassis.
Nov 22 09:06:29 compute-0 ovn_controller[152872]: 2025-11-22T09:06:29Z|00028|binding|INFO|f5fa33e1-ab24-4daa-9790-5e0dbcbf4907: Claiming fa:16:3e:78:b6:44 10.100.0.6
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.676 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:29 compute-0 systemd-udevd[270731]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:06:29 compute-0 NetworkManager[48920]: <info>  [1763802389.7187] device (tapf5fa33e1-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:06:29 compute-0 NetworkManager[48920]: <info>  [1763802389.7196] device (tapf5fa33e1-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:06:29 compute-0 systemd-machined[215941]: New machine qemu-4-instance-00000004.
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:29 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 22 09:06:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.752 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b6:44 10.100.0.6'], port_security=['fa:16:3e:78:b6:44 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e90ab44-2028-4ef8-ab7a-3c603be3e750', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:06:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.755 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 in datapath 27705719-461d-420b-a9b8-656219b295b7 bound to our chassis
Nov 22 09:06:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.758 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:06:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.760 162862 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwr905jmy/privsep.sock']
Nov 22 09:06:29 compute-0 ovn_controller[152872]: 2025-11-22T09:06:29Z|00029|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 ovn-installed in OVS
Nov 22 09:06:29 compute-0 ovn_controller[152872]: 2025-11-22T09:06:29Z|00030|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 up in Southbound
Nov 22 09:06:29 compute-0 nova_compute[253661]: 2025-11-22 09:06:29.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.541 162862 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.542 162862 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwr905jmy/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.364 270751 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.368 270751 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.370 270751 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.371 270751 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270751
Nov 22 09:06:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54b32ca0-b11a-48a7-a4c3-68bd08944e33]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:30 compute-0 nova_compute[253661]: 2025-11-22 09:06:30.714 253665 DEBUG nova.compute.manager [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:30 compute-0 nova_compute[253661]: 2025-11-22 09:06:30.715 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:30 compute-0 nova_compute[253661]: 2025-11-22 09:06:30.715 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:30 compute-0 nova_compute[253661]: 2025-11-22 09:06:30.716 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:30 compute-0 nova_compute[253661]: 2025-11-22 09:06:30.716 253665 DEBUG nova.compute.manager [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Processing event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:06:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 09:06:31 compute-0 ceph-mon[75021]: pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.632 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.633 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6316912, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.633 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Started (Lifecycle Event)
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.635 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.640 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance spawned successfully.
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.641 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.660 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.664 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.664 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.665 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.665 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.666 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.666 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.704 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6345031, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.705 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Paused (Lifecycle Event)
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.722 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.726 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6355124, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.726 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Resumed (Lifecycle Event)
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.744 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.749 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.777 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.786 253665 INFO nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 11.68 seconds to spawn the instance on the hypervisor.
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.787 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.864 253665 INFO nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 12.74 seconds to build instance.
Nov 22 09:06:31 compute-0 nova_compute[253661]: 2025-11-22 09:06:31.888 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 22 09:06:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 22 09:06:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 DEBUG nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:06:32 compute-0 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 WARNING nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received unexpected event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with vm_state active and task_state None.
Nov 22 09:06:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 27 op/s
Nov 22 09:06:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.206 270751 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.207 270751 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.207 270751 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:33 compute-0 ceph-mon[75021]: osdmap e159: 3 total, 3 up, 3 in
Nov 22 09:06:33 compute-0 ceph-mon[75021]: pgmap v1190: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 27 op/s
Nov 22 09:06:33 compute-0 nova_compute[253661]: 2025-11-22 09:06:33.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:34 compute-0 nova_compute[253661]: 2025-11-22 09:06:34.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:34 compute-0 sudo[270798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:34 compute-0 sudo[270798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:34 compute-0 sudo[270798]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:34 compute-0 sudo[270823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:06:34 compute-0 sudo[270823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:34 compute-0 sudo[270823]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:34 compute-0 sudo[270848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:34 compute-0 sudo[270848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:34 compute-0 sudo[270848]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:34 compute-0 sudo[270873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:06:34 compute-0 sudo[270873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 25 KiB/s wr, 83 op/s
Nov 22 09:06:35 compute-0 ceph-mon[75021]: pgmap v1191: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 25 KiB/s wr, 83 op/s
Nov 22 09:06:36 compute-0 podman[270970]: 2025-11-22 09:06:36.000765154 +0000 UTC m=+0.686237317 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:06:36 compute-0 podman[270991]: 2025-11-22 09:06:36.221721432 +0000 UTC m=+0.073352944 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:06:36 compute-0 podman[270970]: 2025-11-22 09:06:36.464399573 +0000 UTC m=+1.149871746 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:06:36 compute-0 nova_compute[253661]: 2025-11-22 09:06:36.970 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9752] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9762] device (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9777] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9782] device (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9797] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9806] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9811] device (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 09:06:36 compute-0 NetworkManager[48920]: <info>  [1763802396.9815] device (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 09:06:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d77bf46-6a0d-4839-b503-5dc02f4b140b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.996 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27705719-41 in ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:06:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.999 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27705719-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:06:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe76fd1-a785-4dd1-992a-f5b2fa3de569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 136 op/s
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.022 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08344934-1748-4fd4-a72b-85e2cdbda257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.049 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[055fb78d-8d30-4fa7-a3af-6d6af5a513d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.107 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[468b06ba-3768-473e-b941-fe2a8cb0aa85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.109 162862 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp5jmxc_6u/privsep.sock']
Nov 22 09:06:37 compute-0 ceph-mon[75021]: pgmap v1192: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 136 op/s
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.457 253665 DEBUG nova.compute.manager [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.458 253665 DEBUG nova.compute.manager [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing instance network info cache due to event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.458 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.459 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:06:37 compute-0 nova_compute[253661]: 2025-11-22 09:06:37.459 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:06:37 compute-0 podman[271108]: 2025-11-22 09:06:37.565113192 +0000 UTC m=+0.079948752 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:06:37 compute-0 podman[271107]: 2025-11-22 09:06:37.58532575 +0000 UTC m=+0.103918951 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 09:06:37 compute-0 sudo[270873]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:06:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:06:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.884 162862 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.886 162862 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5jmxc_6u/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.715 271173 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.720 271173 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.722 271173 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 09:06:37 compute-0 sudo[271174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.722 271173 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271173
Nov 22 09:06:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[55b8e029-8087-45ea-9a50-863fbd5a5887]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:37 compute-0 sudo[271174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:37 compute-0 sudo[271174]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:37 compute-0 sudo[271201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:06:37 compute-0 sudo[271201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:37 compute-0 sudo[271201]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:38 compute-0 sudo[271227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:38 compute-0 sudo[271227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:38 compute-0 sudo[271227]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:38 compute-0 sudo[271252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:06:38 compute-0 sudo[271252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:38 compute-0 nova_compute[253661]: 2025-11-22 09:06:38.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:38 compute-0 sudo[271252]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:06:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:06:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:06:38 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:06:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:06:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 22 09:06:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 125 op/s
Nov 22 09:06:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7a78a27a-2d2a-4f76-8964-ea0779fd8cba does not exist
Nov 22 09:06:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 28155b19-d7ce-436c-960e-f5f27da8f577 does not exist
Nov 22 09:06:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 729b9385-c492-4c8a-9771-e844f7990655 does not exist
Nov 22 09:06:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:06:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:06:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:06:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:06:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:06:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:06:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 22 09:06:39 compute-0 sudo[271308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:39 compute-0 sudo[271308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:39 compute-0 sudo[271308]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.121 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7246d9-0af7-401f-8b7a-644f55454fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:06:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.141 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6308f-ecef-4d6e-9044-e1c387621cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 NetworkManager[48920]: <info>  [1763802399.1427] manager: (tap27705719-40): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.148 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updated VIF entry in instance network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.148 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:39 compute-0 systemd-udevd[271363]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.180 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79e340c1-8563-432c-ad8b-c13610b0d64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.183 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e798e78-ad49-40c0-81e0-14fbc0261fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 sudo[271337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:06:39 compute-0 sudo[271337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:39 compute-0 sudo[271337]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:39 compute-0 NetworkManager[48920]: <info>  [1763802399.2203] device (tap27705719-40): carrier: link connected
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.227 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f5e5a4-c9e5-4b92-a50f-6f277a7fb75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.251 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc18d1e1-65d4-404a-85f0-60b66698b2c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524995, 'reachable_time': 36724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271396, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 sudo[271383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:39 compute-0 sudo[271383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2870749c-4efd-4f5b-84a2-d9fd3a781506]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:b616'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524995, 'tstamp': 524995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271407, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 sudo[271383]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.288 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81608563-a0e8-49ef-9784-f84bd8db2193]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524995, 'reachable_time': 36724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271410, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.293 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d882f45e-e56b-4f2c-ad7f-8837b780b722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 sudo[271411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:06:39 compute-0 sudo[271411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2e3bb8-c3b7-4ddb-8cdb-37a21b98bcd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27705719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:39 compute-0 NetworkManager[48920]: <info>  [1763802399.4300] manager: (tap27705719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 kernel: tap27705719-40: entered promiscuous mode
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27705719-40, col_values=(('external_ids', {'iface-id': '66390fc9-eaea-4181-96b2-4d926c45e6e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 ovn_controller[152872]: 2025-11-22T09:06:39Z|00031|binding|INFO|Releasing lport 66390fc9-eaea-4181-96b2-4d926c45e6e5 from this chassis (sb_readonly=0)
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 nova_compute[253661]: 2025-11-22 09:06:39.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.461 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.462 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95ff9dee-5b49-44df-b4b9-49f7f1057bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.464 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:06:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:06:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.465 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'env', 'PROCESS_TAG=haproxy-27705719-461d-420b-a9b8-656219b295b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27705719-461d-420b-a9b8-656219b295b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:06:39 compute-0 podman[271485]: 2025-11-22 09:06:39.732554838 +0000 UTC m=+0.025932627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:39 compute-0 podman[271485]: 2025-11-22 09:06:39.90944112 +0000 UTC m=+0.202818889 container create e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:06:40 compute-0 systemd[1]: Started libpod-conmon-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope.
Nov 22 09:06:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:40 compute-0 podman[271527]: 2025-11-22 09:06:40.114602807 +0000 UTC m=+0.137537183 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:06:40 compute-0 ceph-mon[75021]: pgmap v1193: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 125 op/s
Nov 22 09:06:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:06:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:06:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:06:40 compute-0 ceph-mon[75021]: osdmap e160: 3 total, 3 up, 3 in
Nov 22 09:06:40 compute-0 podman[271485]: 2025-11-22 09:06:40.530551464 +0000 UTC m=+0.823929263 container init e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:06:40 compute-0 podman[271485]: 2025-11-22 09:06:40.544610974 +0000 UTC m=+0.837988743 container start e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:06:40 compute-0 frosty_feynman[271554]: 167 167
Nov 22 09:06:40 compute-0 systemd[1]: libpod-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope: Deactivated successfully.
Nov 22 09:06:40 compute-0 conmon[271554]: conmon e7e73344c562f26e8948 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope/container/memory.events
Nov 22 09:06:40 compute-0 podman[271485]: 2025-11-22 09:06:40.845489992 +0000 UTC m=+1.138867781 container attach e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:06:40 compute-0 podman[271485]: 2025-11-22 09:06:40.849296374 +0000 UTC m=+1.142674143 container died e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:06:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 122 op/s
Nov 22 09:06:41 compute-0 podman[271527]: 2025-11-22 09:06:41.239859849 +0000 UTC m=+1.262794195 container create ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:06:41 compute-0 systemd[1]: Started libpod-conmon-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope.
Nov 22 09:06:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a94e4d511baf87b86317c8fc75c84c67f7a8300dfb76d8ed2f43fa82c4669/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed641bd7a5d551a1a7633b060359d388c3f7a16daff85e9fdf48761f0fcb46b-merged.mount: Deactivated successfully.
Nov 22 09:06:41 compute-0 ceph-mon[75021]: pgmap v1195: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 122 op/s
Nov 22 09:06:41 compute-0 podman[271485]: 2025-11-22 09:06:41.828971928 +0000 UTC m=+2.122349697 container remove e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:06:42 compute-0 podman[271527]: 2025-11-22 09:06:42.078996138 +0000 UTC m=+2.101930504 container init ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:06:42 compute-0 podman[271527]: 2025-11-22 09:06:42.086816448 +0000 UTC m=+2.109750794 container start ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:06:42 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : New worker (271609) forked
Nov 22 09:06:42 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : Loading success.
Nov 22 09:06:42 compute-0 systemd[1]: libpod-conmon-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope: Deactivated successfully.
Nov 22 09:06:42 compute-0 podman[271592]: 2025-11-22 09:06:42.092791562 +0000 UTC m=+0.121519437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:42 compute-0 podman[271592]: 2025-11-22 09:06:42.189708983 +0000 UTC m=+0.218436848 container create 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:06:42 compute-0 podman[271513]: 2025-11-22 09:06:42.225798135 +0000 UTC m=+2.272175638 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:06:42 compute-0 systemd[1]: Started libpod-conmon-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope.
Nov 22 09:06:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:42 compute-0 podman[271592]: 2025-11-22 09:06:42.334109051 +0000 UTC m=+0.362836946 container init 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:06:42 compute-0 podman[271592]: 2025-11-22 09:06:42.344283156 +0000 UTC m=+0.373011021 container start 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:06:42 compute-0 podman[271592]: 2025-11-22 09:06:42.383582135 +0000 UTC m=+0.412310020 container attach 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:06:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 106 op/s
Nov 22 09:06:43 compute-0 ceph-mon[75021]: pgmap v1196: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 106 op/s
Nov 22 09:06:43 compute-0 nova_compute[253661]: 2025-11-22 09:06:43.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:43 compute-0 fervent_borg[271620]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:06:43 compute-0 fervent_borg[271620]: --> relative data size: 1.0
Nov 22 09:06:43 compute-0 fervent_borg[271620]: --> All data devices are unavailable
Nov 22 09:06:43 compute-0 systemd[1]: libpod-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Deactivated successfully.
Nov 22 09:06:43 compute-0 podman[271592]: 2025-11-22 09:06:43.558908385 +0000 UTC m=+1.587636260 container died 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:06:43 compute-0 systemd[1]: libpod-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Consumed 1.142s CPU time.
Nov 22 09:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b-merged.mount: Deactivated successfully.
Nov 22 09:06:43 compute-0 podman[271592]: 2025-11-22 09:06:43.644027964 +0000 UTC m=+1.672755829 container remove 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:43 compute-0 systemd[1]: libpod-conmon-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Deactivated successfully.
Nov 22 09:06:43 compute-0 sudo[271411]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:43 compute-0 sudo[271664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:43 compute-0 sudo[271664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:43 compute-0 sudo[271664]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.802337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403802388, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1151, "num_deletes": 506, "total_data_size": 1168463, "memory_usage": 1198000, "flush_reason": "Manual Compaction"}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403810214, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 869500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23265, "largest_seqno": 24415, "table_properties": {"data_size": 864904, "index_size": 1608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14042, "raw_average_key_size": 18, "raw_value_size": 853305, "raw_average_value_size": 1145, "num_data_blocks": 72, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802334, "oldest_key_time": 1763802334, "file_creation_time": 1763802403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7936 microseconds, and 4019 cpu microseconds.
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.810277) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 869500 bytes OK
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.810310) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812686) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812723) EVENT_LOG_v1 {"time_micros": 1763802403812713, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1162048, prev total WAL file size 1162048, number of live WAL files 2.
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.813412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(849KB)], [53(9238KB)]
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403813467, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10329974, "oldest_snapshot_seqno": -1}
Nov 22 09:06:43 compute-0 sudo[271689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:06:43 compute-0 sudo[271689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:43 compute-0 sudo[271689]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4601 keys, 7206770 bytes, temperature: kUnknown
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403881363, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7206770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7175822, "index_size": 18358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 115307, "raw_average_key_size": 25, "raw_value_size": 7092369, "raw_average_value_size": 1541, "num_data_blocks": 760, "num_entries": 4601, "num_filter_entries": 4601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.881649) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7206770 bytes
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.884027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 106.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.0 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(20.2) write-amplify(8.3) OK, records in: 5609, records dropped: 1008 output_compression: NoCompression
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.884249) EVENT_LOG_v1 {"time_micros": 1763802403884038, "job": 28, "event": "compaction_finished", "compaction_time_micros": 67978, "compaction_time_cpu_micros": 18828, "output_level": 6, "num_output_files": 1, "total_output_size": 7206770, "num_input_records": 5609, "num_output_records": 4601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403884578, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403886157, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.813352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:06:43 compute-0 sudo[271714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:43 compute-0 sudo[271714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:43 compute-0 sudo[271714]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:43 compute-0 sudo[271739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:06:43 compute-0 sudo[271739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:44 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 09:06:44 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.334417162 +0000 UTC m=+0.041902050 container create 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:06:44 compute-0 systemd[1]: Started libpod-conmon-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope.
Nov 22 09:06:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.317911484 +0000 UTC m=+0.025396392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.426599431 +0000 UTC m=+0.134084319 container init 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.434723806 +0000 UTC m=+0.142208704 container start 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.438424045 +0000 UTC m=+0.145908933 container attach 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:44 compute-0 eloquent_kepler[271821]: 167 167
Nov 22 09:06:44 compute-0 systemd[1]: libpod-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope: Deactivated successfully.
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.441501449 +0000 UTC m=+0.148986337 container died 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4614ba6e52a28ff53134013a2170860323faa1b10def68bb92168122f30f6c-merged.mount: Deactivated successfully.
Nov 22 09:06:44 compute-0 podman[271804]: 2025-11-22 09:06:44.493696486 +0000 UTC m=+0.201181374 container remove 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:44 compute-0 systemd[1]: libpod-conmon-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope: Deactivated successfully.
Nov 22 09:06:44 compute-0 podman[271845]: 2025-11-22 09:06:44.672591591 +0000 UTC m=+0.038905536 container create a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:06:44 compute-0 systemd[1]: Started libpod-conmon-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope.
Nov 22 09:06:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:44 compute-0 podman[271845]: 2025-11-22 09:06:44.657062198 +0000 UTC m=+0.023376163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:44 compute-0 podman[271845]: 2025-11-22 09:06:44.756419989 +0000 UTC m=+0.122733954 container init a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:06:44 compute-0 podman[271845]: 2025-11-22 09:06:44.763176992 +0000 UTC m=+0.129490937 container start a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:06:44 compute-0 podman[271845]: 2025-11-22 09:06:44.766397659 +0000 UTC m=+0.132711604 container attach a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.847 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.848 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.888 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.973 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.974 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.984 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:06:44 compute-0 nova_compute[253661]: 2025-11-22 09:06:44.985 253665 INFO nova.compute.claims [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:06:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Nov 22 09:06:45 compute-0 ceph-mon[75021]: pgmap v1197: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.189 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:45 compute-0 ovn_controller[152872]: 2025-11-22T09:06:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:b6:44 10.100.0.6
Nov 22 09:06:45 compute-0 ovn_controller[152872]: 2025-11-22T09:06:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:b6:44 10.100.0.6
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]: {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     "0": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "devices": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "/dev/loop3"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             ],
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_name": "ceph_lv0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_size": "21470642176",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "name": "ceph_lv0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "tags": {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_name": "ceph",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.crush_device_class": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.encrypted": "0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_id": "0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.vdo": "0"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             },
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "vg_name": "ceph_vg0"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         }
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     ],
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     "1": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "devices": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "/dev/loop4"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             ],
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_name": "ceph_lv1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_size": "21470642176",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "name": "ceph_lv1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "tags": {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_name": "ceph",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.crush_device_class": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.encrypted": "0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_id": "1",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.vdo": "0"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             },
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "vg_name": "ceph_vg1"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         }
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     ],
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     "2": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "devices": [
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "/dev/loop5"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             ],
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_name": "ceph_lv2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_size": "21470642176",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "name": "ceph_lv2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "tags": {
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.cluster_name": "ceph",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.crush_device_class": "",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.encrypted": "0",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osd_id": "2",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:                 "ceph.vdo": "0"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             },
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "type": "block",
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:             "vg_name": "ceph_vg2"
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:         }
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]:     ]
Nov 22 09:06:45 compute-0 lucid_sinoussi[271862]: }
Nov 22 09:06:45 compute-0 systemd[1]: libpod-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope: Deactivated successfully.
Nov 22 09:06:45 compute-0 podman[271845]: 2025-11-22 09:06:45.586293855 +0000 UTC m=+0.952607820 container died a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0-merged.mount: Deactivated successfully.
Nov 22 09:06:45 compute-0 podman[271845]: 2025-11-22 09:06:45.661358172 +0000 UTC m=+1.027672117 container remove a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:06:45 compute-0 systemd[1]: libpod-conmon-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope: Deactivated successfully.
Nov 22 09:06:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488326385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:45 compute-0 sudo[271739]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.734 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.746 253665 DEBUG nova.compute.provider_tree [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.766 253665 DEBUG nova.scheduler.client.report [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:45 compute-0 sudo[271905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:45 compute-0 sudo[271905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:45 compute-0 sudo[271905]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:45 compute-0 sudo[271930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:06:45 compute-0 sudo[271930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:45 compute-0 sudo[271930]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.865 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.869 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:06:45 compute-0 sudo[271955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:45 compute-0 sudo[271955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:45 compute-0 sudo[271955]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.947 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.948 253665 DEBUG nova.network.neutron [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:06:45 compute-0 sudo[271980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:06:45 compute-0 sudo[271980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:45 compute-0 nova_compute[253661]: 2025-11-22 09:06:45.986 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.093 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:06:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3488326385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.275 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.277 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.278 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating image(s)
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.306 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.337 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.358976024 +0000 UTC m=+0.053323404 container create 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.366 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.375 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:46 compute-0 systemd[1]: Started libpod-conmon-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope.
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.335085539 +0000 UTC m=+0.029432949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.444 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.446 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.449197876 +0000 UTC m=+0.143545276 container init 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.45724964 +0000 UTC m=+0.151597020 container start 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.461218085 +0000 UTC m=+0.155565485 container attach 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:06:46 compute-0 hungry_swanson[272114]: 167 167
Nov 22 09:06:46 compute-0 systemd[1]: libpod-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope: Deactivated successfully.
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.467046756 +0000 UTC m=+0.161394156 container died 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.469 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.474 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f0480eb4c88024d2265b3ed1e228bf200adfc052998d7e0f7c3d434e233c6f-merged.mount: Deactivated successfully.
Nov 22 09:06:46 compute-0 podman[272051]: 2025-11-22 09:06:46.518018332 +0000 UTC m=+0.212365712 container remove 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:06:46 compute-0 systemd[1]: libpod-conmon-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope: Deactivated successfully.
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.539 253665 DEBUG nova.network.neutron [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.540 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:06:46 compute-0 podman[272176]: 2025-11-22 09:06:46.735074687 +0000 UTC m=+0.073858358 container create 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:06:46 compute-0 podman[272176]: 2025-11-22 09:06:46.688827944 +0000 UTC m=+0.027611635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:06:46 compute-0 systemd[1]: Started libpod-conmon-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope.
Nov 22 09:06:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.853 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:46 compute-0 podman[272176]: 2025-11-22 09:06:46.900998981 +0000 UTC m=+0.239782672 container init 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:06:46 compute-0 podman[272176]: 2025-11-22 09:06:46.909290831 +0000 UTC m=+0.248074512 container start 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:06:46 compute-0 podman[272176]: 2025-11-22 09:06:46.932573041 +0000 UTC m=+0.271356742 container attach 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.936 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] resizing rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.981 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:46 compute-0 nova_compute[253661]: 2025-11-22 09:06:46.981 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 108 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 57 op/s
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.040 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.048 253665 DEBUG nova.objects.instance [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Ensure instance console log exists: /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.060 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.060 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.061 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.067 253665 WARNING nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.073 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.078 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.082 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.086 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.089 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:47 compute-0 ceph-mon[75021]: pgmap v1198: 305 pgs: 305 active+clean; 108 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 57 op/s
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.283 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.284 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.292 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.293 253665 INFO nova.compute.claims [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.446 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770213352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.581 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.606 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.610 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21243115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.905 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.912 253665 DEBUG nova.compute.provider_tree [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.927 253665 DEBUG nova.scheduler.client.report [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.993 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:47 compute-0 nova_compute[253661]: 2025-11-22 09:06:47.994 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:06:48 compute-0 sharp_jennings[272192]: {
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_id": 1,
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "type": "bluestore"
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     },
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_id": 0,
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "type": "bluestore"
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     },
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_id": 2,
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:         "type": "bluestore"
Nov 22 09:06:48 compute-0 sharp_jennings[272192]:     }
Nov 22 09:06:48 compute-0 sharp_jennings[272192]: }
Nov 22 09:06:48 compute-0 systemd[1]: libpod-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Deactivated successfully.
Nov 22 09:06:48 compute-0 systemd[1]: libpod-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Consumed 1.155s CPU time.
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.094 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.094 253665 DEBUG nova.network.neutron [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:06:48 compute-0 podman[272379]: 2025-11-22 09:06:48.123535158 +0000 UTC m=+0.028514877 container died 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.124 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:06:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2770213352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/21243115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825-merged.mount: Deactivated successfully.
Nov 22 09:06:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905890687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:48 compute-0 podman[272379]: 2025-11-22 09:06:48.192332304 +0000 UTC m=+0.097312003 container remove 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:06:48 compute-0 systemd[1]: libpod-conmon-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Deactivated successfully.
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.219 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.221 253665 DEBUG nova.objects.instance [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:48 compute-0 sudo[271980]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.234 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <uuid>4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</uuid>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <name>instance-00000005</name>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1457643607</nova:name>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:06:47</nova:creationTime>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:user uuid="62c2fa81e90346db80e713e8b110de6e">tempest-ListImageFiltersTestJSON-1780489870-project-member</nova:user>
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <nova:project uuid="4d149c68c4874b3bbb2b6c134b8855e0">tempest-ListImageFiltersTestJSON-1780489870</nova:project>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <system>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="serial">4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="uuid">4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </system>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <os>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </os>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <features>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </features>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk">
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config">
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/console.log" append="off"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <video>
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </video>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:06:48 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:06:48 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:06:48 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:06:48 compute-0 nova_compute[253661]: </domain>
Nov 22 09:06:48 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:06:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:06:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.249 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:06:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 404adf2b-76d2-4c7c-80d1-8f19d39135b8 does not exist
Nov 22 09:06:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8ec49176-660e-47c4-9382-9d9213404e56 does not exist
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.296 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.297 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.297 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Using config drive
Nov 22 09:06:48 compute-0 sudo[272397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.323 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 sudo[272397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:48 compute-0 sudo[272397]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:48 compute-0 sudo[272440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:06:48 compute-0 sudo[272440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.395 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:06:48 compute-0 sudo[272440]: pam_unix(sudo:session): session closed for user root
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.397 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.397 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating image(s)
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.421 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.445 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.472 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.476 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.523 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating config drive at /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.529 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzstiju50 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.555 253665 DEBUG nova.network.neutron [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.556 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.556 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.557 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.557 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.558 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.577 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.580 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bfc23def-6d15-4b5e-959e-3165bc676f9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.663 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzstiju50" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.689 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.695 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.941 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bfc23def-6d15-4b5e-959e-3165bc676f9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.977 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:48 compute-0 nova_compute[253661]: 2025-11-22 09:06:48.978 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deleting local config drive /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config because it was imported into RBD.
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.014 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] resizing rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:06:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Nov 22 09:06:49 compute-0 systemd-machined[215941]: New machine qemu-5-instance-00000005.
Nov 22 09:06:49 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.119 253665 DEBUG nova.objects.instance [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'migration_context' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2905890687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:06:49 compute-0 ceph-mon[75021]: pgmap v1199: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.133 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.133 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Ensure instance console log exists: /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.135 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.140 253665 WARNING nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.145 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.145 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.148 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.148 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.154 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802409.498485, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.500 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Resumed (Lifecycle Event)
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.503 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.503 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.507 253665 INFO nova.virt.libvirt.driver [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance spawned successfully.
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.507 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.535 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.535 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.536 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.536 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.537 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.537 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.564 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802409.500015, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.565 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Started (Lifecycle Event)
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.591 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.610 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969957770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.650 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.669 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:49 compute-0 nova_compute[253661]: 2025-11-22 09:06:49.672 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790742848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.145 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.148 253665 DEBUG nova.objects.instance [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.161 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <uuid>bfc23def-6d15-4b5e-959e-3165bc676f9c</uuid>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <name>instance-00000006</name>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1335877445</nova:name>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:06:49</nova:creationTime>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:user uuid="62c2fa81e90346db80e713e8b110de6e">tempest-ListImageFiltersTestJSON-1780489870-project-member</nova:user>
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <nova:project uuid="4d149c68c4874b3bbb2b6c134b8855e0">tempest-ListImageFiltersTestJSON-1780489870</nova:project>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <system>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="serial">bfc23def-6d15-4b5e-959e-3165bc676f9c</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="uuid">bfc23def-6d15-4b5e-959e-3165bc676f9c</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </system>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <os>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </os>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <features>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </features>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk">
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config">
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/console.log" append="off"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <video>
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </video>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:06:50 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:06:50 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:06:50 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:06:50 compute-0 nova_compute[253661]: </domain>
Nov 22 09:06:50 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.174 253665 INFO nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 3.90 seconds to spawn the instance on the hypervisor.
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.175 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.222 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.222 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.223 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Using config drive
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.245 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1969957770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/790742848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.386 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating config drive at /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.394 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_s3qlga execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.528 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_s3qlga" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.555 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.559 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.593 253665 INFO nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 5.65 seconds to build instance.
Nov 22 09:06:50 compute-0 nova_compute[253661]: 2025-11-22 09:06:50.684 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 3.3 MiB/s wr, 73 op/s
Nov 22 09:06:51 compute-0 ceph-mon[75021]: pgmap v1200: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 3.3 MiB/s wr, 73 op/s
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.342 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.343 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deleting local config drive /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config because it was imported into RBD.
Nov 22 09:06:51 compute-0 systemd-machined[215941]: New machine qemu-6-instance-00000006.
Nov 22 09:06:51 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.857 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.860 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.861 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802411.858862, bfc23def-6d15-4b5e-959e-3165bc676f9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.861 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Resumed (Lifecycle Event)
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.863 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.864 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.868 253665 INFO nova.virt.libvirt.driver [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance spawned successfully.
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.868 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.897 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.903 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.904 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.905 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.905 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.906 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.906 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.914 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.937 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.940 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.941 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802411.8589442, bfc23def-6d15-4b5e-959e-3165bc676f9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.941 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Started (Lifecycle Event)
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.966 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.971 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:51 compute-0 nova_compute[253661]: 2025-11-22 09:06:51.984 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.102 253665 INFO nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 3.71 seconds to spawn the instance on the hypervisor.
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.103 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:06:52
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'backups']
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.180 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.181 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.192 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.193 253665 INFO nova.compute.claims [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.551 253665 INFO nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 5.30 seconds to build instance.
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.598 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:52 compute-0 nova_compute[253661]: 2025-11-22 09:06:52.674 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:06:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 185 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1016 KiB/s rd, 4.7 MiB/s wr, 145 op/s
Nov 22 09:06:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:06:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364691987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.161 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.168 253665 DEBUG nova.compute.provider_tree [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.182 253665 DEBUG nova.scheduler.client.report [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:06:53 compute-0 ceph-mon[75021]: pgmap v1201: 305 pgs: 305 active+clean; 185 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1016 KiB/s rd, 4.7 MiB/s wr, 145 op/s
Nov 22 09:06:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3364691987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.450 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.451 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.540 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.541 253665 DEBUG nova.network.neutron [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.561 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.638 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:06:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.836 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.839 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.839 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating image(s)
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.869 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.900 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.931 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.938 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.966 253665 DEBUG nova.network.neutron [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:06:53 compute-0 nova_compute[253661]: 2025-11-22 09:06:53.967 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.019 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.021 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.022 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.022 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.051 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.057 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:06:54 compute-0 nova_compute[253661]: 2025-11-22 09:06:54.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:06:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:06:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 236 op/s
Nov 22 09:06:55 compute-0 ceph-mon[75021]: pgmap v1202: 305 pgs: 305 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 236 op/s
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.338 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.426 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] resizing rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.548 253665 DEBUG nova.objects.instance [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'migration_context' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.672 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.672 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Ensure instance console log exists: /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.674 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.680 253665 WARNING nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.686 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.686 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.690 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.696 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.979 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.982 253665 INFO nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Terminating instance
Nov 22 09:06:55 compute-0 nova_compute[253661]: 2025-11-22 09:06:55.983 253665 DEBUG nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:06:56 compute-0 kernel: tapf5fa33e1-ab (unregistering): left promiscuous mode
Nov 22 09:06:56 compute-0 NetworkManager[48920]: <info>  [1763802416.0970] device (tapf5fa33e1-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:06:56 compute-0 ovn_controller[152872]: 2025-11-22T09:06:56Z|00032|binding|INFO|Releasing lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 from this chassis (sb_readonly=0)
Nov 22 09:06:56 compute-0 ovn_controller[152872]: 2025-11-22T09:06:56Z|00033|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 down in Southbound
Nov 22 09:06:56 compute-0 ovn_controller[152872]: 2025-11-22T09:06:56Z|00034|binding|INFO|Removing iface tapf5fa33e1-ab ovn-installed in OVS
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569626535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 22 09:06:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 15.603s CPU time.
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.154 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b6:44 10.100.0.6'], port_security=['fa:16:3e:78:b6:44 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e90ab44-2028-4ef8-ab7a-3c603be3e750', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.158 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 in datapath 27705719-461d-420b-a9b8-656219b295b7 unbound from our chassis
Nov 22 09:06:56 compute-0 systemd-machined[215941]: Machine qemu-4-instance-00000004 terminated.
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.160 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27705719-461d-420b-a9b8-656219b295b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.161 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd07be68-8225-417e-bc39-685c3534316a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.162 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace which is not needed anymore
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.175 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.200 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.211 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.248 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance destroyed successfully.
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.251 253665 DEBUG nova.objects.instance [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'resources' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.268 253665 DEBUG nova.virt.libvirt.vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:06:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:06:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.268 253665 DEBUG nova.network.os_vif_util [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.270 253665 DEBUG nova.network.os_vif_util [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.270 253665 DEBUG os_vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.273 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5fa33e1-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1569626535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.280 253665 INFO os_vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab')
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : haproxy version is 2.8.14-c23fe91
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : path to executable is /usr/sbin/haproxy
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : Exiting Master process...
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : Exiting Master process...
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [ALERT]    (271604) : Current worker (271609) exited with code 143 (Terminated)
Nov 22 09:06:56 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : All workers exited. Exiting... (0)
Nov 22 09:06:56 compute-0 systemd[1]: libpod-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope: Deactivated successfully.
Nov 22 09:06:56 compute-0 podman[273168]: 2025-11-22 09:06:56.366259436 +0000 UTC m=+0.081443571 container died ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 09:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51-userdata-shm.mount: Deactivated successfully.
Nov 22 09:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-676a94e4d511baf87b86317c8fc75c84c67f7a8300dfb76d8ed2f43fa82c4669-merged.mount: Deactivated successfully.
Nov 22 09:06:56 compute-0 podman[273168]: 2025-11-22 09:06:56.426124098 +0000 UTC m=+0.141308233 container cleanup ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:06:56 compute-0 systemd[1]: libpod-conmon-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope: Deactivated successfully.
Nov 22 09:06:56 compute-0 podman[273229]: 2025-11-22 09:06:56.519152856 +0000 UTC m=+0.066893990 container remove ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4438faf-9210-4b67-a731-d7d57017ca38]: (4, ('Sat Nov 22 09:06:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51)\nffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51\nSat Nov 22 09:06:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51)\nffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.530 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ef2b18-5f76-43f8-9c1c-6095c6120531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.531 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 kernel: tap27705719-40: left promiscuous mode
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a54b545-fb90-4718-9ae0-b4b686c8b126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99228d90-f24d-4c3c-a3b1-ad707148e2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23600b1f-ce51-4182-a066-5ea7c0821d10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.588 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8dc3981-9b4d-47b8-8d82-affe81206416]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524985, 'reachable_time': 25895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273242, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d27705719\x2d461d\x2d420b\x2da9b8\x2d656219b295b7.mount: Deactivated successfully.
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.604 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27705719-461d-420b-a9b8-656219b295b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:06:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.605 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f160e390-22c3-4fca-9bca-41bc47f44a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.618 253665 DEBUG nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.660 253665 INFO nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] instance snapshotting
Nov 22 09:06:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:06:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500563076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.689 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.693 253665 DEBUG nova.objects.instance [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.705 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <uuid>1bb24315-1978-4dbf-a16d-5e7b84a25d17</uuid>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <name>instance-00000007</name>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1458062651</nova:name>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:06:55</nova:creationTime>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:user uuid="a46c9aa2bf204aac90754c5cde832c1d">tempest-LiveMigrationNegativeTest-1531468932-project-member</nova:user>
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <nova:project uuid="2aac0910356c4371ad12a604c19aed9b">tempest-LiveMigrationNegativeTest-1531468932</nova:project>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="serial">1bb24315-1978-4dbf-a16d-5e7b84a25d17</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="uuid">1bb24315-1978-4dbf-a16d-5e7b84a25d17</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk">
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config">
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:06:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/console.log" append="off"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:06:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:06:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:06:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:06:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:06:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.772 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.772 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.781 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Using config drive
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.811 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.884 253665 INFO nova.virt.libvirt.driver [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deleting instance files /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750_del
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.885 253665 INFO nova.virt.libvirt.driver [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deletion of /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750_del complete
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.891 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.891 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:06:56 compute-0 nova_compute[253661]: 2025-11-22 09:06:56.918 253665 INFO nova.virt.libvirt.driver [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Beginning live snapshot process
Nov 22 09:06:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.5 MiB/s wr, 280 op/s
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.050 253665 INFO nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 1.07 seconds to destroy the instance on the hypervisor.
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.051 253665 DEBUG oslo.service.loopingcall [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.052 253665 DEBUG nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.053 253665 DEBUG nova.network.neutron [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:06:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3500563076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:06:57 compute-0 ceph-mon[75021]: pgmap v1203: 305 pgs: 305 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.5 MiB/s wr, 280 op/s
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.535 253665 DEBUG nova.virt.libvirt.imagebackend [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.766 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating config drive at /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.771 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaiddtbyi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.795 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(30da9d8325b74797a836d1553f8e1f8b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.900 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaiddtbyi" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.930 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:06:57 compute-0 nova_compute[253661]: 2025-11-22 09:06:57.934 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 22 09:06:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 22 09:06:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 22 09:06:58 compute-0 nova_compute[253661]: 2025-11-22 09:06:58.767 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:58 compute-0 nova_compute[253661]: 2025-11-22 09:06:58.769 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deleting local config drive /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config because it was imported into RBD.
Nov 22 09:06:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:06:58 compute-0 nova_compute[253661]: 2025-11-22 09:06:58.830 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk@30da9d8325b74797a836d1553f8e1f8b to images/bf45eac9-b4aa-414f-a359-f8a2fe80c5d3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:06:58 compute-0 systemd-machined[215941]: New machine qemu-7-instance-00000007.
Nov 22 09:06:58 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 22 09:06:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 289 op/s
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.117 253665 DEBUG nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.119 253665 DEBUG nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.119 253665 WARNING nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received unexpected event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with vm_state active and task_state deleting.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.256 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/bf45eac9-b4aa-414f-a359-f8a2fe80c5d3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.425 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802419.4248726, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Resumed (Lifecycle Event)
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.428 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.429 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.433 253665 INFO nova.virt.libvirt.driver [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance spawned successfully.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.433 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.452 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.460 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.466 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.466 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.467 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.467 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.468 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.468 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802419.4278653, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Started (Lifecycle Event)
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.500 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.539 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.574 253665 INFO nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 5.74 seconds to spawn the instance on the hypervisor.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.574 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.677 253665 INFO nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 7.52 seconds to build instance.
Nov 22 09:06:59 compute-0 ceph-mon[75021]: osdmap e161: 3 total, 3 up, 3 in
Nov 22 09:06:59 compute-0 ceph-mon[75021]: pgmap v1205: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 289 op/s
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.705 253665 DEBUG nova.network.neutron [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.727 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.736 253665 INFO nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 2.68 seconds to deallocate network for instance.
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.803 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.804 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:59 compute-0 nova_compute[253661]: 2025-11-22 09:06:59.926 253665 DEBUG oslo_concurrency.processutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.095 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(30da9d8325b74797a836d1553f8e1f8b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:07:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582412337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.421 253665 DEBUG oslo_concurrency.processutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.426 253665 DEBUG nova.compute.provider_tree [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.452 253665 DEBUG nova.scheduler.client.report [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.477 253665 DEBUG nova.compute.manager [req-7f04b504-ef2f-4b71-bbd6-39fd97bb261d req-f3e09925-7877-4a04-8af1-1ddd89df998e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-deleted-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.487 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.519 253665 INFO nova.scheduler.client.report [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Deleted allocations for instance 4e90ab44-2028-4ef8-ab7a-3c603be3e750
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.612 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.769 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.770 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3582412337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.835 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.919 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.920 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.925 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:07:00 compute-0 nova_compute[253661]: 2025-11-22 09:07:00.925 253665 INFO nova.compute.claims [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:07:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 22 09:07:01 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 22 09:07:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 252 op/s
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.106 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.604 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(bf45eac9-b4aa-414f-a359-f8a2fe80c5d3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:07:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74389598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.688 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.695 253665 DEBUG nova.compute.provider_tree [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.709 253665 DEBUG nova.scheduler.client.report [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.746 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.747 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.822 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.822 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:07:01 compute-0 nova_compute[253661]: 2025-11-22 09:07:01.852 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:07:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.005 253665 DEBUG nova.policy [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fabb775e44cc437680ea15de97d50877', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:07:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 22 09:07:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 22 09:07:02 compute-0 ceph-mon[75021]: osdmap e162: 3 total, 3 up, 3 in
Nov 22 09:07:02 compute-0 ceph-mon[75021]: pgmap v1207: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 252 op/s
Nov 22 09:07:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/74389598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.209 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.299 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.301 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.302 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating image(s)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010428240175490004 of space, bias 1.0, pg target 0.3128472052647001 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007990172437650583 of space, bias 1.0, pg target 0.2397051731295175 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:07:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.537 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.567 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.602 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.607 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.713 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.714 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.715 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.715 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.745 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.751 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61ff3d94-226c-4991-af23-6da29d64dca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.874 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.875 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.897 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.932 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Successfully created port: efd83824-eafa-462c-abe4-952ef6631c2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.988 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.988 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.995 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:07:02 compute-0 nova_compute[253661]: 2025-11-22 09:07:02.996 253665 INFO nova.compute.claims [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:07:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 200 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 205 op/s
Nov 22 09:07:03 compute-0 ceph-mon[75021]: osdmap e163: 3 total, 3 up, 3 in
Nov 22 09:07:03 compute-0 ceph-mon[75021]: pgmap v1209: 305 pgs: 305 active+clean; 200 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 205 op/s
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.227 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.499 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61ff3d94-226c-4991-af23-6da29d64dca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.600 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] resizing rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.748 253665 DEBUG nova.objects.instance [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'migration_context' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817087408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.810 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.856 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.862 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.864 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.865 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.892 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.899 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.900 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.937 253665 DEBUG nova.compute.provider_tree [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.953 253665 DEBUG nova.scheduler.client.report [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.984 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.985 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.988 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:03 compute-0 nova_compute[253661]: 2025-11-22 09:07:03.989 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.016 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.022 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.066 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.067 253665 DEBUG nova.network.neutron [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.121 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.190 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Successfully updated port: efd83824-eafa-462c-abe4-952ef6631c2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.245 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.292 253665 DEBUG nova.compute.manager [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.292 253665 DEBUG nova.compute.manager [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing instance network info cache due to event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.306 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.457 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.459 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.459 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating image(s)
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.488 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.519 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2817087408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.548 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.556 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.586 253665 DEBUG nova.network.neutron [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.586 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.587 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.675 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.676 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.677 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.677 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.710 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:04 compute-0 nova_compute[253661]: 2025-11-22 09:07:04.715 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a5fd70ef-449c-45f6-a479-42c1293bcc35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 247 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.6 MiB/s wr, 292 op/s
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.110 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.126 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.127 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.127 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.278 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.338 253665 INFO nova.virt.libvirt.driver [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Snapshot image upload complete
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.339 253665 INFO nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 8.68 seconds to snapshot the instance on the hypervisor.
Nov 22 09:07:05 compute-0 ceph-mon[75021]: pgmap v1210: 305 pgs: 305 active+clean; 247 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.6 MiB/s wr, 292 op/s
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.917 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:05 compute-0 nova_compute[253661]: 2025-11-22 09:07:05.996 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a5fd70ef-449c-45f6-a479-42c1293bcc35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.174 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] resizing rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.230 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.231 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Ensure instance console log exists: /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.231 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.347 253665 DEBUG nova.objects.instance [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'migration_context' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.360 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.361 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Ensure instance console log exists: /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.364 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.381 253665 WARNING nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.389 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.390 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.395 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.396 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.396 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.397 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.397 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.400 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.404 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253100425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.899 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2253100425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.934 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:06 compute-0 nova_compute[253661]: 2025-11-22 09:07:06.939 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 292 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.9 MiB/s wr, 344 op/s
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.353 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829324916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.404 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.406 253665 DEBUG nova.objects.instance [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'pci_devices' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.417 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <uuid>a5fd70ef-449c-45f6-a479-42c1293bcc35</uuid>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <name>instance-00000009</name>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:name>tempest-LiveMigrationNegativeTest-server-503146601</nova:name>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:07:06</nova:creationTime>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:user uuid="a46c9aa2bf204aac90754c5cde832c1d">tempest-LiveMigrationNegativeTest-1531468932-project-member</nova:user>
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <nova:project uuid="2aac0910356c4371ad12a604c19aed9b">tempest-LiveMigrationNegativeTest-1531468932</nova:project>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <system>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="serial">a5fd70ef-449c-45f6-a479-42c1293bcc35</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="uuid">a5fd70ef-449c-45f6-a479-42c1293bcc35</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </system>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <os>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </os>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <features>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </features>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a5fd70ef-449c-45f6-a479-42c1293bcc35_disk">
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config">
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/console.log" append="off"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <video>
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </video>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:07:07 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:07:07 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:07:07 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:07:07 compute-0 nova_compute[253661]: </domain>
Nov 22 09:07:07 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.449 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.449 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance network_info: |[{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.452 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start _get_guest_xml network_info=[{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.456 253665 WARNING nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.460 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.460 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.463 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.463 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.464 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:06:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1293511702',id=18,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-2076067845',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.467 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.469 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.572 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.573 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.573 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Using config drive
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.600 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.703 253665 DEBUG nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.765 253665 INFO nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] instance snapshotting
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.769 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating config drive at /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.774 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvr4cvojo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.901 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvr4cvojo" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141056223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:07 compute-0 ceph-mon[75021]: pgmap v1211: 305 pgs: 305 active+clean; 292 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.9 MiB/s wr, 344 op/s
Nov 22 09:07:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1829324916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3141056223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.952 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.958 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.991 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:07 compute-0 nova_compute[253661]: 2025-11-22 09:07:07.992 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.027 253665 INFO nova.virt.libvirt.driver [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Beginning live snapshot process
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.298 253665 DEBUG nova.virt.libvirt.imagebackend [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:07:08 compute-0 podman[274231]: 2025-11-22 09:07:08.384704869 +0000 UTC m=+0.067709432 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.394 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.394 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deleting local config drive /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config because it was imported into RBD.
Nov 22 09:07:08 compute-0 podman[274232]: 2025-11-22 09:07:08.422162541 +0000 UTC m=+0.105681966 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 09:07:08 compute-0 systemd-machined[215941]: New machine qemu-8-instance-00000009.
Nov 22 09:07:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002468931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:08 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000009.
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.484 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.512 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.523 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.608 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(64bc3502823e4771b602402c20132257) on rbd image(bfc23def-6d15-4b5e-959e-3165bc676f9c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:07:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 22 09:07:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 22 09:07:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 22 09:07:08 compute-0 nova_compute[253661]: 2025-11-22 09:07:08.866 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk@64bc3502823e4771b602402c20132257 to images/0f246b20-add7-47b2-8f11-a8b8543e9488 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:07:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1002468931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:08 compute-0 ceph-mon[75021]: osdmap e164: 3 total, 3 up, 3 in
Nov 22 09:07:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183382633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.004 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/0f246b20-add7-47b2-8f11-a8b8543e9488 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:07:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 14 MiB/s wr, 483 op/s
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.054 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.056 253665 DEBUG nova.virt.libvirt.vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:07:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.057 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.058 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.060 253665 DEBUG nova.objects.instance [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.078 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <uuid>61ff3d94-226c-4991-af23-6da29d64dca1</uuid>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <name>instance-00000008</name>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1877351007</nova:name>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:07:07</nova:creationTime>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-2076067845">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:user uuid="fabb775e44cc437680ea15de97d50877">tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member</nova:user>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:project uuid="4e0153a0f27f4c68ad2f7910dc78a992">tempest-ServersWithSpecificFlavorTestJSON-1107415015</nova:project>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <nova:port uuid="efd83824-eafa-462c-abe4-952ef6631c2b">
Nov 22 09:07:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="serial">61ff3d94-226c-4991-af23-6da29d64dca1</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="uuid">61ff3d94-226c-4991-af23-6da29d64dca1</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <target dev="vdb" bus="virtio"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk.config">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:2c:eb:49"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <target dev="tapefd83824-ea"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/console.log" append="off"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:07:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:07:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:07:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:07:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:07:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Preparing to wait for external event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.083 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.083 253665 DEBUG nova.virt.libvirt.vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:07:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.084 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.084 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.085 253665 DEBUG os_vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefd83824-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefd83824-ea, col_values=(('external_ids', {'iface-id': 'efd83824-eafa-462c-abe4-952ef6631c2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:eb:49', 'vm-uuid': '61ff3d94-226c-4991-af23-6da29d64dca1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:07:09 compute-0 NetworkManager[48920]: <info>  [1763802429.0988] manager: (tapefd83824-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.109 253665 INFO os_vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea')
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.275 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.275 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No VIF found with MAC fa:16:3e:2c:eb:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Using config drive
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.302 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802429.615135, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Resumed (Lifecycle Event)
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.618 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(64bc3502823e4771b602402c20132257) on rbd image(bfc23def-6d15-4b5e-959e-3165bc676f9c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.620 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.621 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.627 253665 INFO nova.virt.libvirt.driver [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance spawned successfully.
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.628 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.631 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.634 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.652 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.654 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.654 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.657 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.658 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802429.6207056, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.658 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Started (Lifecycle Event)
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.683 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.782 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating config drive at /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.797 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk_m_39s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.887 253665 INFO nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 5.43 seconds to spawn the instance on the hypervisor.
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.888 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.967 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk_m_39s" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 22 09:07:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1183382633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:09 compute-0 ceph-mon[75021]: pgmap v1213: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 14 MiB/s wr, 483 op/s
Nov 22 09:07:09 compute-0 nova_compute[253661]: 2025-11-22 09:07:09.995 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 22 09:07:10 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.017 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.065 253665 INFO nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 7.11 seconds to build instance.
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.077 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(0f246b20-add7-47b2-8f11-a8b8543e9488) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.213 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.214 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deleting local config drive /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config because it was imported into RBD.
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.218 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:10 compute-0 kernel: tapefd83824-ea: entered promiscuous mode
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.2912] manager: (tapefd83824-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 22 09:07:10 compute-0 systemd-udevd[274455]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:07:10 compute-0 ovn_controller[152872]: 2025-11-22T09:07:10Z|00035|binding|INFO|Claiming lport efd83824-eafa-462c-abe4-952ef6631c2b for this chassis.
Nov 22 09:07:10 compute-0 ovn_controller[152872]: 2025-11-22T09:07:10Z|00036|binding|INFO|efd83824-eafa-462c-abe4-952ef6631c2b: Claiming fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.3083] device (tapefd83824-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.3091] device (tapefd83824-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:07:10 compute-0 ovn_controller[152872]: 2025-11-22T09:07:10Z|00037|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b ovn-installed in OVS
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 systemd-machined[215941]: New machine qemu-9-instance-00000008.
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:eb:49 10.100.0.13'], port_security=['fa:16:3e:2c:eb:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61ff3d94-226c-4991-af23-6da29d64dca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efd83824-eafa-462c-abe4-952ef6631c2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.334 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efd83824-eafa-462c-abe4-952ef6631c2b in datapath 27705719-461d-420b-a9b8-656219b295b7 bound to our chassis
Nov 22 09:07:10 compute-0 ovn_controller[152872]: 2025-11-22T09:07:10Z|00038|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b up in Southbound
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:07:10 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.354 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a51ed7-6571-45cf-aff2-87436ad23313]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.355 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27705719-41 in ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.357 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27705719-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.357 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b05b3fad-a856-4cd4-b8d0-f2f18126b48d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.358 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4b306d-7b4f-44c3-abca-694cc7e0d937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.370 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1741b749-137a-4eee-b66e-a66f36513db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.390 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58f3f2f3-5f33-4deb-9caa-af0324a7bd14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18b7c70f-522c-4518-9dff-3858dead519d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8631fb2c-69a6-4a06-ad13-574a549ada9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.4386] manager: (tap27705719-40): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 22 09:07:10 compute-0 systemd-udevd[274545]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.481 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c13982-6f98-414d-b3ea-0f5cbbdc651f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.485 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63e88540-1af5-4e4e-af8a-2967821c48e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.5070] device (tap27705719-40): carrier: link connected
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.515 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[98fa4c9e-1055-4e96-abeb-f969def9b852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.540 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9790ea65-96b0-4a48-9853-448b5fb16725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528124, 'reachable_time': 41640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274579, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1b4dcd-5323-4a51-9e29-fe0e20d64242]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:b616'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528124, 'tstamp': 528124}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274580, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.586 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e095121f-7c87-44ba-8540-7d6df9430e29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528124, 'reachable_time': 41640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274581, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.626 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc98360-bddb-4dcf-b361-fbf586ff536f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.716 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67bd6ce0-82e4-4295-818f-fa66ac9ca684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.718 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.718 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.719 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27705719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 kernel: tap27705719-40: entered promiscuous mode
Nov 22 09:07:10 compute-0 NetworkManager[48920]: <info>  [1763802430.7216] manager: (tap27705719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.723 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27705719-40, col_values=(('external_ids', {'iface-id': '66390fc9-eaea-4181-96b2-4d926c45e6e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:10 compute-0 ovn_controller[152872]: 2025-11-22T09:07:10Z|00039|binding|INFO|Releasing lport 66390fc9-eaea-4181-96b2-4d926c45e6e5 from this chassis (sb_readonly=0)
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.727 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2af0f5-f2c4-477f-a365-718c1a7ff5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.728 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 27705719-461d-420b-a9b8-656219b295b7
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:07:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.729 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'env', 'PROCESS_TAG=haproxy-27705719-461d-420b-a9b8-656219b295b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27705719-461d-420b-a9b8-656219b295b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:07:10 compute-0 nova_compute[253661]: 2025-11-22 09:07:10.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 22 09:07:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 22 09:07:11 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 22 09:07:11 compute-0 ceph-mon[75021]: osdmap e165: 3 total, 3 up, 3 in
Nov 22 09:07:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 13 MiB/s wr, 330 op/s
Nov 22 09:07:11 compute-0 podman[274614]: 2025-11-22 09:07:11.128661308 +0000 UTC m=+0.030837893 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.239 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802416.2277353, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.241 253665 INFO nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Stopped (Lifecycle Event)
Nov 22 09:07:11 compute-0 podman[274614]: 2025-11-22 09:07:11.25920837 +0000 UTC m=+0.161384935 container create d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.279 253665 DEBUG nova.compute.manager [None req-98dcd478-c854-4438-9a03-469ab1572a91 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:11 compute-0 systemd[1]: Started libpod-conmon-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope.
Nov 22 09:07:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c02df05785fa343c02b8c341ac64babc2cf40001ab923ae8e7d154e7f0394/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:11 compute-0 podman[274614]: 2025-11-22 09:07:11.351012539 +0000 UTC m=+0.253189124 container init d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:07:11 compute-0 podman[274614]: 2025-11-22 09:07:11.35892091 +0000 UTC m=+0.261097475 container start d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:07:11 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : New worker (274695) forked
Nov 22 09:07:11 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : Loading success.
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.419 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802431.419153, 61ff3d94-226c-4991-af23-6da29d64dca1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.420 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Started (Lifecycle Event)
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.454 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.458 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802431.419286, 61ff3d94-226c-4991-af23-6da29d64dca1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.459 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Paused (Lifecycle Event)
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.493 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:11 compute-0 nova_compute[253661]: 2025-11-22 09:07:11.520 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:12 compute-0 ceph-mon[75021]: osdmap e166: 3 total, 3 up, 3 in
Nov 22 09:07:12 compute-0 ceph-mon[75021]: pgmap v1216: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 13 MiB/s wr, 330 op/s
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.240 253665 DEBUG nova.compute.manager [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG nova.compute.manager [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Processing event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.243 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.246 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802432.246462, 61ff3d94-226c-4991-af23-6da29d64dca1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.246 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Resumed (Lifecycle Event)
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.253 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.258 253665 INFO nova.virt.libvirt.driver [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance spawned successfully.
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.258 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.266 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.270 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.279 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.281 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.281 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:07:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:07:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:07:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.305 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:12 compute-0 podman[274705]: 2025-11-22 09:07:12.408031202 +0000 UTC m=+0.094260169 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.566 253665 INFO nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 10.27 seconds to spawn the instance on the hypervisor.
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.566 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.751 253665 INFO nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 11.85 seconds to build instance.
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.754 253665 DEBUG nova.objects.instance [None req-cdcb59d8-a31e-478a-997b-c5d2bf9b40e9 7b6de63d2b014f7a8186c624d9ecbc85 13dcea55e23544739e1e310fe8503083 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.791 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802432.7910707, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.792 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Paused (Lifecycle Event)
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.812 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.823 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.825 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:12 compute-0 nova_compute[253661]: 2025-11-22 09:07:12.841 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:07:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 404 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 11 MiB/s wr, 400 op/s
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:07:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.309 253665 INFO nova.virt.libvirt.driver [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Snapshot image upload complete
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.310 253665 INFO nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 5.54 seconds to snapshot the instance on the hypervisor.
Nov 22 09:07:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 22 09:07:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Consumed 4.083s CPU time.
Nov 22 09:07:13 compute-0 systemd-machined[215941]: Machine qemu-8-instance-00000009 terminated.
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.472 253665 DEBUG nova.compute.manager [None req-cdcb59d8-a31e-478a-997b-c5d2bf9b40e9 7b6de63d2b014f7a8186c624d9ecbc85 13dcea55e23544739e1e310fe8503083 - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654682786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.746 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.844 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.845 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.845 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.850 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.851 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.856 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.856 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.862 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.863 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:13 compute-0 nova_compute[253661]: 2025-11-22 09:07:13.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.081 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.084 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=59.83506774902344GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.085 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.085 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance bfc23def-6d15-4b5e-959e-3165bc676f9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1bb24315-1978-4dbf-a16d-5e7b84a25d17 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 61ff3d94-226c-4991-af23-6da29d64dca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a5fd70ef-449c-45f6-a479-42c1293bcc35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.356 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.425 253665 DEBUG nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.427 253665 WARNING nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received unexpected event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with vm_state active and task_state None.
Nov 22 09:07:14 compute-0 ceph-mon[75021]: pgmap v1217: 305 pgs: 305 active+clean; 404 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 11 MiB/s wr, 400 op/s
Nov 22 09:07:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1654682786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300687399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.893 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.901 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:14 compute-0 nova_compute[253661]: 2025-11-22 09:07:14.916 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 487 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 449 op/s
Nov 22 09:07:15 compute-0 nova_compute[253661]: 2025-11-22 09:07:15.107 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:07:15 compute-0 nova_compute[253661]: 2025-11-22 09:07:15.108 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/300687399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:15 compute-0 ceph-mon[75021]: pgmap v1218: 305 pgs: 305 active+clean; 487 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 449 op/s
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.100 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.102 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:07:16 compute-0 nova_compute[253661]: 2025-11-22 09:07:16.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 493 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 9.0 MiB/s wr, 401 op/s
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.514 253665 DEBUG nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:17 compute-0 ceph-mon[75021]: pgmap v1219: 305 pgs: 305 active+clean; 493 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 9.0 MiB/s wr, 401 op/s
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.555 253665 INFO nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] instance snapshotting
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.953 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.954 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.954 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:07:17 compute-0 nova_compute[253661]: 2025-11-22 09:07:17.955 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.127 253665 INFO nova.virt.libvirt.driver [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Beginning live snapshot process
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.437 253665 DEBUG nova.virt.libvirt.imagebackend [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.650 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(3324ea0f8d5e4f2091bcd99d0b296d9b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.792 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.935 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.936 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.936 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.937 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.937 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.939 253665 INFO nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Terminating instance
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquired lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:18 compute-0 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 22 09:07:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 8.1 MiB/s wr, 420 op/s
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.803 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.915 253665 DEBUG nova.compute.manager [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.915 253665 DEBUG nova.compute.manager [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing instance network info cache due to event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.974 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.989 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:07:19 compute-0 nova_compute[253661]: 2025-11-22 09:07:19.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 22 09:07:20 compute-0 ceph-mon[75021]: pgmap v1220: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 8.1 MiB/s wr, 420 op/s
Nov 22 09:07:20 compute-0 ceph-mon[75021]: osdmap e167: 3 total, 3 up, 3 in
Nov 22 09:07:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 22 09:07:20 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 22 09:07:20 compute-0 nova_compute[253661]: 2025-11-22 09:07:20.793 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:20 compute-0 nova_compute[253661]: 2025-11-22 09:07:20.811 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Releasing lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:20 compute-0 nova_compute[253661]: 2025-11-22 09:07:20.812 253665 DEBUG nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:20 compute-0 nova_compute[253661]: 2025-11-22 09:07:20.820 253665 INFO nova.virt.libvirt.driver [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance destroyed successfully.
Nov 22 09:07:20 compute-0 nova_compute[253661]: 2025-11-22 09:07:20.821 253665 DEBUG nova.objects.instance [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'resources' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.4 MiB/s wr, 312 op/s
Nov 22 09:07:21 compute-0 ceph-mon[75021]: osdmap e168: 3 total, 3 up, 3 in
Nov 22 09:07:21 compute-0 ceph-mon[75021]: pgmap v1223: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.4 MiB/s wr, 312 op/s
Nov 22 09:07:22 compute-0 nova_compute[253661]: 2025-11-22 09:07:22.368 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk@3324ea0f8d5e4f2091bcd99d0b296d9b to images/4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:22 compute-0 nova_compute[253661]: 2025-11-22 09:07:22.910 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updated VIF entry in instance network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:07:22 compute-0 nova_compute[253661]: 2025-11-22 09:07:22.912 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1000 KiB/s wr, 136 op/s
Nov 22 09:07:23 compute-0 nova_compute[253661]: 2025-11-22 09:07:23.159 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:23 compute-0 ceph-mon[75021]: pgmap v1224: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1000 KiB/s wr, 136 op/s
Nov 22 09:07:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:24 compute-0 nova_compute[253661]: 2025-11-22 09:07:24.037 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:07:24 compute-0 nova_compute[253661]: 2025-11-22 09:07:24.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:24 compute-0 nova_compute[253661]: 2025-11-22 09:07:24.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 117 KiB/s wr, 91 op/s
Nov 22 09:07:25 compute-0 ceph-mon[75021]: pgmap v1225: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 117 KiB/s wr, 91 op/s
Nov 22 09:07:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 22 KiB/s wr, 61 op/s
Nov 22 09:07:27 compute-0 ceph-mon[75021]: pgmap v1226: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 22 KiB/s wr, 61 op/s
Nov 22 09:07:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.951 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.402 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(3324ea0f8d5e4f2091bcd99d0b296d9b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.474 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802433.4730887, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.476 253665 INFO nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Stopped (Lifecycle Event)
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.498 253665 DEBUG nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.502 253665 DEBUG nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.524 253665 INFO nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 22 09:07:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 22 09:07:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 22 09:07:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:28.778 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:07:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:28.780 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.795 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:07:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.977 253665 INFO nova.virt.libvirt.driver [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deleting instance files /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35_del
Nov 22 09:07:28 compute-0 nova_compute[253661]: 2025-11-22 09:07:28.978 253665 INFO nova.virt.libvirt.driver [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deletion of /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35_del complete
Nov 22 09:07:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.3 MiB/s wr, 106 op/s
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.072 253665 INFO nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 8.26 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG oslo.service.loopingcall [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:29 compute-0 ovn_controller[152872]: 2025-11-22T09:07:29Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 09:07:29 compute-0 ovn_controller[152872]: 2025-11-22T09:07:29Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 22 09:07:29 compute-0 ceph-mon[75021]: osdmap e169: 3 total, 3 up, 3 in
Nov 22 09:07:29 compute-0 ceph-mon[75021]: pgmap v1228: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.3 MiB/s wr, 106 op/s
Nov 22 09:07:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 22 09:07:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.789 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.810 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.837 253665 INFO nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 0.76 seconds to deallocate network for instance.
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.895 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:29 compute-0 nova_compute[253661]: 2025-11-22 09:07:29.896 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:30 compute-0 nova_compute[253661]: 2025-11-22 09:07:30.278 253665 DEBUG oslo_concurrency.processutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230164615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:30 compute-0 ceph-mon[75021]: osdmap e170: 3 total, 3 up, 3 in
Nov 22 09:07:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3230164615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:30 compute-0 nova_compute[253661]: 2025-11-22 09:07:30.758 253665 DEBUG oslo_concurrency.processutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:30 compute-0 nova_compute[253661]: 2025-11-22 09:07:30.766 253665 DEBUG nova.compute.provider_tree [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:30 compute-0 nova_compute[253661]: 2025-11-22 09:07:30.787 253665 DEBUG nova.scheduler.client.report [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:30 compute-0 nova_compute[253661]: 2025-11-22 09:07:30.929 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.7 MiB/s wr, 103 op/s
Nov 22 09:07:31 compute-0 nova_compute[253661]: 2025-11-22 09:07:31.146 253665 INFO nova.scheduler.client.report [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Deleted allocations for instance a5fd70ef-449c-45f6-a479-42c1293bcc35
Nov 22 09:07:31 compute-0 nova_compute[253661]: 2025-11-22 09:07:31.288 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:31 compute-0 nova_compute[253661]: 2025-11-22 09:07:31.712 253665 INFO nova.virt.libvirt.driver [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Snapshot image upload complete
Nov 22 09:07:31 compute-0 nova_compute[253661]: 2025-11-22 09:07:31.713 253665 INFO nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 14.16 seconds to snapshot the instance on the hypervisor.
Nov 22 09:07:31 compute-0 ceph-mon[75021]: pgmap v1230: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.7 MiB/s wr, 103 op/s
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.294 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.294 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.296 253665 INFO nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Terminating instance
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquired lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:32 compute-0 nova_compute[253661]: 2025-11-22 09:07:32.794 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 554 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 164 op/s
Nov 22 09:07:33 compute-0 nova_compute[253661]: 2025-11-22 09:07:33.043 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:33 compute-0 nova_compute[253661]: 2025-11-22 09:07:33.060 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Releasing lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:33 compute-0 nova_compute[253661]: 2025-11-22 09:07:33.061 253665 DEBUG nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:33 compute-0 ceph-mon[75021]: pgmap v1231: 305 pgs: 305 active+clean; 554 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 164 op/s
Nov 22 09:07:33 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 22 09:07:33 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.182s CPU time.
Nov 22 09:07:33 compute-0 systemd-machined[215941]: Machine qemu-7-instance-00000007 terminated.
Nov 22 09:07:33 compute-0 nova_compute[253661]: 2025-11-22 09:07:33.687 253665 INFO nova.virt.libvirt.driver [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance destroyed successfully.
Nov 22 09:07:33 compute-0 nova_compute[253661]: 2025-11-22 09:07:33.688 253665 DEBUG nova.objects.instance [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'resources' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:34 compute-0 nova_compute[253661]: 2025-11-22 09:07:34.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:34 compute-0 nova_compute[253661]: 2025-11-22 09:07:34.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 565 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.0 MiB/s wr, 219 op/s
Nov 22 09:07:35 compute-0 ceph-mon[75021]: pgmap v1232: 305 pgs: 305 active+clean; 565 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.0 MiB/s wr, 219 op/s
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.722 253665 INFO nova.virt.libvirt.driver [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deleting instance files /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17_del
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.723 253665 INFO nova.virt.libvirt.driver [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deletion of /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17_del complete
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.954 253665 INFO nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 2.89 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.954 253665 DEBUG oslo.service.loopingcall [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.955 253665 DEBUG nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:35 compute-0 nova_compute[253661]: 2025-11-22 09:07:35.955 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.216 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.227 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.239 253665 INFO nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 0.28 seconds to deallocate network for instance.
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.553 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.554 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.648 253665 DEBUG oslo_concurrency.processutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.773 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.774 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.774 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.775 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.775 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.777 253665 INFO nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Terminating instance
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.779 253665 DEBUG nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:36 compute-0 kernel: tapefd83824-ea (unregistering): left promiscuous mode
Nov 22 09:07:36 compute-0 NetworkManager[48920]: <info>  [1763802456.8670] device (tapefd83824-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:07:36 compute-0 ovn_controller[152872]: 2025-11-22T09:07:36Z|00040|binding|INFO|Releasing lport efd83824-eafa-462c-abe4-952ef6631c2b from this chassis (sb_readonly=0)
Nov 22 09:07:36 compute-0 ovn_controller[152872]: 2025-11-22T09:07:36Z|00041|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b down in Southbound
Nov 22 09:07:36 compute-0 ovn_controller[152872]: 2025-11-22T09:07:36Z|00042|binding|INFO|Removing iface tapefd83824-ea ovn-installed in OVS
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:36 compute-0 nova_compute[253661]: 2025-11-22 09:07:36.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:36 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 22 09:07:36 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 14.790s CPU time.
Nov 22 09:07:36 compute-0 systemd-machined[215941]: Machine qemu-9-instance-00000008 terminated.
Nov 22 09:07:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.982 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:eb:49 10.100.0.13'], port_security=['fa:16:3e:2c:eb:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61ff3d94-226c-4991-af23-6da29d64dca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efd83824-eafa-462c-abe4-952ef6631c2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:07:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.983 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efd83824-eafa-462c-abe4-952ef6631c2b in datapath 27705719-461d-420b-a9b8-656219b295b7 unbound from our chassis
Nov 22 09:07:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.984 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27705719-461d-420b-a9b8-656219b295b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:07:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34961a26-bd2d-4bc2-8624-6b1bd7f34cbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.987 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace which is not needed anymore
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.009 253665 INFO nova.virt.libvirt.driver [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance destroyed successfully.
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.010 253665 DEBUG nova.objects.instance [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'resources' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.021 253665 DEBUG nova.virt.libvirt.vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:07:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:07:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.022 253665 DEBUG nova.network.os_vif_util [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.023 253665 DEBUG nova.network.os_vif_util [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.024 253665 DEBUG os_vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.026 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd83824-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:07:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 540 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.040 253665 INFO os_vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea')
Nov 22 09:07:37 compute-0 ceph-mon[75021]: pgmap v1233: 305 pgs: 305 active+clean; 540 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Nov 22 09:07:37 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : haproxy version is 2.8.14-c23fe91
Nov 22 09:07:37 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : path to executable is /usr/sbin/haproxy
Nov 22 09:07:37 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [WARNING]  (274692) : Exiting Master process...
Nov 22 09:07:37 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [ALERT]    (274692) : Current worker (274695) exited with code 143 (Terminated)
Nov 22 09:07:37 compute-0 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [WARNING]  (274692) : All workers exited. Exiting... (0)
Nov 22 09:07:37 compute-0 systemd[1]: libpod-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope: Deactivated successfully.
Nov 22 09:07:37 compute-0 podman[275060]: 2025-11-22 09:07:37.161039552 +0000 UTC m=+0.049223936 container died d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:07:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123497692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.196 253665 DEBUG oslo_concurrency.processutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd-userdata-shm.mount: Deactivated successfully.
Nov 22 09:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-239c02df05785fa343c02b8c341ac64babc2cf40001ab923ae8e7d154e7f0394-merged.mount: Deactivated successfully.
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.207 253665 DEBUG nova.compute.provider_tree [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:37 compute-0 podman[275060]: 2025-11-22 09:07:37.209065869 +0000 UTC m=+0.097250233 container cleanup d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:07:37 compute-0 systemd[1]: libpod-conmon-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope: Deactivated successfully.
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.221 253665 DEBUG nova.scheduler.client.report [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.274 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:37 compute-0 podman[275090]: 2025-11-22 09:07:37.283442059 +0000 UTC m=+0.045933187 container remove d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02429ea-0c0c-4ab5-b4e5-0c92cd561aff]: (4, ('Sat Nov 22 09:07:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd)\nd6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd\nSat Nov 22 09:07:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd)\nd6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10dada30-9e89-4a5f-8bfc-0cda5d17f74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.294 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:37 compute-0 kernel: tap27705719-40: left promiscuous mode
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168b8f46-e7bb-4499-87e6-bdd8e98d1b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.334 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7799777-dbca-40cb-b1d6-26debb4d340b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.335 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef099cc-474b-49a1-894e-c0bac4aad6d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.354 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c70168ab-5fb9-4df1-8adc-c8fb38d78988]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528116, 'reachable_time': 41543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275106, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.358 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27705719-461d-420b-a9b8-656219b295b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:07:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.359 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b46236-c9f7-43fc-b548-1822b8a140da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d27705719\x2d461d\x2d420b\x2da9b8\x2d656219b295b7.mount: Deactivated successfully.
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.359 253665 INFO nova.scheduler.client.report [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Deleted allocations for instance 1bb24315-1978-4dbf-a16d-5e7b84a25d17
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.448 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.668 253665 INFO nova.virt.libvirt.driver [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deleting instance files /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1_del
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.669 253665 INFO nova.virt.libvirt.driver [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deletion of /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1_del complete
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.729 253665 INFO nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG oslo.service.loopingcall [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:37 compute-0 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG nova.network.neutron [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1123497692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.326 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.326 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:07:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:07:38.781 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.803 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.804 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:38 compute-0 nova_compute[253661]: 2025-11-22 09:07:38.897 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:07:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 22 09:07:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 22 09:07:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 22 09:07:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 655 KiB/s rd, 2.0 MiB/s wr, 225 op/s
Nov 22 09:07:39 compute-0 podman[275107]: 2025-11-22 09:07:39.389249637 +0000 UTC m=+0.062428564 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:07:39 compute-0 podman[275108]: 2025-11-22 09:07:39.404222637 +0000 UTC m=+0.076238386 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:07:39 compute-0 nova_compute[253661]: 2025-11-22 09:07:39.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:39 compute-0 ceph-mon[75021]: osdmap e171: 3 total, 3 up, 3 in
Nov 22 09:07:39 compute-0 ceph-mon[75021]: pgmap v1235: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 655 KiB/s rd, 2.0 MiB/s wr, 225 op/s
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.047 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.048 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.056 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.057 253665 INFO nova.compute.claims [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.468 253665 DEBUG nova.network.neutron [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.512 253665 INFO nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 2.78 seconds to deallocate network for instance.
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.567 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.607 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.866 253665 DEBUG nova.compute.manager [req-7cef9d77-6144-4a08-9e0d-38bbdfc0532f req-d6f9457a-951a-425f-9426-401066d9e037 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-deleted-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.938 253665 DEBUG nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.938 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:07:40 compute-0 nova_compute[253661]: 2025-11-22 09:07:40.940 253665 WARNING nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received unexpected event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with vm_state deleted and task_state None.
Nov 22 09:07:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 608 KiB/s rd, 1.9 MiB/s wr, 209 op/s
Nov 22 09:07:41 compute-0 ceph-mon[75021]: pgmap v1236: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 608 KiB/s rd, 1.9 MiB/s wr, 209 op/s
Nov 22 09:07:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511008738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.141 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.148 253665 DEBUG nova.compute.provider_tree [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.162 253665 DEBUG nova.scheduler.client.report [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.183 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.184 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.186 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.226 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.242 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.266 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.276 253665 DEBUG oslo_concurrency.processutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.361 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.363 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.363 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating image(s)
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.383 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.411 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.436 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.442 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.517 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.518 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.519 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.519 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.539 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.542 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426411698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.723 253665 DEBUG oslo_concurrency.processutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.730 253665 DEBUG nova.compute.provider_tree [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.742 253665 DEBUG nova.scheduler.client.report [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.763 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.792 253665 INFO nova.scheduler.client.report [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Deleted allocations for instance 61ff3d94-226c-4991-af23-6da29d64dca1
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.861 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:41 compute-0 nova_compute[253661]: 2025-11-22 09:07:41.947 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.010 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] resizing rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 22 09:07:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1511008738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1426411698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 22 09:07:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.190 253665 DEBUG nova.objects.instance [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'migration_context' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.201 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.201 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Ensure instance console log exists: /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.202 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.202 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.203 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.205 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.210 253665 WARNING nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.215 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.216 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.219 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.220 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.220 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.221 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.221 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.224 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.227 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521780234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.709 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.736 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:42 compute-0 nova_compute[253661]: 2025-11-22 09:07:42.741 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 404 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 38 KiB/s wr, 109 op/s
Nov 22 09:07:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 22 09:07:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 22 09:07:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 22 09:07:43 compute-0 ceph-mon[75021]: osdmap e172: 3 total, 3 up, 3 in
Nov 22 09:07:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3521780234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:43 compute-0 ceph-mon[75021]: pgmap v1238: 305 pgs: 305 active+clean; 404 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 38 KiB/s wr, 109 op/s
Nov 22 09:07:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:07:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207707638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.197 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.199 253665 DEBUG nova.objects.instance [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'pci_devices' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.222 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <uuid>c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</uuid>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <name>instance-0000000a</name>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-832171232</nova:name>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:07:42</nova:creationTime>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:user uuid="9b2aa984fe7e4bbbab17fc76f5d39990">tempest-ServerDiagnosticsV248Test-357040963-project-member</nova:user>
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <nova:project uuid="c759d960bd994160acfb1cdfbe9858c8">tempest-ServerDiagnosticsV248Test-357040963</nova:project>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <system>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="serial">c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="uuid">c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </system>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <os>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </os>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <features>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </features>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk">
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config">
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:07:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/console.log" append="off"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <video>
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </video>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:07:43 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:07:43 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:07:43 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:07:43 compute-0 nova_compute[253661]: </domain>
Nov 22 09:07:43 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.281 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.281 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.282 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Using config drive
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.304 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:43 compute-0 podman[275423]: 2025-11-22 09:07:43.35108791 +0000 UTC m=+0.088310387 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.549 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating config drive at /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.555 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01d_r6_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.693 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01d_r6_4" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.723 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.727 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.882 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:43 compute-0 nova_compute[253661]: 2025-11-22 09:07:43.883 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deleting local config drive /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config because it was imported into RBD.
Nov 22 09:07:43 compute-0 systemd-machined[215941]: New machine qemu-10-instance-0000000a.
Nov 22 09:07:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:43 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 22 09:07:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 22 09:07:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 22 09:07:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 22 09:07:44 compute-0 ceph-mon[75021]: osdmap e173: 3 total, 3 up, 3 in
Nov 22 09:07:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3207707638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.504 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802464.5036063, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.504 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Resumed (Lifecycle Event)
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.507 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.507 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.512 253665 INFO nova.virt.libvirt.driver [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance spawned successfully.
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.512 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.535 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.540 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.540 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.541 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.541 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.542 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.542 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.569 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802464.5041304, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Started (Lifecycle Event)
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.585 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.614 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.625 253665 INFO nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 3.26 seconds to spawn the instance on the hypervisor.
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.625 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.679 253665 INFO nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 5.63 seconds to build instance.
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.698 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.837 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.837 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.839 253665 INFO nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Terminating instance
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.840 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.841 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquired lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:44 compute-0 nova_compute[253661]: 2025-11-22 09:07:44.841 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 371 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.0 MiB/s wr, 100 op/s
Nov 22 09:07:45 compute-0 ceph-mon[75021]: osdmap e174: 3 total, 3 up, 3 in
Nov 22 09:07:45 compute-0 ceph-mon[75021]: pgmap v1241: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 371 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.0 MiB/s wr, 100 op/s
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.247 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.532 253665 DEBUG nova.compute.manager [None req-cdb11c55-ca2b-4b2a-9db1-4e681f4820f6 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.534 253665 INFO nova.compute.manager [None req-cdb11c55-ca2b-4b2a-9db1-4e681f4820f6 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Retrieving diagnostics
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.633 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.649 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Releasing lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.650 253665 DEBUG nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:45 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 22 09:07:45 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 16.307s CPU time.
Nov 22 09:07:45 compute-0 systemd-machined[215941]: Machine qemu-6-instance-00000006 terminated.
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.867 253665 INFO nova.virt.libvirt.driver [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance destroyed successfully.
Nov 22 09:07:45 compute-0 nova_compute[253661]: 2025-11-22 09:07:45.868 253665 DEBUG nova.objects.instance [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'resources' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.333 253665 INFO nova.virt.libvirt.driver [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deleting instance files /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c_del
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.333 253665 INFO nova.virt.libvirt.driver [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deletion of /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c_del complete
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.407 253665 INFO nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.408 253665 DEBUG oslo.service.loopingcall [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.409 253665 DEBUG nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.409 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.808 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.827 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.847 253665 INFO nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 0.44 seconds to deallocate network for instance.
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.886 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.887 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:46 compute-0 nova_compute[253661]: 2025-11-22 09:07:46.986 253665 DEBUG oslo_concurrency.processutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 302 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 714 KiB/s rd, 3.6 MiB/s wr, 244 op/s
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.055 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:47 compute-0 ceph-mon[75021]: pgmap v1242: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 302 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 714 KiB/s rd, 3.6 MiB/s wr, 244 op/s
Nov 22 09:07:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266053923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.450 253665 DEBUG oslo_concurrency.processutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.457 253665 DEBUG nova.compute.provider_tree [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.474 253665 DEBUG nova.scheduler.client.report [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.493 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.533 253665 INFO nova.scheduler.client.report [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Deleted allocations for instance bfc23def-6d15-4b5e-959e-3165bc676f9c
Nov 22 09:07:47 compute-0 nova_compute[253661]: 2025-11-22 09:07:47.587 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/266053923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.203 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.204 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.205 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.205 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.206 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.210 253665 INFO nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Terminating instance
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.212 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.212 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquired lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.213 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.372 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:48 compute-0 sudo[275607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:48 compute-0 sudo[275607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:48 compute-0 sudo[275607]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:48 compute-0 sudo[275632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:07:48 compute-0 sudo[275632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:48 compute-0 sudo[275632]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:48 compute-0 sudo[275657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:48 compute-0 sudo[275657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:48 compute-0 sudo[275657]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.642 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.661 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Releasing lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.666 253665 DEBUG nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.685 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802453.6846967, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.686 253665 INFO nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Stopped (Lifecycle Event)
Nov 22 09:07:48 compute-0 sudo[275682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:07:48 compute-0 sudo[275682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.709 253665 DEBUG nova.compute.manager [None req-1cbc500b-06b7-4579-a59b-f03a3383d7f7 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:48 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 22 09:07:48 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.766s CPU time.
Nov 22 09:07:48 compute-0 systemd-machined[215941]: Machine qemu-5-instance-00000005 terminated.
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.896 253665 INFO nova.virt.libvirt.driver [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance destroyed successfully.
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.897 253665 DEBUG nova.objects.instance [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'resources' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:48 compute-0 nova_compute[253661]: 2025-11-22 09:07:48.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 22 09:07:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 22 09:07:48 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 22 09:07:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 391 op/s
Nov 22 09:07:49 compute-0 sudo[275682]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dd3da25f-5328-45d4-ab4a-51280ad8173c does not exist
Nov 22 09:07:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9b980c1e-f7bc-4260-b81a-c975727467d4 does not exist
Nov 22 09:07:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 452624e9-2f6c-458d-81d6-423d67f2b842 does not exist
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:07:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:07:49 compute-0 sudo[275762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:49 compute-0 sudo[275762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:49 compute-0 sudo[275762]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:49 compute-0 sudo[275787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:07:49 compute-0 sudo[275787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:49 compute-0 sudo[275787]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.442 253665 INFO nova.virt.libvirt.driver [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deleting instance files /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_del
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.444 253665 INFO nova.virt.libvirt.driver [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deletion of /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_del complete
Nov 22 09:07:49 compute-0 sudo[275812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:49 compute-0 sudo[275812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:49 compute-0 sudo[275812]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 INFO nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 DEBUG oslo.service.loopingcall [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 DEBUG nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.493 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:49 compute-0 sudo[275837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:07:49 compute-0 sudo[275837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.795 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.811 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.823 253665 INFO nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 0.33 seconds to deallocate network for instance.
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.872 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.873 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:49 compute-0 podman[275900]: 2025-11-22 09:07:49.917461727 +0000 UTC m=+0.048771505 container create 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:07:49 compute-0 nova_compute[253661]: 2025-11-22 09:07:49.926 253665 DEBUG oslo_concurrency.processutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:49 compute-0 systemd[1]: Started libpod-conmon-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope.
Nov 22 09:07:49 compute-0 ceph-mon[75021]: osdmap e175: 3 total, 3 up, 3 in
Nov 22 09:07:49 compute-0 ceph-mon[75021]: pgmap v1244: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 391 op/s
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:07:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:07:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:49 compute-0 podman[275900]: 2025-11-22 09:07:49.895734295 +0000 UTC m=+0.027044113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:50 compute-0 podman[275900]: 2025-11-22 09:07:50.004303898 +0000 UTC m=+0.135613716 container init 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:07:50 compute-0 podman[275900]: 2025-11-22 09:07:50.012669589 +0000 UTC m=+0.143979377 container start 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:07:50 compute-0 podman[275900]: 2025-11-22 09:07:50.016008579 +0000 UTC m=+0.147318367 container attach 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:07:50 compute-0 heuristic_cartwright[275917]: 167 167
Nov 22 09:07:50 compute-0 systemd[1]: libpod-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope: Deactivated successfully.
Nov 22 09:07:50 compute-0 conmon[275917]: conmon 3bc118eb8fefedc4d2e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope/container/memory.events
Nov 22 09:07:50 compute-0 podman[275922]: 2025-11-22 09:07:50.064841605 +0000 UTC m=+0.027608195 container died 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ae64f25c6ddcf4516db9e609730875bdffb956ec3f4aac8f30ac4cce5bd090-merged.mount: Deactivated successfully.
Nov 22 09:07:50 compute-0 podman[275922]: 2025-11-22 09:07:50.104727595 +0000 UTC m=+0.067494185 container remove 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:07:50 compute-0 systemd[1]: libpod-conmon-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope: Deactivated successfully.
Nov 22 09:07:50 compute-0 podman[275960]: 2025-11-22 09:07:50.302127997 +0000 UTC m=+0.050242421 container create 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:07:50 compute-0 systemd[1]: Started libpod-conmon-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope.
Nov 22 09:07:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647677578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:50 compute-0 podman[275960]: 2025-11-22 09:07:50.276415397 +0000 UTC m=+0.024529861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.378 253665 DEBUG oslo_concurrency.processutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.386 253665 DEBUG nova.compute.provider_tree [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.400 253665 DEBUG nova.scheduler.client.report [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:50 compute-0 podman[275960]: 2025-11-22 09:07:50.420507166 +0000 UTC m=+0.168621610 container init 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.428 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:50 compute-0 podman[275960]: 2025-11-22 09:07:50.428762355 +0000 UTC m=+0.176876769 container start 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:07:50 compute-0 podman[275960]: 2025-11-22 09:07:50.433275914 +0000 UTC m=+0.181390328 container attach 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.454 253665 INFO nova.scheduler.client.report [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Deleted allocations for instance 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76
Nov 22 09:07:50 compute-0 nova_compute[253661]: 2025-11-22 09:07:50.509 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2647677578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 297 op/s
Nov 22 09:07:51 compute-0 crazy_hodgkin[275977]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:07:51 compute-0 crazy_hodgkin[275977]: --> relative data size: 1.0
Nov 22 09:07:51 compute-0 crazy_hodgkin[275977]: --> All data devices are unavailable
Nov 22 09:07:51 compute-0 systemd[1]: libpod-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Deactivated successfully.
Nov 22 09:07:51 compute-0 podman[275960]: 2025-11-22 09:07:51.532446482 +0000 UTC m=+1.280560886 container died 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:07:51 compute-0 systemd[1]: libpod-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Consumed 1.031s CPU time.
Nov 22 09:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba-merged.mount: Deactivated successfully.
Nov 22 09:07:51 compute-0 podman[275960]: 2025-11-22 09:07:51.682689718 +0000 UTC m=+1.430804132 container remove 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:07:51 compute-0 systemd[1]: libpod-conmon-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Deactivated successfully.
Nov 22 09:07:51 compute-0 sudo[275837]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:51 compute-0 sudo[276020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:51 compute-0 sudo[276020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:51 compute-0 sudo[276020]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:51 compute-0 sudo[276045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:07:51 compute-0 sudo[276045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:51 compute-0 sudo[276045]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:51 compute-0 sudo[276070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:51 compute-0 sudo[276070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:51 compute-0 sudo[276070]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:51 compute-0 sudo[276095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:07:51 compute-0 sudo[276095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 22 09:07:51 compute-0 ceph-mon[75021]: pgmap v1245: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 297 op/s
Nov 22 09:07:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 22 09:07:52 compute-0 nova_compute[253661]: 2025-11-22 09:07:52.007 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802457.0063968, 61ff3d94-226c-4991-af23-6da29d64dca1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:07:52 compute-0 nova_compute[253661]: 2025-11-22 09:07:52.008 253665 INFO nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Stopped (Lifecycle Event)
Nov 22 09:07:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 22 09:07:52 compute-0 nova_compute[253661]: 2025-11-22 09:07:52.036 253665 DEBUG nova.compute.manager [None req-ba2901fb-e9ce-434c-a0ce-d9f52fe2a11c - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:52 compute-0 nova_compute[253661]: 2025-11-22 09:07:52.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:07:52
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.360105124 +0000 UTC m=+0.039591535 container create 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:07:52 compute-0 systemd[1]: Started libpod-conmon-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope.
Nov 22 09:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.340753898 +0000 UTC m=+0.020240329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.438039979 +0000 UTC m=+0.117526410 container init 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.446493173 +0000 UTC m=+0.125979574 container start 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.450152801 +0000 UTC m=+0.129639212 container attach 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:07:52 compute-0 infallible_goldwasser[276176]: 167 167
Nov 22 09:07:52 compute-0 systemd[1]: libpod-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope: Deactivated successfully.
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.453214755 +0000 UTC m=+0.132701166 container died 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96ea10e853778f795653a9d93897c21455ace8c4fde5fcddc10de2410fe1176-merged.mount: Deactivated successfully.
Nov 22 09:07:52 compute-0 podman[276160]: 2025-11-22 09:07:52.495495072 +0000 UTC m=+0.174981473 container remove 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:07:52 compute-0 systemd[1]: libpod-conmon-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope: Deactivated successfully.
Nov 22 09:07:52 compute-0 podman[276200]: 2025-11-22 09:07:52.662077262 +0000 UTC m=+0.047453503 container create 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:07:52 compute-0 systemd[1]: Started libpod-conmon-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope.
Nov 22 09:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:52 compute-0 podman[276200]: 2025-11-22 09:07:52.640465872 +0000 UTC m=+0.025842133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:52 compute-0 podman[276200]: 2025-11-22 09:07:52.792509622 +0000 UTC m=+0.177885883 container init 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:07:52 compute-0 podman[276200]: 2025-11-22 09:07:52.799866339 +0000 UTC m=+0.185242580 container start 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:07:52 compute-0 podman[276200]: 2025-11-22 09:07:52.830536188 +0000 UTC m=+0.215912519 container attach 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:07:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 22 09:07:53 compute-0 ceph-mon[75021]: osdmap e176: 3 total, 3 up, 3 in
Nov 22 09:07:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 22 09:07:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 22 09:07:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 144 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.8 KiB/s wr, 191 op/s
Nov 22 09:07:53 compute-0 romantic_borg[276215]: {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     "0": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "devices": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "/dev/loop3"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             ],
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_name": "ceph_lv0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_size": "21470642176",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "name": "ceph_lv0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "tags": {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_name": "ceph",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.crush_device_class": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.encrypted": "0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_id": "0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.vdo": "0"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             },
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "vg_name": "ceph_vg0"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         }
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     ],
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     "1": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "devices": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "/dev/loop4"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             ],
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_name": "ceph_lv1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_size": "21470642176",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "name": "ceph_lv1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "tags": {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_name": "ceph",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.crush_device_class": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.encrypted": "0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_id": "1",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.vdo": "0"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             },
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "vg_name": "ceph_vg1"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         }
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     ],
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     "2": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "devices": [
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "/dev/loop5"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             ],
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_name": "ceph_lv2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_size": "21470642176",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "name": "ceph_lv2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "tags": {
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.cluster_name": "ceph",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.crush_device_class": "",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.encrypted": "0",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osd_id": "2",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:                 "ceph.vdo": "0"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             },
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "type": "block",
Nov 22 09:07:53 compute-0 romantic_borg[276215]:             "vg_name": "ceph_vg2"
Nov 22 09:07:53 compute-0 romantic_borg[276215]:         }
Nov 22 09:07:53 compute-0 romantic_borg[276215]:     ]
Nov 22 09:07:53 compute-0 romantic_borg[276215]: }
Nov 22 09:07:53 compute-0 systemd[1]: libpod-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope: Deactivated successfully.
Nov 22 09:07:53 compute-0 podman[276200]: 2025-11-22 09:07:53.674040941 +0000 UTC m=+1.059417192 container died 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df-merged.mount: Deactivated successfully.
Nov 22 09:07:53 compute-0 podman[276200]: 2025-11-22 09:07:53.734729941 +0000 UTC m=+1.120106182 container remove 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:07:53 compute-0 systemd[1]: libpod-conmon-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope: Deactivated successfully.
Nov 22 09:07:53 compute-0 sudo[276095]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:53 compute-0 sudo[276238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:53 compute-0 sudo[276238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:53 compute-0 sudo[276238]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:53 compute-0 sudo[276263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:07:53 compute-0 sudo[276263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:53 compute-0 sudo[276263]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 22 09:07:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 22 09:07:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 22 09:07:54 compute-0 sudo[276288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:54 compute-0 sudo[276288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:54 compute-0 sudo[276288]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:54 compute-0 ceph-mon[75021]: osdmap e177: 3 total, 3 up, 3 in
Nov 22 09:07:54 compute-0 ceph-mon[75021]: pgmap v1248: 305 pgs: 305 active+clean; 144 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.8 KiB/s wr, 191 op/s
Nov 22 09:07:54 compute-0 ceph-mon[75021]: osdmap e178: 3 total, 3 up, 3 in
Nov 22 09:07:54 compute-0 sudo[276313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:07:54 compute-0 sudo[276313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:07:54 compute-0 nova_compute[253661]: 2025-11-22 09:07:54.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.493992778 +0000 UTC m=+0.076292488 container create a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.454440825 +0000 UTC m=+0.036740555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:54 compute-0 systemd[1]: Started libpod-conmon-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope.
Nov 22 09:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.73337591 +0000 UTC m=+0.315675620 container init a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.742690924 +0000 UTC m=+0.324990634 container start a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:07:54 compute-0 brave_hypatia[276396]: 167 167
Nov 22 09:07:54 compute-0 systemd[1]: libpod-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope: Deactivated successfully.
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.807875413 +0000 UTC m=+0.390175133 container attach a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:07:54 compute-0 podman[276380]: 2025-11-22 09:07:54.808727994 +0000 UTC m=+0.391027704 container died a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:07:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:07:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Nov 22 09:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-47169f60c1609004754af649dc08bc719fbb0f0a3a6348b48a8855774d3494b3-merged.mount: Deactivated successfully.
Nov 22 09:07:55 compute-0 ceph-mon[75021]: pgmap v1250: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Nov 22 09:07:55 compute-0 podman[276380]: 2025-11-22 09:07:55.336413675 +0000 UTC m=+0.918713425 container remove a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:07:55 compute-0 systemd[1]: libpod-conmon-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope: Deactivated successfully.
Nov 22 09:07:55 compute-0 podman[276420]: 2025-11-22 09:07:55.499368558 +0000 UTC m=+0.024923781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:07:55 compute-0 podman[276420]: 2025-11-22 09:07:55.610022151 +0000 UTC m=+0.135577354 container create 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:07:55 compute-0 systemd[1]: Started libpod-conmon-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope.
Nov 22 09:07:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:07:55 compute-0 podman[276420]: 2025-11-22 09:07:55.703947252 +0000 UTC m=+0.229502555 container init 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:07:55 compute-0 podman[276420]: 2025-11-22 09:07:55.713871611 +0000 UTC m=+0.239426814 container start 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:07:55 compute-0 podman[276420]: 2025-11-22 09:07:55.718288897 +0000 UTC m=+0.243844310 container attach 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:55.999 253665 DEBUG nova.compute.manager [None req-eea21a1d-e179-432d-9b9e-17292723997b 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.009 253665 INFO nova.compute.manager [None req-eea21a1d-e179-432d-9b9e-17292723997b 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Retrieving diagnostics
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.308 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.311 253665 INFO nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Terminating instance
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.311 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.312 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquired lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.312 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]: {
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_id": 1,
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "type": "bluestore"
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     },
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_id": 0,
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "type": "bluestore"
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     },
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_id": 2,
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:         "type": "bluestore"
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]:     }
Nov 22 09:07:56 compute-0 condescending_maxwell[276437]: }
Nov 22 09:07:56 compute-0 systemd[1]: libpod-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Deactivated successfully.
Nov 22 09:07:56 compute-0 podman[276420]: 2025-11-22 09:07:56.751052157 +0000 UTC m=+1.276607370 container died 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:07:56 compute-0 systemd[1]: libpod-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Consumed 1.011s CPU time.
Nov 22 09:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12-merged.mount: Deactivated successfully.
Nov 22 09:07:56 compute-0 nova_compute[253661]: 2025-11-22 09:07:56.802 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:56 compute-0 podman[276420]: 2025-11-22 09:07:56.806125352 +0000 UTC m=+1.331680555 container remove 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 22 09:07:56 compute-0 systemd[1]: libpod-conmon-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Deactivated successfully.
Nov 22 09:07:56 compute-0 sudo[276313]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:07:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:07:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 193e80e1-8cab-4d30-bf6c-bbc38c84334c does not exist
Nov 22 09:07:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b0d2d8f2-b678-4a89-b8ad-0da17f43e214 does not exist
Nov 22 09:07:56 compute-0 sudo[276482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:07:56 compute-0 sudo[276482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:56 compute-0 sudo[276482]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:56 compute-0 sudo[276507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:07:56 compute-0 sudo[276507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:07:56 compute-0 sudo[276507]: pam_unix(sudo:session): session closed for user root
Nov 22 09:07:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 96 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.1 MiB/s wr, 183 op/s
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.168 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.183 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Releasing lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.184 253665 DEBUG nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:57 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 22 09:07:57 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 12.469s CPU time.
Nov 22 09:07:57 compute-0 systemd-machined[215941]: Machine qemu-10-instance-0000000a terminated.
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.413 253665 INFO nova.virt.libvirt.driver [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance destroyed successfully.
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.415 253665 DEBUG nova.objects.instance [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'resources' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.821 253665 INFO nova.virt.libvirt.driver [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deleting instance files /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_del
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.822 253665 INFO nova.virt.libvirt.driver [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deletion of /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_del complete
Nov 22 09:07:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:07:57 compute-0 ceph-mon[75021]: pgmap v1251: 305 pgs: 305 active+clean; 96 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.1 MiB/s wr, 183 op/s
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.893 253665 INFO nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.895 253665 DEBUG oslo.service.loopingcall [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.896 253665 DEBUG nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:57 compute-0 nova_compute[253661]: 2025-11-22 09:07:57.897 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.071 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.086 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.097 253665 INFO nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 0.20 seconds to deallocate network for instance.
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.151 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.151 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.196 253665 DEBUG oslo_concurrency.processutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:07:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690782661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.645 253665 DEBUG oslo_concurrency.processutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.653 253665 DEBUG nova.compute.provider_tree [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.669 253665 DEBUG nova.scheduler.client.report [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.690 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.725 253665 INFO nova.scheduler.client.report [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Deleted allocations for instance c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12
Nov 22 09:07:58 compute-0 nova_compute[253661]: 2025-11-22 09:07:58.786 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2690782661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:07:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:07:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 22 09:07:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 22 09:07:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 22 09:07:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 661 KiB/s rd, 4.1 MiB/s wr, 264 op/s
Nov 22 09:07:59 compute-0 nova_compute[253661]: 2025-11-22 09:07:59.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:00 compute-0 ceph-mon[75021]: osdmap e179: 3 total, 3 up, 3 in
Nov 22 09:08:00 compute-0 ceph-mon[75021]: pgmap v1253: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 661 KiB/s rd, 4.1 MiB/s wr, 264 op/s
Nov 22 09:08:00 compute-0 nova_compute[253661]: 2025-11-22 09:08:00.866 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802465.8661232, bfc23def-6d15-4b5e-959e-3165bc676f9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:00 compute-0 nova_compute[253661]: 2025-11-22 09:08:00.867 253665 INFO nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Stopped (Lifecycle Event)
Nov 22 09:08:00 compute-0 nova_compute[253661]: 2025-11-22 09:08:00.884 253665 DEBUG nova.compute.manager [None req-a9870ad6-019c-4059-85ab-41f55a12bd79 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 3.1 MiB/s wr, 198 op/s
Nov 22 09:08:02 compute-0 ceph-mon[75021]: pgmap v1254: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 3.1 MiB/s wr, 198 op/s
Nov 22 09:08:02 compute-0 nova_compute[253661]: 2025-11-22 09:08:02.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001601849929999349 of space, bias 1.0, pg target 0.04805549789998047 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:08:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:08:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 444 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 22 09:08:03 compute-0 ceph-mon[75021]: pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 444 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 22 09:08:03 compute-0 nova_compute[253661]: 2025-11-22 09:08:03.894 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802468.892703, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:03 compute-0 nova_compute[253661]: 2025-11-22 09:08:03.894 253665 INFO nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Stopped (Lifecycle Event)
Nov 22 09:08:03 compute-0 nova_compute[253661]: 2025-11-22 09:08:03.914 253665 DEBUG nova.compute.manager [None req-99e177d7-0d53-4d81-b6e9-de1a3d164b35 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 22 09:08:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 22 09:08:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 22 09:08:04 compute-0 nova_compute[253661]: 2025-11-22 09:08:04.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 22 09:08:05 compute-0 ceph-mon[75021]: osdmap e180: 3 total, 3 up, 3 in
Nov 22 09:08:06 compute-0 ceph-mon[75021]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 22 09:08:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 15 KiB/s wr, 30 op/s
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:07 compute-0 ceph-mon[75021]: pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 15 KiB/s wr, 30 op/s
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.411 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.412 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.444 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.525 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.526 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.532 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.533 253665 INFO nova.compute.claims [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:08:07 compute-0 nova_compute[253661]: 2025-11-22 09:08:07.650 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195264242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.133 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.139 253665 DEBUG nova.compute.provider_tree [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4195264242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.340 253665 DEBUG nova.scheduler.client.report [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.384 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.385 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.464 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.464 253665 DEBUG nova.network.neutron [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.511 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.573 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.707 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.708 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.709 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating image(s)
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.731 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.754 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.777 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.782 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.849 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.850 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.850 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.851 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.873 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:08 compute-0 nova_compute[253661]: 2025-11-22 09:08:08.878 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 36e46542-ccae-4acd-9191-80f54d6bc694_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.063 253665 DEBUG nova.network.neutron [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.065 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:08:09 compute-0 ceph-mon[75021]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.241 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 36e46542-ccae-4acd-9191-80f54d6bc694_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.293 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] resizing rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.398 253665 DEBUG nova.objects.instance [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'migration_context' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.413 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.413 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Ensure instance console log exists: /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.416 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.420 253665 WARNING nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.424 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.425 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.427 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.433 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.511 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.511 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.597 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.830 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.831 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.839 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.839 253665 INFO nova.compute.claims [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:08:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992786895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.866 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.887 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:09 compute-0 nova_compute[253661]: 2025-11-22 09:08:09.891 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.202 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1992786895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181363758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.328 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.331 253665 DEBUG nova.objects.instance [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.345 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <uuid>36e46542-ccae-4acd-9191-80f54d6bc694</uuid>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <name>instance-0000000b</name>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-422854885</nova:name>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:08:09</nova:creationTime>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:user uuid="543b664d1bde44719b208e5f3e6902f1">tempest-ServerDiagnosticsNegativeTest-832003805-project-member</nova:user>
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <nova:project uuid="f6e0addc8d86425c9ba676b31319cd79">tempest-ServerDiagnosticsNegativeTest-832003805</nova:project>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <system>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="serial">36e46542-ccae-4acd-9191-80f54d6bc694</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="uuid">36e46542-ccae-4acd-9191-80f54d6bc694</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </system>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <os>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </os>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <features>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </features>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/36e46542-ccae-4acd-9191-80f54d6bc694_disk">
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/36e46542-ccae-4acd-9191-80f54d6bc694_disk.config">
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:10 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/console.log" append="off"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <video>
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </video>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:08:10 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:08:10 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:08:10 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:08:10 compute-0 nova_compute[253661]: </domain>
Nov 22 09:08:10 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:08:10 compute-0 podman[276826]: 2025-11-22 09:08:10.381207451 +0000 UTC m=+0.066860871 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.406 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.406 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.407 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Using config drive
Nov 22 09:08:10 compute-0 podman[276827]: 2025-11-22 09:08:10.41569502 +0000 UTC m=+0.097470356 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.429 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.564 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating config drive at /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.569 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnsplv3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040856504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.654 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.660 253665 DEBUG nova.compute.provider_tree [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.677 253665 DEBUG nova.scheduler.client.report [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.700 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnsplv3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.726 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.731 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.753 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.755 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.847 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.868 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.916 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.917 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deleting local config drive /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config because it was imported into RBD.
Nov 22 09:08:10 compute-0 nova_compute[253661]: 2025-11-22 09:08:10.972 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:08:10 compute-0 systemd-machined[215941]: New machine qemu-11-instance-0000000b.
Nov 22 09:08:11 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.037 253665 DEBUG oslo_concurrency.processutils [None req-6f69d831-742a-406a-8431-d31085746923 c4e39f8ab7c04e3f8670f5bbc0bc8dc3 1f91f31d3b434312a707fcb491e8cf89 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.078 253665 DEBUG oslo_concurrency.processutils [None req-6f69d831-742a-406a-8431-d31085746923 c4e39f8ab7c04e3f8670f5bbc0bc8dc3 1f91f31d3b434312a707fcb491e8cf89 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1181363758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4040856504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:11 compute-0 ceph-mon[75021]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.313 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.315 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.316 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.338 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.363 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.385 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.389 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.450 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.451 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.452 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.452 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.470 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.472 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.894 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:11 compute-0 nova_compute[253661]: 2025-11-22 09:08:11.961 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.094 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802492.0942516, 36e46542-ccae-4acd-9191-80f54d6bc694 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Resumed (Lifecycle Event)
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.098 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.099 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.103 253665 INFO nova.virt.libvirt.driver [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance spawned successfully.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.104 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.135 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.143 253665 DEBUG nova.objects.instance [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.147 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.150 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.151 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.151 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.152 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.152 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.153 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.158 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.159 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.159 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.160 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.160 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.161 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.164 253665 WARNING nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.168 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.169 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802492.0978801, 36e46542-ccae-4acd-9191-80f54d6bc694 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Started (Lifecycle Event)
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.179 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.181 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.204 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.209 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.229 253665 INFO nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 3.52 seconds to spawn the instance on the hypervisor.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.230 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:08:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:08:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:08:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.294 253665 INFO nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 4.80 seconds to build instance.
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.320 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:08:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.410 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802477.4100316, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.412 253665 INFO nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Stopped (Lifecycle Event)
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.431 253665 DEBUG nova.compute.manager [None req-dc24ba31-803e-41fa-b461-d15743ad6ae6 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681831094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.651 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.677 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:12 compute-0 nova_compute[253661]: 2025-11-22 09:08:12.682 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 53 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 852 KiB/s wr, 26 op/s
Nov 22 09:08:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966798618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.161 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.164 253665 DEBUG nova.objects.instance [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.176 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <name>instance-0000000c</name>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:08:12</nova:creationTime>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <system>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </system>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <os>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </os>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <features>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </features>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <video>
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </video>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:08:13 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:08:13 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:08:13 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:08:13 compute-0 nova_compute[253661]: </domain>
Nov 22 09:08:13 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.226 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.226 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.227 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.246 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3681831094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:13 compute-0 ceph-mon[75021]: pgmap v1261: 305 pgs: 305 active+clean; 53 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 852 KiB/s wr, 26 op/s
Nov 22 09:08:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1966798618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.805 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.811 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5b3pgbpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.951 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5b3pgbpm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.974 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:13 compute-0 nova_compute[253661]: 2025-11-22 09:08:13.977 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.124 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.127 253665 INFO nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Terminating instance
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquired lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.319 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:08:14 compute-0 podman[277285]: 2025-11-22 09:08:14.430760726 +0000 UTC m=+0.117283654 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.605 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.606 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.
Nov 22 09:08:14 compute-0 systemd-machined[215941]: New machine qemu-12-instance-0000000c.
Nov 22 09:08:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44051744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:14 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.798 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.799 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:08:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/44051744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.959 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4452MB free_disk=59.98017120361328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:14 compute-0 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.042 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 36e46542-ccae-4acd-9191-80f54d6bc694 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.042 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c233bbff-b2e9-442f-818d-e8487dee1c3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:08:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 108 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.209 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.228 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Releasing lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.229 253665 DEBUG nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.357 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802495.3567834, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.357 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.360 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.360 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.365 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.366 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.379 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.386 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.390 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.390 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.391 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.391 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.392 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.392 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802495.357988, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.434 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.436 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.453 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:15 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 22 09:08:15 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 4.241s CPU time.
Nov 22 09:08:15 compute-0 systemd-machined[215941]: Machine qemu-11-instance-0000000b terminated.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.475 253665 INFO nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 4.16 seconds to spawn the instance on the hypervisor.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.475 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.528 253665 INFO nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 5.85 seconds to build instance.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.546 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409087369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.610 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.617 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.630 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.655 253665 INFO nova.virt.libvirt.driver [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance destroyed successfully.
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.656 253665 DEBUG nova.objects.instance [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'resources' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.701 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:08:15 compute-0 nova_compute[253661]: 2025-11-22 09:08:15.701 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:15 compute-0 ceph-mon[75021]: pgmap v1262: 305 pgs: 305 active+clean; 108 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Nov 22 09:08:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/409087369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:16 compute-0 nova_compute[253661]: 2025-11-22 09:08:16.693 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:16 compute-0 nova_compute[253661]: 2025-11-22 09:08:16.694 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:08:17 compute-0 ceph-mon[75021]: pgmap v1263: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:08:17 compute-0 nova_compute[253661]: 2025-11-22 09:08:17.409 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:08:18 compute-0 nova_compute[253661]: 2025-11-22 09:08:18.507 253665 INFO nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Rebuilding instance
Nov 22 09:08:18 compute-0 nova_compute[253661]: 2025-11-22 09:08:18.759 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.041 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:19 compute-0 ceph-mon[75021]: pgmap v1264: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.373 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_requests' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.393 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.404 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.416 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.427 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.433 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:08:19 compute-0 nova_compute[253661]: 2025-11-22 09:08:19.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.069 253665 INFO nova.virt.libvirt.driver [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deleting instance files /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694_del
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.070 253665 INFO nova.virt.libvirt.driver [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deletion of /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694_del complete
Nov 22 09:08:21 compute-0 ceph-mon[75021]: pgmap v1265: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 INFO nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 6.33 seconds to destroy the instance on the hypervisor.
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 DEBUG oslo.service.loopingcall [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 DEBUG nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.562 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.745 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.760 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.782 253665 INFO nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 0.22 seconds to deallocate network for instance.
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.940 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.941 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:21 compute-0 nova_compute[253661]: 2025-11-22 09:08:21.999 253665 DEBUG oslo_concurrency.processutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503406224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.477 253665 DEBUG oslo_concurrency.processutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.484 253665 DEBUG nova.compute.provider_tree [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.500 253665 DEBUG nova.scheduler.client.report [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.595 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3503406224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.677 253665 INFO nova.scheduler.client.report [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Deleted allocations for instance 36e46542-ccae-4acd-9191-80f54d6bc694
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:22 compute-0 nova_compute[253661]: 2025-11-22 09:08:22.869 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Nov 22 09:08:23 compute-0 ovn_controller[152872]: 2025-11-22T09:08:23Z|00043|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 09:08:23 compute-0 ceph-mon[75021]: pgmap v1266: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Nov 22 09:08:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:24 compute-0 nova_compute[253661]: 2025-11-22 09:08:24.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 205 op/s
Nov 22 09:08:25 compute-0 ceph-mon[75021]: pgmap v1267: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 205 op/s
Nov 22 09:08:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Nov 22 09:08:27 compute-0 nova_compute[253661]: 2025-11-22 09:08:27.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:27 compute-0 ceph-mon[75021]: pgmap v1268: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Nov 22 09:08:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 946 KiB/s wr, 87 op/s
Nov 22 09:08:29 compute-0 ceph-mon[75021]: pgmap v1269: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 946 KiB/s wr, 87 op/s
Nov 22 09:08:29 compute-0 nova_compute[253661]: 2025-11-22 09:08:29.476 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:08:29 compute-0 nova_compute[253661]: 2025-11-22 09:08:29.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:30 compute-0 nova_compute[253661]: 2025-11-22 09:08:30.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.395 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:08:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.397 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:08:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.398 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:08:30 compute-0 nova_compute[253661]: 2025-11-22 09:08:30.654 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802495.6528172, 36e46542-ccae-4acd-9191-80f54d6bc694 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:30 compute-0 nova_compute[253661]: 2025-11-22 09:08:30.655 253665 INFO nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Stopped (Lifecycle Event)
Nov 22 09:08:30 compute-0 nova_compute[253661]: 2025-11-22 09:08:30.677 253665 DEBUG nova.compute.manager [None req-4b603693-aa61-4dce-b1c4-538152544081 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 749 KiB/s rd, 945 KiB/s wr, 56 op/s
Nov 22 09:08:31 compute-0 ceph-mon[75021]: pgmap v1270: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 749 KiB/s rd, 945 KiB/s wr, 56 op/s
Nov 22 09:08:32 compute-0 nova_compute[253661]: 2025-11-22 09:08:32.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 116 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 09:08:33 compute-0 ceph-mon[75021]: pgmap v1271: 305 pgs: 305 active+clean; 116 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 09:08:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 09:08:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.948s CPU time.
Nov 22 09:08:33 compute-0 systemd-machined[215941]: Machine qemu-12-instance-0000000c terminated.
Nov 22 09:08:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:34 compute-0 nova_compute[253661]: 2025-11-22 09:08:34.498 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance shutdown successfully after 15 seconds.
Nov 22 09:08:34 compute-0 nova_compute[253661]: 2025-11-22 09:08:34.506 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.
Nov 22 09:08:34 compute-0 nova_compute[253661]: 2025-11-22 09:08:34.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:34 compute-0 nova_compute[253661]: 2025-11-22 09:08:34.511 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.
Nov 22 09:08:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 22 09:08:35 compute-0 ceph-mon[75021]: pgmap v1272: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 22 09:08:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:08:37 compute-0 nova_compute[253661]: 2025-11-22 09:08:37.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:37 compute-0 ceph-mon[75021]: pgmap v1273: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:08:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 22 09:08:39 compute-0 ceph-mon[75021]: pgmap v1274: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 22 09:08:39 compute-0 nova_compute[253661]: 2025-11-22 09:08:39.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 1.2 MiB/s wr, 62 op/s
Nov 22 09:08:41 compute-0 ceph-mon[75021]: pgmap v1275: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 1.2 MiB/s wr, 62 op/s
Nov 22 09:08:41 compute-0 podman[277479]: 2025-11-22 09:08:41.373741412 +0000 UTC m=+0.065200331 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 22 09:08:41 compute-0 podman[277480]: 2025-11-22 09:08:41.376290714 +0000 UTC m=+0.068638963 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:08:41 compute-0 nova_compute[253661]: 2025-11-22 09:08:41.983 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del
Nov 22 09:08:41 compute-0 nova_compute[253661]: 2025-11-22 09:08:41.983 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.381 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.382 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.402 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.426 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.448 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.453 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.454 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:42 compute-0 nova_compute[253661]: 2025-11-22 09:08:42.823 253665 DEBUG nova.virt.libvirt.imagebackend [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/baf70c6a-4f18-40eb-9d40-874af269a47f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/baf70c6a-4f18-40eb-9d40-874af269a47f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:08:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.466085) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523466131, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1553, "num_deletes": 259, "total_data_size": 2163425, "memory_usage": 2201440, "flush_reason": "Manual Compaction"}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 22 09:08:43 compute-0 ceph-mon[75021]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523550383, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2126175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24416, "largest_seqno": 25968, "table_properties": {"data_size": 2118928, "index_size": 4190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15993, "raw_average_key_size": 20, "raw_value_size": 2104114, "raw_average_value_size": 2732, "num_data_blocks": 185, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802405, "oldest_key_time": 1763802405, "file_creation_time": 1763802523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 84341 microseconds, and 6436 cpu microseconds.
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.550425) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2126175 bytes OK
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.550462) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618574) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618628) EVENT_LOG_v1 {"time_micros": 1763802523618616, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2156447, prev total WAL file size 2156447, number of live WAL files 2.
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.619579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2076KB)], [56(7037KB)]
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523619620, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9332945, "oldest_snapshot_seqno": -1}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4844 keys, 7570594 bytes, temperature: kUnknown
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523754777, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7570594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7537454, "index_size": 19910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 121342, "raw_average_key_size": 25, "raw_value_size": 7449196, "raw_average_value_size": 1537, "num_data_blocks": 818, "num_entries": 4844, "num_filter_entries": 4844, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.755196) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7570594 bytes
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.795388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.9 rd, 55.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 5371, records dropped: 527 output_compression: NoCompression
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.795432) EVENT_LOG_v1 {"time_micros": 1763802523795415, "job": 30, "event": "compaction_finished", "compaction_time_micros": 135369, "compaction_time_cpu_micros": 19463, "output_level": 6, "num_output_files": 1, "total_output_size": 7570594, "num_input_records": 5371, "num_output_records": 4844, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523796587, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523798125, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.619490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:43 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:08:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:44 compute-0 nova_compute[253661]: 2025-11-22 09:08:44.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:44 compute-0 nova_compute[253661]: 2025-11-22 09:08:44.934 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:44 compute-0 nova_compute[253661]: 2025-11-22 09:08:44.934 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:44 compute-0 nova_compute[253661]: 2025-11-22 09:08:44.964 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:08:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 75 KiB/s wr, 47 op/s
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.123 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.123 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.131 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.131 253665 INFO nova.compute.claims [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:08:45 compute-0 ceph-mon[75021]: pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 75 KiB/s wr, 47 op/s
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.257 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.380 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:45 compute-0 podman[277574]: 2025-11-22 09:08:45.44589457 +0000 UTC m=+0.129813315 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.450 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.452 253665 DEBUG nova.virt.images [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] baf70c6a-4f18-40eb-9d40-874af269a47f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.466 253665 DEBUG nova.privsep.utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.467 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:08:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180418099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.713 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.720 253665 DEBUG nova.compute.provider_tree [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.732 253665 DEBUG nova.scheduler.client.report [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.827 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.832 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.897 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.899 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.925 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.930 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.984 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:45 compute-0 nova_compute[253661]: 2025-11-22 09:08:45.985 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.211 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.211 253665 DEBUG nova.network.neutron [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.245 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.274 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:08:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2180418099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.462 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.527 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.566 253665 DEBUG nova.network.neutron [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.567 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.582 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.584 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.584 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating image(s)
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.611 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.637 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.660 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.664 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.724 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.725 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.726 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.726 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.749 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.755 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.820 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.822 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.823 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.826 253665 WARNING nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.831 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.831 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.834 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.834 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.835 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.835 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.839 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:46 compute-0 nova_compute[253661]: 2025-11-22 09:08:46.853 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 780 KiB/s wr, 35 op/s
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140003744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.496 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:47 compute-0 ceph-mon[75021]: pgmap v1278: 305 pgs: 305 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 780 KiB/s wr, 35 op/s
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.701 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.706 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.730 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.975s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:47 compute-0 nova_compute[253661]: 2025-11-22 09:08:47.795 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] resizing rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:08:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081899854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.160 253665 DEBUG nova.objects.instance [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'migration_context' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.166 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.169 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <name>instance-0000000c</name>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:08:46</nova:creationTime>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <system>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </system>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <os>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </os>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <features>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </features>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <video>
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </video>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:08:48 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:08:48 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:08:48 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:08:48 compute-0 nova_compute[253661]: </domain>
Nov 22 09:08:48 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.173 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.173 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Ensure instance console log exists: /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.176 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.181 253665 WARNING nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.184 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.185 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.187 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.194 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.248 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.248 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.249 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.269 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.289 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.383 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'keypairs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/140003744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1081899854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421013341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.635 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.661 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.667 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.720 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.726 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip7pirtm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.858 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip7pirtm" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.890 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:48 compute-0 nova_compute[253661]: 2025-11-22 09:08:48.894 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 82 op/s
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.119 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802514.1184192, c233bbff-b2e9-442f-818d-e8487dee1c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.120 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Stopped (Lifecycle Event)
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.143 253665 DEBUG nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.147 253665 DEBUG nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.166 253665 INFO nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:08:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:08:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026124718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.208 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.210 253665 DEBUG nova.objects.instance [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'pci_devices' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.222 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <uuid>103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</uuid>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <name>instance-0000000d</name>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerExternalEventsTest-server-300434324</nova:name>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:08:48</nova:creationTime>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:user uuid="2e259dc1688c42e3ba13f2239d49b39e">tempest-ServerExternalEventsTest-107261648-project-member</nova:user>
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <nova:project uuid="420f909f2162475ba3f933633661986f">tempest-ServerExternalEventsTest-107261648</nova:project>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <system>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="serial">103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="uuid">103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </system>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <os>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </os>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <features>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </features>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk">
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config">
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:08:49 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/console.log" append="off"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <video>
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </video>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:08:49 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:08:49 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:08:49 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:08:49 compute-0 nova_compute[253661]: </domain>
Nov 22 09:08:49 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.270 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.270 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.271 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Using config drive
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.291 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.433 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.434 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.
Nov 22 09:08:49 compute-0 systemd-machined[215941]: New machine qemu-13-instance-0000000c.
Nov 22 09:08:49 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3421013341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4026124718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.821 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating config drive at /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.826 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejie2xil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.902 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802529.9012933, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.903 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.907 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.907 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.912 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.912 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.925 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.931 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.938 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.940 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.940 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.947 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802529.903311, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.948 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.958 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejie2xil" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.982 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:08:49 compute-0 nova_compute[253661]: 2025-11-22 09:08:49.986 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.012 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.016 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.034 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.195 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.361 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.362 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.363 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.420 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.516 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.516 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deleting local config drive /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config because it was imported into RBD.
Nov 22 09:08:50 compute-0 systemd-machined[215941]: New machine qemu-14-instance-0000000d.
Nov 22 09:08:50 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Nov 22 09:08:50 compute-0 ceph-mon[75021]: pgmap v1279: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 82 op/s
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.974 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802530.9741821, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.975 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Resumed (Lifecycle Event)
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.977 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.978 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance spawned successfully.
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.981 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:08:50 compute-0 nova_compute[253661]: 2025-11-22 09:08:50.998 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.008 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.012 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.013 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.013 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.014 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.014 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.015 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.042 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.043 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802530.974989, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.043 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Started (Lifecycle Event)
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.063 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.067 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:08:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.084 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.614 253665 INFO nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 5.03 seconds to spawn the instance on the hypervisor.
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.615 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.673 253665 INFO nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 6.57 seconds to build instance.
Nov 22 09:08:51 compute-0 nova_compute[253661]: 2025-11-22 09:08:51.704 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:52 compute-0 nova_compute[253661]: 2025-11-22 09:08:52.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:08:52
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes']
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:08:52 compute-0 ceph-mon[75021]: pgmap v1280: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:08:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 09:08:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:08:54 compute-0 nova_compute[253661]: 2025-11-22 09:08:54.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:54 compute-0 ceph-mon[75021]: pgmap v1281: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:08:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:08:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.810 253665 DEBUG nova.compute.manager [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.811 253665 DEBUG nova.compute.manager [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Acquiring lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Acquired lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.927 253665 INFO nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Rebuilding instance
Nov 22 09:08:56 compute-0 ceph-mon[75021]: pgmap v1282: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 22 09:08:56 compute-0 nova_compute[253661]: 2025-11-22 09:08:56.983 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:08:57 compute-0 sudo[278264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:08:57 compute-0 sudo[278264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278264]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:57 compute-0 sudo[278289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:08:57 compute-0 sudo[278289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278289]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.133 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'trusted_certs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.145 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:57 compute-0 sudo[278314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:08:57 compute-0 sudo[278314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278314]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 sudo[278339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:08:57 compute-0 sudo[278339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.460 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.474 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Releasing lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.623 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'pci_requests' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.635 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.646 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.657 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.667 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.671 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:08:57 compute-0 sudo[278339]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.740 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.741 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.744 253665 INFO nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Terminating instance
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.748 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.749 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquired lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.749 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:08:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e1a7b0ff-353b-4c6e-b4bb-7d587dfccf9e does not exist
Nov 22 09:08:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 102200e4-8b49-4f77-a18a-90aeaf66143e does not exist
Nov 22 09:08:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 52a1c8a0-f1b2-4bb0-81ab-b1890b9abfd2 does not exist
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:08:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:08:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:08:57 compute-0 sudo[278394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:08:57 compute-0 sudo[278394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278394]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 sudo[278419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:08:57 compute-0 sudo[278419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278419]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:57 compute-0 nova_compute[253661]: 2025-11-22 09:08:57.926 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:08:57 compute-0 sudo[278444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:08:57 compute-0 sudo[278444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:57 compute-0 sudo[278444]: pam_unix(sudo:session): session closed for user root
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:08:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:08:58 compute-0 sudo[278469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:08:58 compute-0 sudo[278469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.158 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.170 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Releasing lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.171 253665 DEBUG nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:08:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 22 09:08:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 7.682s CPU time.
Nov 22 09:08:58 compute-0 systemd-machined[215941]: Machine qemu-14-instance-0000000d terminated.
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.397 253665 INFO nova.virt.libvirt.driver [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance destroyed successfully.
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.397 253665 DEBUG nova.objects.instance [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'resources' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:58 compute-0 podman[278536]: 2025-11-22 09:08:58.368518648 +0000 UTC m=+0.022100488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:08:58 compute-0 podman[278536]: 2025-11-22 09:08:58.662811129 +0000 UTC m=+0.316392939 container create efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.931 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:58 compute-0 nova_compute[253661]: 2025-11-22 09:08:58.989 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:08:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.253 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.254 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.262 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.262 253665 INFO nova.compute.claims [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:08:59 compute-0 systemd[1]: Started libpod-conmon-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope.
Nov 22 09:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.451 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:08:59 compute-0 nova_compute[253661]: 2025-11-22 09:08:59.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:08:59 compute-0 ceph-mon[75021]: pgmap v1283: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Nov 22 09:08:59 compute-0 podman[278536]: 2025-11-22 09:08:59.694950777 +0000 UTC m=+1.348532607 container init efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:08:59 compute-0 podman[278536]: 2025-11-22 09:08:59.703109656 +0000 UTC m=+1.356691486 container start efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:08:59 compute-0 confident_aryabhata[278572]: 167 167
Nov 22 09:08:59 compute-0 systemd[1]: libpod-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope: Deactivated successfully.
Nov 22 09:08:59 compute-0 podman[278536]: 2025-11-22 09:08:59.936995691 +0000 UTC m=+1.590577521 container attach efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:08:59 compute-0 podman[278536]: 2025-11-22 09:08:59.937946415 +0000 UTC m=+1.591528235 container died efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:09:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972792082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.150 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.157 253665 DEBUG nova.compute.provider_tree [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.171 253665 DEBUG nova.scheduler.client.report [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.359 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.360 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.626 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.626 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.694 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:09:00 compute-0 nova_compute[253661]: 2025-11-22 09:09:00.762 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec68f0275a2f9f0397a7aa8bd93cc8a0374d7d95cd34332d2e089ca0dab41b49-merged.mount: Deactivated successfully.
Nov 22 09:09:00 compute-0 ceph-mon[75021]: pgmap v1284: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 22 09:09:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2972792082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.240 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.242 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.242 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating image(s)
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.273 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.308 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.345 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.351 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.381 253665 DEBUG nova.policy [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee24e4812c424984881862883987d750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5879249ab50a40ec9553bc923bdd1042', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.440 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.441 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.442 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.442 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.462 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:01 compute-0 nova_compute[253661]: 2025-11-22 09:09:01.467 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:01 compute-0 podman[278536]: 2025-11-22 09:09:01.90822282 +0000 UTC m=+3.561804660 container remove efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:09:01 compute-0 systemd[1]: libpod-conmon-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope: Deactivated successfully.
Nov 22 09:09:02 compute-0 nova_compute[253661]: 2025-11-22 09:09:02.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:02 compute-0 podman[278710]: 2025-11-22 09:09:02.083517244 +0000 UTC m=+0.025513610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:09:02 compute-0 podman[278710]: 2025-11-22 09:09:02.182013053 +0000 UTC m=+0.124009399 container create 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:09:02 compute-0 systemd[1]: Started libpod-conmon-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope.
Nov 22 09:09:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:09:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:09:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 09:09:03 compute-0 podman[278710]: 2025-11-22 09:09:03.226943333 +0000 UTC m=+1.168939709 container init 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:09:03 compute-0 podman[278710]: 2025-11-22 09:09:03.235284845 +0000 UTC m=+1.177281191 container start 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:09:03 compute-0 ceph-mon[75021]: pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 09:09:03 compute-0 podman[278710]: 2025-11-22 09:09:03.655815171 +0000 UTC m=+1.597811507 container attach 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:09:04 compute-0 nova_compute[253661]: 2025-11-22 09:09:04.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:04 compute-0 nova_compute[253661]: 2025-11-22 09:09:04.660 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Successfully created port: 0122a4be-9c10-4475-ba7d-5c818be52474 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:09:04 compute-0 trusting_satoshi[278727]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:09:04 compute-0 trusting_satoshi[278727]: --> relative data size: 1.0
Nov 22 09:09:04 compute-0 trusting_satoshi[278727]: --> All data devices are unavailable
Nov 22 09:09:04 compute-0 ceph-mon[75021]: pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 09:09:04 compute-0 systemd[1]: libpod-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Deactivated successfully.
Nov 22 09:09:04 compute-0 podman[278710]: 2025-11-22 09:09:04.713436477 +0000 UTC m=+2.655432853 container died 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:09:04 compute-0 systemd[1]: libpod-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Consumed 1.039s CPU time.
Nov 22 09:09:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 137 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 351 KiB/s wr, 133 op/s
Nov 22 09:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f-merged.mount: Deactivated successfully.
Nov 22 09:09:05 compute-0 podman[278710]: 2025-11-22 09:09:05.489789818 +0000 UTC m=+3.431786154 container remove 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:09:05 compute-0 systemd[1]: libpod-conmon-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Deactivated successfully.
Nov 22 09:09:05 compute-0 sudo[278469]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:05 compute-0 sudo[278771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:09:05 compute-0 sudo[278771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:05 compute-0 sudo[278771]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:05 compute-0 sudo[278796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:09:05 compute-0 nova_compute[253661]: 2025-11-22 09:09:05.655 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:05 compute-0 sudo[278796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:05 compute-0 sudo[278796]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:05 compute-0 sudo[278829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:09:05 compute-0 sudo[278829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:05 compute-0 sudo[278829]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:05 compute-0 nova_compute[253661]: 2025-11-22 09:09:05.746 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] resizing rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:09:05 compute-0 sudo[278882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:09:05 compute-0 sudo[278882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.126372817 +0000 UTC m=+0.025361678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.271844426 +0000 UTC m=+0.170833307 container create b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:09:06 compute-0 systemd[1]: Started libpod-conmon-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope.
Nov 22 09:09:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.629627039 +0000 UTC m=+0.528615990 container init b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.639051258 +0000 UTC m=+0.538040099 container start b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:09:06 compute-0 practical_wiles[278983]: 167 167
Nov 22 09:09:06 compute-0 systemd[1]: libpod-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope: Deactivated successfully.
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.758758723 +0000 UTC m=+0.657747564 container attach b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:09:06 compute-0 podman[278967]: 2025-11-22 09:09:06.759721236 +0000 UTC m=+0.658710077 container died b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:09:06 compute-0 ceph-mon[75021]: pgmap v1287: 305 pgs: 305 active+clean; 137 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 351 KiB/s wr, 133 op/s
Nov 22 09:09:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 138 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.198 253665 DEBUG nova.objects.instance [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'migration_context' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.209 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Ensure instance console log exists: /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.211 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa29562a603ed1b0c7a80982ca6f89f5d9a5280aa6acd34ce6de6e83062cd1ea-merged.mount: Deactivated successfully.
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.850 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.921 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Successfully updated port: 0122a4be-9c10-4475-ba7d-5c818be52474 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.971 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:07 compute-0 nova_compute[253661]: 2025-11-22 09:09:07.973 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:08 compute-0 nova_compute[253661]: 2025-11-22 09:09:08.065 253665 DEBUG nova.compute.manager [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:08 compute-0 nova_compute[253661]: 2025-11-22 09:09:08.066 253665 DEBUG nova.compute.manager [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:09:08 compute-0 nova_compute[253661]: 2025-11-22 09:09:08.066 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:08 compute-0 nova_compute[253661]: 2025-11-22 09:09:08.279 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:08 compute-0 ceph-mon[75021]: pgmap v1288: 305 pgs: 305 active+clean; 138 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Nov 22 09:09:08 compute-0 podman[278967]: 2025-11-22 09:09:08.852215756 +0000 UTC m=+2.751204597 container remove b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:09:08 compute-0 systemd[1]: libpod-conmon-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope: Deactivated successfully.
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.021 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:09 compute-0 podman[279025]: 2025-11-22 09:09:09.061738092 +0000 UTC m=+0.092590418 container create 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:09:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 09:09:09 compute-0 podman[279025]: 2025-11-22 09:09:08.991230831 +0000 UTC m=+0.022083157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance network_info: |[{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.133 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.135 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start _get_guest_xml network_info=[{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.139 253665 WARNING nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.143 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.144 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.147 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.151 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.154 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:09 compute-0 systemd[1]: Started libpod-conmon-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope.
Nov 22 09:09:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:09 compute-0 podman[279025]: 2025-11-22 09:09:09.317723914 +0000 UTC m=+0.348576230 container init 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:09:09 compute-0 podman[279025]: 2025-11-22 09:09:09.325890723 +0000 UTC m=+0.356743029 container start 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:09:09 compute-0 podman[279025]: 2025-11-22 09:09:09.388937343 +0000 UTC m=+0.419789709 container attach 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581701931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.622 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.647 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:09 compute-0 nova_compute[253661]: 2025-11-22 09:09:09.651 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1581701931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:10 compute-0 fervent_burnell[279044]: {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     "0": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "devices": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "/dev/loop3"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             ],
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_name": "ceph_lv0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_size": "21470642176",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "name": "ceph_lv0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "tags": {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_name": "ceph",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.crush_device_class": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.encrypted": "0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_id": "0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.vdo": "0"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             },
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "vg_name": "ceph_vg0"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         }
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     ],
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     "1": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "devices": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "/dev/loop4"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             ],
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_name": "ceph_lv1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_size": "21470642176",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "name": "ceph_lv1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "tags": {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_name": "ceph",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.crush_device_class": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.encrypted": "0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_id": "1",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.vdo": "0"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             },
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "vg_name": "ceph_vg1"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         }
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     ],
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     "2": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "devices": [
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "/dev/loop5"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             ],
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_name": "ceph_lv2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_size": "21470642176",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "name": "ceph_lv2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "tags": {
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.cluster_name": "ceph",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.crush_device_class": "",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.encrypted": "0",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osd_id": "2",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:                 "ceph.vdo": "0"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             },
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "type": "block",
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:             "vg_name": "ceph_vg2"
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:         }
Nov 22 09:09:10 compute-0 fervent_burnell[279044]:     ]
Nov 22 09:09:10 compute-0 fervent_burnell[279044]: }
Nov 22 09:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294778796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.125 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.127 253665 DEBUG nova.virt.libvirt.vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:00Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.128 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.129 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.130 253665 DEBUG nova.objects.instance [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:10 compute-0 systemd[1]: libpod-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope: Deactivated successfully.
Nov 22 09:09:10 compute-0 podman[279025]: 2025-11-22 09:09:10.134790192 +0000 UTC m=+1.165642498 container died 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <uuid>18eb7df8-f3ac-44d2-86c1-db7c0c913c53</uuid>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <name>instance-0000000e</name>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:name>tempest-SecurityGroupsTestJSON-server-663200800</nova:name>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:09:09</nova:creationTime>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <nova:port uuid="0122a4be-9c10-4475-ba7d-5c818be52474">
Nov 22 09:09:10 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <system>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="serial">18eb7df8-f3ac-44d2-86c1-db7c0c913c53</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="uuid">18eb7df8-f3ac-44d2-86c1-db7c0c913c53</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </system>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <os>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </os>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <features>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </features>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk">
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config">
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:10 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ea:be:aa"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <target dev="tap0122a4be-9c"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/console.log" append="off"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <video>
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </video>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:09:10 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:09:10 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:09:10 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:09:10 compute-0 nova_compute[253661]: </domain>
Nov 22 09:09:10 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Preparing to wait for external event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG nova.virt.libvirt.vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:00Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.147 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.147 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.148 253665 DEBUG os_vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0122a4be-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0122a4be-9c, col_values=(('external_ids', {'iface-id': '0122a4be-9c10-4475-ba7d-5c818be52474', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:be:aa', 'vm-uuid': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:10 compute-0 NetworkManager[48920]: <info>  [1763802550.1566] manager: (tap0122a4be-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.164 253665 INFO os_vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c')
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.250 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.250 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.251 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No VIF found with MAC fa:16:3e:ea:be:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.252 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Using config drive
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.276 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.282 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.282 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:10 compute-0 nova_compute[253661]: 2025-11-22 09:09:10.302 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b-merged.mount: Deactivated successfully.
Nov 22 09:09:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 09:09:11 compute-0 ceph-mon[75021]: pgmap v1289: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 09:09:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4294778796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:11 compute-0 podman[279025]: 2025-11-22 09:09:11.098687415 +0000 UTC m=+2.129539761 container remove 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:09:11 compute-0 sudo[278882]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:11 compute-0 systemd[1]: libpod-conmon-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope: Deactivated successfully.
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.224 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating config drive at /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.228 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqsddqrq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:11 compute-0 sudo[279147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:09:11 compute-0 sudo[279147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:11 compute-0 sudo[279147]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:11 compute-0 sudo[279173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:09:11 compute-0 sudo[279173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:11 compute-0 sudo[279173]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.370 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqsddqrq1" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.379 253665 INFO nova.virt.libvirt.driver [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deleting instance files /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_del
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.380 253665 INFO nova.virt.libvirt.driver [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deletion of /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_del complete
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.412 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:11 compute-0 sudo[279200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.419 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:11 compute-0 sudo[279200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:11 compute-0 sudo[279200]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:11 compute-0 sudo[279256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:09:11 compute-0 sudo[279256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:11 compute-0 podman[279243]: 2025-11-22 09:09:11.517468488 +0000 UTC m=+0.087228748 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:09:11 compute-0 podman[279239]: 2025-11-22 09:09:11.532507963 +0000 UTC m=+0.102146230 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.629 253665 INFO nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 13.46 seconds to destroy the instance on the hypervisor.
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.632 253665 DEBUG oslo.service.loopingcall [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.633 253665 DEBUG nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.633 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.707 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.710 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deleting local config drive /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config because it was imported into RBD.
Nov 22 09:09:11 compute-0 kernel: tap0122a4be-9c: entered promiscuous mode
Nov 22 09:09:11 compute-0 NetworkManager[48920]: <info>  [1763802551.7851] manager: (tap0122a4be-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 22 09:09:11 compute-0 ovn_controller[152872]: 2025-11-22T09:09:11Z|00044|binding|INFO|Claiming lport 0122a4be-9c10-4475-ba7d-5c818be52474 for this chassis.
Nov 22 09:09:11 compute-0 ovn_controller[152872]: 2025-11-22T09:09:11Z|00045|binding|INFO|0122a4be-9c10-4475-ba7d-5c818be52474: Claiming fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:11 compute-0 systemd-machined[215941]: New machine qemu-15-instance-0000000e.
Nov 22 09:09:11 compute-0 systemd-udevd[279374]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:09:11 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 22 09:09:11 compute-0 NetworkManager[48920]: <info>  [1763802551.8537] device (tap0122a4be-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:09:11 compute-0 NetworkManager[48920]: <info>  [1763802551.8547] device (tap0122a4be-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:be:aa 10.100.0.6'], port_security=['fa:16:3e:ea:be:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0122a4be-9c10-4475-ba7d-5c818be52474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.857 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0122a4be-9c10-4475-ba7d-5c818be52474 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.859 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bad184a9-c748-4306-ba4d-d2abc5dc8754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.875 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbce72c95-f1 in ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.876 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.878 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbce72c95-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6099c2c8-57b2-4b54-b763-ff26aa31bbaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ee1cf-bedc-466f-9139-27e208ef738a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 ovn_controller[152872]: 2025-11-22T09:09:11Z|00046|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 ovn-installed in OVS
Nov 22 09:09:11 compute-0 ovn_controller[152872]: 2025-11-22T09:09:11Z|00047|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 up in Southbound
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.887 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.898 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7eef320e-ff3f-4350-b014-30e86fb8e051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.902 253665 INFO nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 0.27 seconds to deallocate network for instance.
Nov 22 09:09:11 compute-0 nova_compute[253661]: 2025-11-22 09:09:11.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.930 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c090813-bf9f-4760-a433-4c59924a8037]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.968 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[24097aef-70b6-4f98-8407-da6c6fcf8449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 NetworkManager[48920]: <info>  [1763802551.9762] manager: (tapbce72c95-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 22 09:09:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.974 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66dae6e0-c95e-4a9b-9e74-bae161516219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:11 compute-0 systemd-udevd[279377]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:09:11 compute-0 podman[279379]: 2025-11-22 09:09:11.996197486 +0000 UTC m=+0.100570222 container create c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:11.923381709 +0000 UTC m=+0.027754465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.021 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d51e05-8371-4b6d-b82f-9555752a941d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.025 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fb041a34-281a-4812-b462-f67bba61ec59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.029 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.029 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:12 compute-0 NetworkManager[48920]: <info>  [1763802552.0544] device (tapbce72c95-f0): carrier: link connected
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7728e761-e71e-4aa6-9131-90ef6e4f69ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.087 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aee3210f-f446-4645-b5ee-17ff96a22bc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279423, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 systemd[1]: Started libpod-conmon-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope.
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8fe476-7d3a-4356-8b77-489bd00d8272]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:ca12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540279, 'tstamp': 540279}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279439, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.116 253665 DEBUG oslo_concurrency.processutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.132 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58592137-bbd2-44de-a2ba-cbd611a14904]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279446, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:12.15213553 +0000 UTC m=+0.256508296 container init c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:12.162993424 +0000 UTC m=+0.267366170 container start c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:12.170082016 +0000 UTC m=+0.274454772 container attach c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:09:12 compute-0 systemd[1]: libpod-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope: Deactivated successfully.
Nov 22 09:09:12 compute-0 modest_noyce[279440]: 167 167
Nov 22 09:09:12 compute-0 conmon[279440]: conmon c69f8d4f2cbbb0e308d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope/container/memory.events
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:12.173489339 +0000 UTC m=+0.277862095 container died c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.185 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44f53f9a-3321-44be-9269-b3db2c0acb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-09aaef984562ae0088eb0fe709941dc24dbe0c893dc6f2367f6511a786f7150c-merged.mount: Deactivated successfully.
Nov 22 09:09:12 compute-0 podman[279379]: 2025-11-22 09:09:12.272641864 +0000 UTC m=+0.377014600 container remove c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:09:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:09:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:09:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:09:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.286 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b859d9-afc8-410b-b321-0f091d80004a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:12 compute-0 kernel: tapbce72c95-f0: entered promiscuous mode
Nov 22 09:09:12 compute-0 NetworkManager[48920]: <info>  [1763802552.2909] manager: (tapbce72c95-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:12 compute-0 systemd[1]: libpod-conmon-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope: Deactivated successfully.
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.297 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:12 compute-0 ovn_controller[152872]: 2025-11-22T09:09:12Z|00048|binding|INFO|Releasing lport 9b713871-83a7-42c2-9c01-d716fc099936 from this chassis (sb_readonly=0)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.302 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.305 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7deeb3e-2d1f-4962-911f-48afba3d3033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.306 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:09:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.307 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'env', 'PROCESS_TAG=haproxy-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bce72c95-f29f-458a-9b0e-7e700aa1deb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:09:12 compute-0 ceph-mon[75021]: pgmap v1290: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 09:09:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:09:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.334 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.3260155, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.334 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Started (Lifecycle Event)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.352 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.378 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.3263586, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.379 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Paused (Lifecycle Event)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.393 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.399 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.415 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:12 compute-0 podman[279521]: 2025-11-22 09:09:12.491090766 +0000 UTC m=+0.068578306 container create 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:09:12 compute-0 systemd[1]: Started libpod-conmon-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope.
Nov 22 09:09:12 compute-0 podman[279521]: 2025-11-22 09:09:12.463795984 +0000 UTC m=+0.041283514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579036546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:12 compute-0 podman[279521]: 2025-11-22 09:09:12.602752086 +0000 UTC m=+0.180239616 container init 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:09:12 compute-0 podman[279521]: 2025-11-22 09:09:12.61284078 +0000 UTC m=+0.190328280 container start 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.619 253665 DEBUG oslo_concurrency.processutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.629 253665 DEBUG nova.compute.provider_tree [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:12 compute-0 podman[279521]: 2025-11-22 09:09:12.632329743 +0000 UTC m=+0.209817263 container attach 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.644 253665 DEBUG nova.scheduler.client.report [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.653 253665 DEBUG nova.compute.manager [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG nova.compute.manager [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Processing event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.655 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.659 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.6588144, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.659 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Resumed (Lifecycle Event)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.660 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.664 253665 INFO nova.virt.libvirt.driver [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance spawned successfully.
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.664 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.675 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.684 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.685 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.685 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.686 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.686 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.687 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.691 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.722 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:12 compute-0 podman[279566]: 2025-11-22 09:09:12.750112771 +0000 UTC m=+0.100637543 container create c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:09:12 compute-0 nova_compute[253661]: 2025-11-22 09:09:12.765 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:12 compute-0 podman[279566]: 2025-11-22 09:09:12.682854489 +0000 UTC m=+0.033379291 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:09:12 compute-0 systemd[1]: Started libpod-conmon-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope.
Nov 22 09:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b439ac87fd05beec68cedfc1c5359f61b19c450f7ad63dcb47892d20e6a6cb9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:09:12 compute-0 podman[279566]: 2025-11-22 09:09:12.87325731 +0000 UTC m=+0.223782102 container init c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:09:12 compute-0 podman[279566]: 2025-11-22 09:09:12.886553473 +0000 UTC m=+0.237078285 container start c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:09:12 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : New worker (279588) forked
Nov 22 09:09:12 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : Loading success.
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.026 253665 INFO nova.scheduler.client.report [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Deleted allocations for instance 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.038 253665 INFO nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 11.80 seconds to spawn the instance on the hypervisor.
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.039 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 161 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.184 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.206 253665 INFO nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 13.98 seconds to build instance.
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.287 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/579036546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.395 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802538.3944967, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.396 253665 INFO nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Stopped (Lifecycle Event)
Nov 22 09:09:13 compute-0 nova_compute[253661]: 2025-11-22 09:09:13.413 253665 DEBUG nova.compute.manager [None req-64bdf937-0e87-4149-a211-770c01354e72 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:13 compute-0 brave_germain[279542]: {
Nov 22 09:09:13 compute-0 brave_germain[279542]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_id": 1,
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "type": "bluestore"
Nov 22 09:09:13 compute-0 brave_germain[279542]:     },
Nov 22 09:09:13 compute-0 brave_germain[279542]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_id": 0,
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "type": "bluestore"
Nov 22 09:09:13 compute-0 brave_germain[279542]:     },
Nov 22 09:09:13 compute-0 brave_germain[279542]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_id": 2,
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:09:13 compute-0 brave_germain[279542]:         "type": "bluestore"
Nov 22 09:09:13 compute-0 brave_germain[279542]:     }
Nov 22 09:09:13 compute-0 brave_germain[279542]: }
Nov 22 09:09:13 compute-0 systemd[1]: libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Deactivated successfully.
Nov 22 09:09:13 compute-0 systemd[1]: libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Consumed 1.123s CPU time.
Nov 22 09:09:13 compute-0 conmon[279542]: conmon 00d5d9ed3fef89ff07b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope/container/memory.events
Nov 22 09:09:13 compute-0 podman[279521]: 2025-11-22 09:09:13.785406937 +0000 UTC m=+1.362894447 container died 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb-merged.mount: Deactivated successfully.
Nov 22 09:09:13 compute-0 podman[279521]: 2025-11-22 09:09:13.889043092 +0000 UTC m=+1.466530602 container remove 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:09:13 compute-0 systemd[1]: libpod-conmon-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Deactivated successfully.
Nov 22 09:09:13 compute-0 sudo[279256]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:09:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:09:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:09:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:09:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 21cba8f6-d6d8-4d0f-88a6-5ac45bc3894e does not exist
Nov 22 09:09:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2e1c2049-4ec7-4c27-b58a-ab555280a7d8 does not exist
Nov 22 09:09:14 compute-0 sudo[279636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:09:14 compute-0 sudo[279636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:14 compute-0 sudo[279636]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:14 compute-0 sudo[279661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:09:14 compute-0 sudo[279661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:09:14 compute-0 sudo[279661]: pam_unix(sudo:session): session closed for user root
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154894827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.790 253665 DEBUG nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.790 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.791 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.791 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.792 253665 DEBUG nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.792 253665 WARNING nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received unexpected event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with vm_state active and task_state None.
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.807 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.808 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.824 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:14 compute-0 nova_compute[253661]: 2025-11-22 09:09:14.825 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 09:09:14 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 13.714s CPU time.
Nov 22 09:09:14 compute-0 systemd-machined[215941]: Machine qemu-13-instance-0000000c terminated.
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.030 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance shutdown successfully after 17 seconds.
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.040 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.
Nov 22 09:09:15 compute-0 ceph-mon[75021]: pgmap v1291: 305 pgs: 305 active+clean; 161 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 22 09:09:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:09:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:09:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3154894827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.050 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.
Nov 22 09:09:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.364 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=59.922393798828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.427 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c233bbff-b2e9-442f-818d-e8487dee1c3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.429 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.478 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.820 253665 DEBUG nova.compute.manager [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG nova.compute.manager [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:09:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474440318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.942 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.948 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:15 compute-0 nova_compute[253661]: 2025-11-22 09:09:15.964 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:16 compute-0 nova_compute[253661]: 2025-11-22 09:09:16.013 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:09:16 compute-0 nova_compute[253661]: 2025-11-22 09:09:16.014 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2474440318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:16 compute-0 podman[279752]: 2025-11-22 09:09:16.424035051 +0000 UTC m=+0.108247948 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.006 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.007 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:09:17 compute-0 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:17 compute-0 ceph-mon[75021]: pgmap v1292: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.014 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.016 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete
Nov 22 09:09:18 compute-0 ceph-mon[75021]: pgmap v1293: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.456 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.458 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.480 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.504 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.529 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.534 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.599 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.600 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.601 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.602 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.624 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.628 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.929 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:18 compute-0 nova_compute[253661]: 2025-11-22 09:09:18.997 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.051 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:09:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.147 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.148 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.148 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.149 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.149 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.150 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.153 253665 WARNING nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.159 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.159 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.172 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.176 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'vcpu_model' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.193 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.237 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.259 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.259 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.393 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254550709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.668 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.686 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.690 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.972 253665 DEBUG nova.compute.manager [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.973 253665 DEBUG nova.compute.manager [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:19 compute-0 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:09:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260095218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.128 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.131 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <name>instance-0000000c</name>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:09:19</nova:creationTime>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <system>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </system>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <os>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </os>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <features>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </features>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <video>
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </video>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:09:20 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:09:20 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:09:20 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:09:20 compute-0 nova_compute[253661]: </domain>
Nov 22 09:09:20 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.180 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.181 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.181 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.204 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.227 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'ec2_ids' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.287 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'keypairs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:20 compute-0 ceph-mon[75021]: pgmap v1294: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Nov 22 09:09:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4254550709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3260095218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.790 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.795 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuon41cky execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.925 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuon41cky" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.951 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:20 compute-0 nova_compute[253661]: 2025-11-22 09:09:20.954 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 118 op/s
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.119 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.121 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.
Nov 22 09:09:21 compute-0 systemd-machined[215941]: New machine qemu-16-instance-0000000c.
Nov 22 09:09:21 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000c.
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for c233bbff-b2e9-442f-818d-e8487dee1c3e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802561.5697138, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.573 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.573 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.577 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.578 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.605 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.607 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.607 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.608 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.611 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.637 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.638 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802561.5725129, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.639 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.659 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.675 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.768 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.839 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.840 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.841 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:09:21 compute-0 nova_compute[253661]: 2025-11-22 09:09:21.899 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.171 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.172 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.187 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:22 compute-0 ceph-mon[75021]: pgmap v1295: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 118 op/s
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.389 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.391 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.391 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.393 253665 INFO nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Terminating instance
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquired lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:22 compute-0 nova_compute[253661]: 2025-11-22 09:09:22.872 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 105 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 784 KiB/s wr, 129 op/s
Nov 22 09:09:23 compute-0 nova_compute[253661]: 2025-11-22 09:09:23.420 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:23 compute-0 nova_compute[253661]: 2025-11-22 09:09:23.433 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Releasing lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:23 compute-0 nova_compute[253661]: 2025-11-22 09:09:23.433 253665 DEBUG nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:09:23 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 09:09:23 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000c.scope: Consumed 2.276s CPU time.
Nov 22 09:09:23 compute-0 systemd-machined[215941]: Machine qemu-16-instance-0000000c terminated.
Nov 22 09:09:23 compute-0 nova_compute[253661]: 2025-11-22 09:09:23.861 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.
Nov 22 09:09:23 compute-0 nova_compute[253661]: 2025-11-22 09:09:23.864 253665 DEBUG nova.objects.instance [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:24 compute-0 ceph-mon[75021]: pgmap v1296: 305 pgs: 305 active+clean; 105 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 784 KiB/s wr, 129 op/s
Nov 22 09:09:24 compute-0 nova_compute[253661]: 2025-11-22 09:09:24.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 205 op/s
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.266 253665 INFO nova.virt.libvirt.driver [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.267 253665 INFO nova.virt.libvirt.driver [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.507 253665 INFO nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 2.07 seconds to destroy the instance on the hypervisor.
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG oslo.service.loopingcall [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.688 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.696 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.707 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 0.20 seconds to deallocate network for instance.
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.751 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.752 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:25 compute-0 nova_compute[253661]: 2025-11-22 09:09:25.819 253665 DEBUG oslo_concurrency.processutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:26 compute-0 ovn_controller[152872]: 2025-11-22T09:09:26Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 09:09:26 compute-0 ovn_controller[152872]: 2025-11-22T09:09:26Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 09:09:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002464113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.324 253665 DEBUG oslo_concurrency.processutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.333 253665 DEBUG nova.compute.provider_tree [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.352 253665 DEBUG nova.scheduler.client.report [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.459 253665 INFO nova.scheduler.client.report [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Deleted allocations for instance c233bbff-b2e9-442f-818d-e8487dee1c3e
Nov 22 09:09:26 compute-0 ceph-mon[75021]: pgmap v1297: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 205 op/s
Nov 22 09:09:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2002464113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:26 compute-0 nova_compute[253661]: 2025-11-22 09:09:26.543 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 126 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 205 op/s
Nov 22 09:09:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.954 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.955 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:28 compute-0 ceph-mon[75021]: pgmap v1298: 305 pgs: 305 active+clean; 126 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 205 op/s
Nov 22 09:09:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 09:09:29 compute-0 nova_compute[253661]: 2025-11-22 09:09:29.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:30 compute-0 nova_compute[253661]: 2025-11-22 09:09:30.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:30.405 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:30 compute-0 nova_compute[253661]: 2025-11-22 09:09:30.405 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:30.407 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:09:30 compute-0 ceph-mon[75021]: pgmap v1299: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 09:09:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Nov 22 09:09:32 compute-0 ceph-mon[75021]: pgmap v1300: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Nov 22 09:09:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 09:09:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:34.411 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:34 compute-0 nova_compute[253661]: 2025-11-22 09:09:34.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 22 09:09:34 compute-0 ceph-mon[75021]: pgmap v1301: 305 pgs: 305 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 09:09:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 22 09:09:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 22 09:09:35 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 22 09:09:35 compute-0 nova_compute[253661]: 2025-11-22 09:09:35.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:36 compute-0 ceph-mon[75021]: pgmap v1302: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 22 09:09:36 compute-0 ceph-mon[75021]: osdmap e181: 3 total, 3 up, 3 in
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.854 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.854 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.877 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.957 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.958 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.965 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:09:36 compute-0 nova_compute[253661]: 2025-11-22 09:09:36.965 253665 INFO nova.compute.claims [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.061 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.9 MiB/s wr, 94 op/s
Nov 22 09:09:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016299636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.615 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.624 253665 DEBUG nova.compute.provider_tree [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.639 253665 DEBUG nova.scheduler.client.report [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.658 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.659 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.700 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.700 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.718 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.733 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.813 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.815 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.815 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating image(s)
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.839 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.868 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.897 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.903 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.949 253665 DEBUG nova.policy [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee24e4812c424984881862883987d750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5879249ab50a40ec9553bc923bdd1042', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.974 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.975 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.975 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:37 compute-0 nova_compute[253661]: 2025-11-22 09:09:37.976 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.002 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.007 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5f708d0-4110-417f-8353-dc61992d22dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:38 compute-0 ceph-mon[75021]: pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.9 MiB/s wr, 94 op/s
Nov 22 09:09:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3016299636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.544 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5f708d0-4110-417f-8353-dc61992d22dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.577 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Successfully created port: 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.616 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] resizing rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.740 253665 DEBUG nova.objects.instance [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'migration_context' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.752 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.753 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Ensure instance console log exists: /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.753 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.754 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.754 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.860 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802563.8584688, c233bbff-b2e9-442f-818d-e8487dee1c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.860 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Stopped (Lifecycle Event)
Nov 22 09:09:38 compute-0 nova_compute[253661]: 2025-11-22 09:09:38.882 253665 DEBUG nova.compute.manager [None req-12376c83-52d3-4afe-915c-27c1ee3f3ecf - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 22 op/s
Nov 22 09:09:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 22 09:09:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 22 09:09:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 22 09:09:39 compute-0 nova_compute[253661]: 2025-11-22 09:09:39.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:40 compute-0 ceph-mon[75021]: pgmap v1305: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 22 op/s
Nov 22 09:09:40 compute-0 ceph-mon[75021]: osdmap e182: 3 total, 3 up, 3 in
Nov 22 09:09:40 compute-0 nova_compute[253661]: 2025-11-22 09:09:40.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:40 compute-0 nova_compute[253661]: 2025-11-22 09:09:40.806 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Successfully updated port: 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:09:40 compute-0 nova_compute[253661]: 2025-11-22 09:09:40.821 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:40 compute-0 nova_compute[253661]: 2025-11-22 09:09:40.821 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:40 compute-0 nova_compute[253661]: 2025-11-22 09:09:40.822 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.016 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 72 KiB/s wr, 20 op/s
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG nova.compute.manager [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG nova.compute.manager [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.695 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.696 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.718 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.793 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.794 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.801 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.801 253665 INFO nova.compute.claims [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.936 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.968 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.988 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.988 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance network_info: |[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.989 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.989 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.992 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start _get_guest_xml network_info=[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:09:41 compute-0 nova_compute[253661]: 2025-11-22 09:09:41.998 253665 WARNING nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.004 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.005 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.016 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.023 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:42 compute-0 ceph-mon[75021]: pgmap v1307: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 72 KiB/s wr, 20 op/s
Nov 22 09:09:42 compute-0 podman[280394]: 2025-11-22 09:09:42.373402138 +0000 UTC m=+0.061617416 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:09:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228160723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:42 compute-0 podman[280395]: 2025-11-22 09:09:42.396844327 +0000 UTC m=+0.079626383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.414 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.421 253665 DEBUG nova.compute.provider_tree [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.449 253665 DEBUG nova.scheduler.client.report [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156589561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.470 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.471 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.487 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.515 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.520 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.553 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.554 253665 DEBUG nova.network.neutron [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.574 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.591 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.677 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.679 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.679 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating image(s)
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.707 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.737 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.762 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.768 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.839 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.840 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.841 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.841 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.863 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.869 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d6aea4a7-7722-4565-8c76-6d257dcc5362_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122168536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.993 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.995 253665 DEBUG nova.virt.libvirt.vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:37Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.996 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.997 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:42 compute-0 nova_compute[253661]: 2025-11-22 09:09:42.998 253665 DEBUG nova.objects.instance [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.016 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <uuid>c5f708d0-4110-417f-8353-dc61992d22dc</uuid>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <name>instance-0000000f</name>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:name>tempest-SecurityGroupsTestJSON-server-782283387</nova:name>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:09:41</nova:creationTime>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <nova:port uuid="2bc1ef13-abf9-49ce-b3bb-41d737b9cd13">
Nov 22 09:09:43 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <system>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="serial">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="uuid">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </system>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <os>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </os>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <features>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </features>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk">
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk.config">
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:2b:74:48"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <target dev="tap2bc1ef13-ab"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log" append="off"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <video>
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </video>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:09:43 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:09:43 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:09:43 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:09:43 compute-0 nova_compute[253661]: </domain>
Nov 22 09:09:43 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.017 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Preparing to wait for external event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.018 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.019 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.019 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.020 253665 DEBUG nova.virt.libvirt.vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:37Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.020 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.021 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.022 253665 DEBUG os_vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.023 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.024 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.030 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bc1ef13-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.030 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bc1ef13-ab, col_values=(('external_ids', {'iface-id': '2bc1ef13-abf9-49ce-b3bb-41d737b9cd13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:74:48', 'vm-uuid': 'c5f708d0-4110-417f-8353-dc61992d22dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:43 compute-0 NetworkManager[48920]: <info>  [1763802583.0330] manager: (tap2bc1ef13-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.034 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.041 253665 INFO os_vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.078 253665 DEBUG nova.network.neutron [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.078 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:09:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 138 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 689 KiB/s wr, 57 op/s
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.193 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.194 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.194 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No VIF found with MAC fa:16:3e:2b:74:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.195 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Using config drive
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.215 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.369 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.370 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.383 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4228160723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/156589561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4122168536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.551 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d6aea4a7-7722-4565-8c76-6d257dcc5362_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.623 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] resizing rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.770 253665 DEBUG nova.objects.instance [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'migration_context' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.784 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.784 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Ensure instance console log exists: /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.787 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.793 253665 WARNING nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.800 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.801 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.804 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.806 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.806 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.809 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.812 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.919 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating config drive at /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config
Nov 22 09:09:43 compute-0 nova_compute[253661]: 2025-11-22 09:09:43.927 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprqhspvri execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.062 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprqhspvri" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.087 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.092 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config c5f708d0-4110-417f-8353-dc61992d22dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884135950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.265 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.300 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.308 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:44 compute-0 ceph-mon[75021]: pgmap v1308: 305 pgs: 305 active+clean; 138 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 689 KiB/s wr, 57 op/s
Nov 22 09:09:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/884135950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804180269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.834 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config c5f708d0-4110-417f-8353-dc61992d22dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.835 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deleting local config drive /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config because it was imported into RBD.
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.842 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.843 253665 DEBUG nova.objects.instance [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'pci_devices' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.855 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <uuid>d6aea4a7-7722-4565-8c76-6d257dcc5362</uuid>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <name>instance-00000010</name>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiagnosticsTest-server-1370620633</nova:name>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:09:43</nova:creationTime>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:user uuid="79b4e753165d476ea489590d62266a4d">tempest-ServerDiagnosticsTest-1737322539-project-member</nova:user>
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <nova:project uuid="573fbf80a5d94f8f841634d42a3bd35a">tempest-ServerDiagnosticsTest-1737322539</nova:project>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <system>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="serial">d6aea4a7-7722-4565-8c76-6d257dcc5362</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="uuid">d6aea4a7-7722-4565-8c76-6d257dcc5362</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </system>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <os>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </os>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <features>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </features>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d6aea4a7-7722-4565-8c76-6d257dcc5362_disk">
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config">
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:44 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/console.log" append="off"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <video>
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </video>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:09:44 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:09:44 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:09:44 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:09:44 compute-0 nova_compute[253661]: </domain>
Nov 22 09:09:44 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:09:44 compute-0 kernel: tap2bc1ef13-ab: entered promiscuous mode
Nov 22 09:09:44 compute-0 NetworkManager[48920]: <info>  [1763802584.8946] manager: (tap2bc1ef13-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 22 09:09:44 compute-0 ovn_controller[152872]: 2025-11-22T09:09:44Z|00049|binding|INFO|Claiming lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for this chassis.
Nov 22 09:09:44 compute-0 ovn_controller[152872]: 2025-11-22T09:09:44Z|00050|binding|INFO|2bc1ef13-abf9-49ce-b3bb-41d737b9cd13: Claiming fa:16:3e:2b:74:48 10.100.0.8
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.909 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.912 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.917 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.917 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.918 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Using config drive
Nov 22 09:09:44 compute-0 ovn_controller[152872]: 2025-11-22T09:09:44Z|00051|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 ovn-installed in OVS
Nov 22 09:09:44 compute-0 ovn_controller[152872]: 2025-11-22T09:09:44Z|00052|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 up in Southbound
Nov 22 09:09:44 compute-0 systemd-udevd[280784]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:09:44 compute-0 systemd-machined[215941]: New machine qemu-17-instance-0000000f.
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.936 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f047f832-acc1-494d-bd83-de7f9c5909e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:44 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Nov 22 09:09:44 compute-0 NetworkManager[48920]: <info>  [1763802584.9524] device (tap2bc1ef13-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:09:44 compute-0 NetworkManager[48920]: <info>  [1763802584.9532] device (tap2bc1ef13-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.956 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:44 compute-0 nova_compute[253661]: 2025-11-22 09:09:44.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.973 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f1025c49-0722-48d9-95f7-6f9208ac0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.978 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[228abb60-2fb6-45d8-9e14-c80d42462594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.009 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cb6014-6847-434d-9558-1a54c4c9cf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.028 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be98bb7c-07a9-4d2b-bbb2-d36c99f94274]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280807, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06abecf0-8897-402a-b8fd-eaae463ab121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280809, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280809, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.048 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.053 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.115 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating config drive at /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.119 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_fsbpk6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.225 253665 DEBUG nova.compute.manager [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG nova.compute.manager [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Processing event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.251 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_fsbpk6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.280 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.285 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.580 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.582 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5817897, c5f708d0-4110-417f-8353-dc61992d22dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.582 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Started (Lifecycle Event)
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.585 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.588 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance spawned successfully.
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.588 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.598 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.610 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.612 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.612 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5818622, c5f708d0-4110-417f-8353-dc61992d22dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Paused (Lifecycle Event)
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.639 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.643 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5849652, c5f708d0-4110-417f-8353-dc61992d22dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.643 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Resumed (Lifecycle Event)
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.659 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.662 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.680 253665 INFO nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 7.87 seconds to spawn the instance on the hypervisor.
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.680 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.682 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2804180269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.754 253665 INFO nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 8.83 seconds to build instance.
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.771 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.809 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:45 compute-0 nova_compute[253661]: 2025-11-22 09:09:45.810 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deleting local config drive /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config because it was imported into RBD.
Nov 22 09:09:45 compute-0 systemd-machined[215941]: New machine qemu-18-instance-00000010.
Nov 22 09:09:45 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.446 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802586.4455445, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Resumed (Lifecycle Event)
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.450 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.450 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.455 253665 INFO nova.virt.libvirt.driver [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance spawned successfully.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.455 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.477 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.480 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.480 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.516 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.517 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802586.4471738, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Started (Lifecycle Event)
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.541 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.545 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.551 253665 INFO nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 3.87 seconds to spawn the instance on the hypervisor.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.552 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.609 253665 INFO nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 4.84 seconds to build instance.
Nov 22 09:09:46 compute-0 nova_compute[253661]: 2025-11-22 09:09:46.624 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:46 compute-0 ceph-mon[75021]: pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 22 09:09:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 185 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.298 253665 DEBUG nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.299 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.299 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 DEBUG nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:47 compute-0 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 WARNING nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.
Nov 22 09:09:47 compute-0 podman[280949]: 2025-11-22 09:09:47.413722867 +0000 UTC m=+0.102766675 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.295 253665 DEBUG nova.compute.manager [None req-2bcdfe52-6137-445b-b8b1-ae7bd5d47d83 385bfa70f94e4e118f9fa6d567f40b2f 3259232df7614dd094a8c6c8c274c0fe - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.301 253665 INFO nova.compute.manager [None req-2bcdfe52-6137-445b-b8b1-ae7bd5d47d83 385bfa70f94e4e118f9fa6d567f40b2f 3259232df7614dd094a8c6c8c274c0fe - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Retrieving diagnostics
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.469 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.472 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.473 253665 INFO nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Terminating instance
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquired lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:48 compute-0 nova_compute[253661]: 2025-11-22 09:09:48.867 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:48 compute-0 ceph-mon[75021]: pgmap v1310: 305 pgs: 305 active+clean; 185 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Nov 22 09:09:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.235 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.248 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Releasing lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.250 253665 DEBUG nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:09:49 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 22 09:09:49 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 3.423s CPU time.
Nov 22 09:09:49 compute-0 systemd-machined[215941]: Machine qemu-18-instance-00000010 terminated.
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.470 253665 INFO nova.virt.libvirt.driver [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance destroyed successfully.
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.471 253665 DEBUG nova.objects.instance [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'resources' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.476 253665 DEBUG nova.compute.manager [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.476 253665 DEBUG nova.compute.manager [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:09:49 compute-0 nova_compute[253661]: 2025-11-22 09:09:49.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 22 09:09:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 22 09:09:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.162 253665 INFO nova.virt.libvirt.driver [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deleting instance files /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362_del
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.163 253665 INFO nova.virt.libvirt.driver [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deletion of /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362_del complete
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.297 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.298 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.298 253665 INFO nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Rebooting instance
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.307 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 INFO nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 1.08 seconds to destroy the instance on the hypervisor.
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 DEBUG oslo.service.loopingcall [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 DEBUG nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.333 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.521 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.535 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.550 253665 INFO nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 0.22 seconds to deallocate network for instance.
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.597 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.598 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:50 compute-0 ceph-mon[75021]: pgmap v1311: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 09:09:50 compute-0 ceph-mon[75021]: osdmap e183: 3 total, 3 up, 3 in
Nov 22 09:09:50 compute-0 nova_compute[253661]: 2025-11-22 09:09:50.683 253665 DEBUG oslo_concurrency.processutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 09:09:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620928968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.162 253665 DEBUG oslo_concurrency.processutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.169 253665 DEBUG nova.compute.provider_tree [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.184 253665 DEBUG nova.scheduler.client.report [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.207 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.235 253665 INFO nova.scheduler.client.report [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Deleted allocations for instance d6aea4a7-7722-4565-8c76-6d257dcc5362
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.293 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.372 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.372 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.389 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.390 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:51 compute-0 nova_compute[253661]: 2025-11-22 09:09:51.390 253665 DEBUG nova.network.neutron [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1620928968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:09:52
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', '.mgr']
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:09:52 compute-0 ceph-mon[75021]: pgmap v1313: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.7 MiB/s wr, 238 op/s
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.820 253665 DEBUG nova.network.neutron [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.836 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.837 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:53 compute-0 kernel: tap2bc1ef13-ab (unregistering): left promiscuous mode
Nov 22 09:09:53 compute-0 NetworkManager[48920]: <info>  [1763802593.9692] device (tap2bc1ef13-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:09:53 compute-0 ovn_controller[152872]: 2025-11-22T09:09:53Z|00053|binding|INFO|Releasing lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 from this chassis (sb_readonly=0)
Nov 22 09:09:53 compute-0 ovn_controller[152872]: 2025-11-22T09:09:53Z|00054|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 down in Southbound
Nov 22 09:09:53 compute-0 ovn_controller[152872]: 2025-11-22T09:09:53Z|00055|binding|INFO|Removing iface tap2bc1ef13-ab ovn-installed in OVS
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:53 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.986 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.988 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis
Nov 22 09:09:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.989 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:53.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[492235bb-4513-4aa3-a661-5bcb50ab447e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 09:09:54 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 8.863s CPU time.
Nov 22 09:09:54 compute-0 systemd-machined[215941]: Machine qemu-17-instance-0000000f terminated.
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.052 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[349cb55d-928d-449d-be8a-5daf1520c57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.057 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a963d17-b957-45f9-9524-96c81f13b30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fa049ca8-bfa9-41ee-a36a-988466fa508e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.111 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a97cea0f-dc8b-4c1e-b226-36f14dab8b88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281032, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3979ff5f-1a79-47e7-833c-470818b0a611]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281033, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281033, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.156 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.157 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.157 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.158 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.159 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.183 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance destroyed successfully.
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.184 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.199 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.200 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.200 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.201 253665 DEBUG os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.203 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc1ef13-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.210 253665 INFO os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.217 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start _get_guest_xml network_info=[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.221 253665 WARNING nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.226 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.226 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.230 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.230 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.231 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.231 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.252 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:54 compute-0 ceph-mon[75021]: pgmap v1314: 305 pgs: 305 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.7 MiB/s wr, 238 op/s
Nov 22 09:09:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532047322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.713 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.759 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.791 253665 DEBUG nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.792 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.792 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 DEBUG nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:54 compute-0 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 WARNING nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state reboot_started_hard.
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:09:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:09:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.146 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:09:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:09:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681901186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.227 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.228 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.235 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.235 253665 INFO nova.compute.claims [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.242 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.243 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.243 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.244 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.245 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.268 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <uuid>c5f708d0-4110-417f-8353-dc61992d22dc</uuid>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <name>instance-0000000f</name>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:name>tempest-SecurityGroupsTestJSON-server-782283387</nova:name>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:09:54</nova:creationTime>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <nova:port uuid="2bc1ef13-abf9-49ce-b3bb-41d737b9cd13">
Nov 22 09:09:55 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <system>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="serial">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="uuid">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </system>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <os>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </os>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <features>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </features>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk">
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk.config">
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </source>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:09:55 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:2b:74:48"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <target dev="tap2bc1ef13-ab"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log" append="off"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <video>
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </video>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:09:55 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:09:55 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:09:55 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:09:55 compute-0 nova_compute[253661]: </domain>
Nov 22 09:09:55 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.271 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.271 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.272 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.273 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.274 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.274 253665 DEBUG os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.275 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.276 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bc1ef13-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.280 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bc1ef13-ab, col_values=(('external_ids', {'iface-id': '2bc1ef13-abf9-49ce-b3bb-41d737b9cd13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:74:48', 'vm-uuid': 'c5f708d0-4110-417f-8353-dc61992d22dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 NetworkManager[48920]: <info>  [1763802595.2829] manager: (tap2bc1ef13-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.290 253665 INFO os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')
Nov 22 09:09:55 compute-0 virtqemud[254229]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 09:09:55 compute-0 virtqemud[254229]: hostname: compute-0
Nov 22 09:09:55 compute-0 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 09:09:55 compute-0 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 09:09:55 compute-0 systemd-udevd[281025]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:09:55 compute-0 kernel: tap2bc1ef13-ab: entered promiscuous mode
Nov 22 09:09:55 compute-0 NetworkManager[48920]: <info>  [1763802595.3800] manager: (tap2bc1ef13-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.383 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:55 compute-0 ovn_controller[152872]: 2025-11-22T09:09:55Z|00056|binding|INFO|Claiming lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for this chassis.
Nov 22 09:09:55 compute-0 ovn_controller[152872]: 2025-11-22T09:09:55Z|00057|binding|INFO|2bc1ef13-abf9-49ce-b3bb-41d737b9cd13: Claiming fa:16:3e:2b:74:48 10.100.0.8
Nov 22 09:09:55 compute-0 NetworkManager[48920]: <info>  [1763802595.3926] device (tap2bc1ef13-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:09:55 compute-0 NetworkManager[48920]: <info>  [1763802595.3937] device (tap2bc1ef13-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.394 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.395 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.397 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:09:55 compute-0 ovn_controller[152872]: 2025-11-22T09:09:55Z|00058|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 ovn-installed in OVS
Nov 22 09:09:55 compute-0 ovn_controller[152872]: 2025-11-22T09:09:55Z|00059|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 up in Southbound
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e206640f-4a8d-47df-8d63-0a024745d0e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 systemd-machined[215941]: New machine qemu-19-instance-0000000f.
Nov 22 09:09:55 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-0000000f.
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.459 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d30401d6-b430-4938-a917-a8a1741b2737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbed0b8-b2e9-4496-8c5b-d28031a515ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.494 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfc8b43-41eb-4e67-9bff-4a90417ed1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.520 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc4e7ca-9569-49f9-87ee-bf158a2580b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281134, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a47b2-1076-43fe-937f-c6713e5fffac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281154, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281154, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.545 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.548 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:09:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1532047322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/681901186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:09:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:09:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824317784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.857 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.864 253665 DEBUG nova.compute.provider_tree [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.874 253665 DEBUG nova.scheduler.client.report [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.970 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:55 compute-0 nova_compute[253661]: 2025-11-22 09:09:55.972 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.020 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.020 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.077 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.168 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.410 253665 DEBUG nova.policy [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '526789957ca1421b94691426dc7bccb5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef6e238d438c49959eb8bee112836e52', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.425 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.427 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.427 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating image(s)
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.449 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.471 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.492 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.496 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.560 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.561 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.562 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.563 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.586 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.591 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 04781543-b5ed-482a-a30a-0730fbcd12a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:09:56 compute-0 ceph-mon[75021]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Nov 22 09:09:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2824317784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.794 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.795 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for c5f708d0-4110-417f-8353-dc61992d22dc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.796 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802596.7885516, c5f708d0-4110-417f-8353-dc61992d22dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.799 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Resumed (Lifecycle Event)
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.811 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance rebooted successfully.
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.812 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.824 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.826 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.847 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.847 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802596.7887912, c5f708d0-4110-417f-8353-dc61992d22dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.848 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Started (Lifecycle Event)
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.865 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.867 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.871 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:09:56 compute-0 nova_compute[253661]: 2025-11-22 09:09:56.939 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 04781543-b5ed-482a-a30a-0730fbcd12a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.001 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.002 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.002 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.003 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.003 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.007 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.007 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.008 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.015 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] resizing rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:09:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 213 op/s
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.182 253665 DEBUG nova.objects.instance [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'migration_context' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.194 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.195 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Ensure instance console log exists: /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.195 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:57 compute-0 nova_compute[253661]: 2025-11-22 09:09:57.913 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Successfully created port: e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:09:58 compute-0 ceph-mon[75021]: pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 213 op/s
Nov 22 09:09:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.647 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Successfully updated port: e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.661 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.662 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.662 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:09:59 compute-0 nova_compute[253661]: 2025-11-22 09:09:59.865 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:00 compute-0 nova_compute[253661]: 2025-11-22 09:10:00.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:00 compute-0 nova_compute[253661]: 2025-11-22 09:10:00.313 253665 DEBUG nova.compute.manager [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:00 compute-0 nova_compute[253661]: 2025-11-22 09:10:00.314 253665 DEBUG nova.compute.manager [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:00 compute-0 nova_compute[253661]: 2025-11-22 09:10:00.314 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:00 compute-0 ceph-mon[75021]: pgmap v1317: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.058 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.059 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.075 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.080 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 971 KiB/s wr, 118 op/s
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.115 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.116 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance network_info: |[{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.117 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.118 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.124 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start _get_guest_xml network_info=[{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.154 253665 WARNING nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.161 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.162 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.165 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.172 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.205 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.207 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.218 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.219 253665 INFO nova.compute.claims [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.413 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718560103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.679 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.703 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.708 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1718560103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073061718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.926 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.933 253665 DEBUG nova.compute.provider_tree [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.946 253665 DEBUG nova.scheduler.client.report [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.967 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:01 compute-0 nova_compute[253661]: 2025-11-22 09:10:01.968 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.019 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.019 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.042 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.064 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849461507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.156 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.159 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.160 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.189 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.217 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.240 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.246 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.271 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.274 253665 DEBUG nova.virt.libvirt.vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.275 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.276 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.278 253665 DEBUG nova.objects.instance [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.294 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <uuid>04781543-b5ed-482a-a30a-0730fbcd12a1</uuid>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <name>instance-00000011</name>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-785600448</nova:name>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:01</nova:creationTime>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:user uuid="526789957ca1421b94691426dc7bccb5">tempest-FloatingIPsAssociationTestJSON-1882113079-project-member</nova:user>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:project uuid="ef6e238d438c49959eb8bee112836e52">tempest-FloatingIPsAssociationTestJSON-1882113079</nova:project>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <nova:port uuid="e7682709-05fd-4d27-bd49-1a84e1cf6bd3">
Nov 22 09:10:02 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="serial">04781543-b5ed-482a-a30a-0730fbcd12a1</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="uuid">04781543-b5ed-482a-a30a-0730fbcd12a1</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/04781543-b5ed-482a-a30a-0730fbcd12a1_disk">
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config">
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:5e:ea:eb"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <target dev="tape7682709-05"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/console.log" append="off"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:02 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:02 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:02 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:02 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:02 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Preparing to wait for external event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.297 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.297 253665 DEBUG nova.virt.libvirt.vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.298 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.298 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.299 253665 DEBUG os_vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.300 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.301 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7682709-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.304 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7682709-05, col_values=(('external_ids', {'iface-id': 'e7682709-05fd-4d27-bd49-1a84e1cf6bd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:ea:eb', 'vm-uuid': '04781543-b5ed-482a-a30a-0730fbcd12a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:02 compute-0 NetworkManager[48920]: <info>  [1763802602.3070] manager: (tape7682709-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.311 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.311 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.312 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.312 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.337 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.341 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.371 253665 INFO os_vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05')
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.430 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.430 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.431 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No VIF found with MAC fa:16:3e:5e:ea:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.431 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Using config drive
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.465 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.536 253665 DEBUG nova.policy [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012833240745262747 of space, bias 1.0, pg target 0.3849972223578824 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:10:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.678 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.747 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:02 compute-0 ceph-mon[75021]: pgmap v1318: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 971 KiB/s wr, 118 op/s
Nov 22 09:10:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1073061718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2849461507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.858 253665 DEBUG nova.objects.instance [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.875 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.875 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.876 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.876 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:02 compute-0 nova_compute[253661]: 2025-11-22 09:10:02.877 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.158 253665 DEBUG nova.compute.manager [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.158 253665 DEBUG nova.compute.manager [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.335 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating config drive at /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.341 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuilv0d8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.416 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.417 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.435 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.478 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuilv0d8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.521 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.526 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.567 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Successfully created port: 716b716d-2ee2-44e7-9850-c10854634f77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.593 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.596 253665 INFO nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Terminating instance
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.597 253665 DEBUG nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:10:03 compute-0 kernel: tap2bc1ef13-ab (unregistering): left promiscuous mode
Nov 22 09:10:03 compute-0 NetworkManager[48920]: <info>  [1763802603.6497] device (tap2bc1ef13-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00060|binding|INFO|Releasing lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 from this chassis (sb_readonly=0)
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00061|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 down in Southbound
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00062|binding|INFO|Removing iface tap2bc1ef13-ab ovn-installed in OVS
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.710 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 8db4c515-712a-46df-b14c-6a11222f6f3f 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.713 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.716 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 09:10:03 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 09:10:03 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000000f.scope: Consumed 7.727s CPU time.
Nov 22 09:10:03 compute-0 systemd-machined[215941]: Machine qemu-19-instance-0000000f terminated.
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cefbc04-5ef5-4aca-95ab-ac6c6412f8e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.745 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.746 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deleting local config drive /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config because it was imported into RBD.
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.784 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae4d304-12e5-4b11-9b62-1e961f4f944c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.789 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[00ab4103-0240-4334-b309-8c829878b700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 kernel: tape7682709-05: entered promiscuous mode
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[37218518-613b-46da-b79d-75ff3d963f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 NetworkManager[48920]: <info>  [1763802603.8330] manager: (tape7682709-05): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00063|binding|INFO|Claiming lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 for this chassis.
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00064|binding|INFO|e7682709-05fd-4d27-bd49-1a84e1cf6bd3: Claiming fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 09:10:03 compute-0 systemd-udevd[281682]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:03 compute-0 NetworkManager[48920]: <info>  [1763802603.8528] device (tape7682709-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:03 compute-0 NetworkManager[48920]: <info>  [1763802603.8544] device (tape7682709-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:ea:eb 10.100.0.3'], port_security=['fa:16:3e:5e:ea:eb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04781543-b5ed-482a-a30a-0730fbcd12a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e7682709-05fd-4d27-bd49-1a84e1cf6bd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.860 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance destroyed successfully.
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.860 253665 DEBUG nova.objects.instance [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6da945-75a3-4497-9a80-d9d1abfe229d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281711, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.877 253665 DEBUG nova.virt.libvirt.vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.878 253665 DEBUG nova.network.os_vif_util [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.879 253665 DEBUG nova.network.os_vif_util [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:03 compute-0 systemd-machined[215941]: New machine qemu-20-instance-00000011.
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.879 253665 DEBUG os_vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc1ef13-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.893 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d8287c-fc8b-46fd-b993-e77d3cf2cf5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281719, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281719, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.895 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.889 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.890 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.921 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:03 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000011.
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00065|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 ovn-installed in OVS
Nov 22 09:10:03 compute-0 ovn_controller[152872]: 2025-11-22T09:10:03Z|00066|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 up in Southbound
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.929 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.929 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.931 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.932 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:03 compute-0 nova_compute[253661]: 2025-11-22 09:10:03.938 253665 INFO os_vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.948 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5234b553-e050-4701-ba87-c6eddb55ad13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.949 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape64548ac-51 in ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.952 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape64548ac-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.952 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8a6ab6-f2b4-4d02-859e-e9220ce138bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.954 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cbf875-6f46-4b58-9c87-dd0e23b111e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.969 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[470bae88-23dc-4418-aa31-302b54644da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.000 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.000 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.002 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06727b9f-4e88-431e-8c02-e66774b499eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.009 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.009 253665 INFO nova.compute.claims [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.044 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ce953-82cd-4123-8b48-6ef0742528e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 systemd-udevd[281681]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73d30804-dac3-4694-8282-e9a7e493cc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 NetworkManager[48920]: <info>  [1763802604.0532] manager: (tape64548ac-50): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.092 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ddec7b-e456-4cfd-bc31-3919a4a92ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.096 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e65b01f9-c13f-4f9b-a94e-2dcf7cedd5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 NetworkManager[48920]: <info>  [1763802604.1266] device (tape64548ac-50): carrier: link connected
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.134 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0d72ec-7d45-44fb-8792-9f166a5a7c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95d15e12-ed08-4f97-8995-0580191d9e52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281783, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8fc1bd-6e1e-47b7-bd34-14c619293931]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:bc3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545486, 'tstamp': 545486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281788, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad23ec3d-75b7-49ab-83eb-bc3cbce3533b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281804, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.234 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.249 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc1ef1f-f4e5-41f5-a973-8b900c34e13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.293 253665 DEBUG nova.compute.manager [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.294 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.294 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.295 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.295 253665 DEBUG nova.compute.manager [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Processing event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.324 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.327 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3268237, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.327 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Started (Lifecycle Event)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.332 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.337 253665 INFO nova.virt.libvirt.driver [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance spawned successfully.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.338 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a5452a-0efd-4fca-8e2c-723517fd3475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.348 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.349 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.349 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.349 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:04 compute-0 kernel: tape64548ac-50: entered promiscuous mode
Nov 22 09:10:04 compute-0 NetworkManager[48920]: <info>  [1763802604.3526] manager: (tape64548ac-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.354 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:04 compute-0 ovn_controller[152872]: 2025-11-22T09:10:04Z|00067|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.364 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.378 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.379 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc76d56-22c3-46f7-b3bc-446fd6064f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.380 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:10:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.382 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'env', 'PROCESS_TAG=haproxy-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e64548ac-5898-4d23-b6f7-17a1ae54c608.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.585 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802589.4686115, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.585 253665 INFO nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Stopped (Lifecycle Event)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3280363, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Paused (Lifecycle Event)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.588 253665 INFO nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 8.16 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.588 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.598 253665 INFO nova.virt.libvirt.driver [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deleting instance files /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc_del
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.599 253665 INFO nova.virt.libvirt.driver [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deletion of /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc_del complete
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.607 253665 DEBUG nova.compute.manager [None req-f5c560f6-26b2-4794-8ab8-ba1271de37dc - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.618 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.629 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3335626, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Resumed (Lifecycle Event)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.662 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352268410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.717 253665 INFO nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 9.51 seconds to build instance.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.725 253665 INFO nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 1.13 seconds to destroy the instance on the hypervisor.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.725 253665 DEBUG oslo.service.loopingcall [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.726 253665 DEBUG nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.726 253665 DEBUG nova.network.neutron [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.729 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.731 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.735 253665 DEBUG nova.compute.provider_tree [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.750 253665 DEBUG nova.scheduler.client.report [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.771 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.772 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:04 compute-0 podman[281864]: 2025-11-22 09:10:04.802244243 +0000 UTC m=+0.066149547 container create 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:10:04 compute-0 ceph-mon[75021]: pgmap v1319: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Nov 22 09:10:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/352268410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.841 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.842 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:04 compute-0 systemd[1]: Started libpod-conmon-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.858 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:04 compute-0 podman[281864]: 2025-11-22 09:10:04.767577761 +0000 UTC m=+0.031483095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.869 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb5a6efdb2b8320c6dc794a849ea1a90de83555fc6fea9133bc58a10cfaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:04 compute-0 podman[281864]: 2025-11-22 09:10:04.908796768 +0000 UTC m=+0.172702162 container init 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:10:04 compute-0 podman[281864]: 2025-11-22 09:10:04.917222583 +0000 UTC m=+0.181127927 container start 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:10:04 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : New worker (281886) forked
Nov 22 09:10:04 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : Loading success.
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.954 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.956 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.956 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating image(s)
Nov 22 09:10:04 compute-0 nova_compute[253661]: 2025-11-22 09:10:04.981 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.017 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.049 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.054 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.089 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Successfully updated port: 716b716d-2ee2-44e7-9850-c10854634f77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.104 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.105 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.105 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 244 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.144 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.146 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.146 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.147 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.173 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.178 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.206 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.207 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.226 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.238 253665 DEBUG nova.policy [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.298 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.298 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 WARNING nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state deleting.
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-changed-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Refreshing instance network info cache due to event network-changed-716b716d-2ee2-44e7-9850-c10854634f77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.319 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.565 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.649 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.719 253665 DEBUG nova.network.neutron [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.756 253665 INFO nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 1.03 seconds to deallocate network for instance.
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.768 253665 DEBUG nova.objects.instance [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.781 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.781 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Ensure instance console log exists: /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.821 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.822 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:05 compute-0 nova_compute[253661]: 2025-11-22 09:10:05.948 253665 DEBUG oslo_concurrency.processutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.003 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Successfully created port: a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.167 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance network_info: |[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Refreshing network info cache for port 716b716d-2ee2-44e7-9850-c10854634f77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.188 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.193 253665 WARNING nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.202 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.203 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.209 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.209 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.210 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.210 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.211 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.211 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.217 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986123316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.422 253665 DEBUG oslo_concurrency.processutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.435 253665 DEBUG nova.compute.provider_tree [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.454 253665 DEBUG nova.scheduler.client.report [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.486 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.526 253665 INFO nova.scheduler.client.report [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Deleted allocations for instance c5f708d0-4110-417f-8353-dc61992d22dc
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.609 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524036332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.692 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.719 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.724 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.754 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Successfully updated port: a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:06 compute-0 ceph-mon[75021]: pgmap v1320: 305 pgs: 305 active+clean; 244 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Nov 22 09:10:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3986123316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/524036332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.983 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.996 253665 DEBUG nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.996 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] No waiting events found dispatching network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:06 compute-0 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 WARNING nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received unexpected event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 for instance with vm_state active and task_state None.
Nov 22 09:10:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 274 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 198 op/s
Nov 22 09:10:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509290870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.210 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.212 253665 DEBUG nova.virt.libvirt.vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-19852
32284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.212 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.213 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.215 253665 DEBUG nova.objects.instance [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.228 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <name>instance-00000012</name>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:06</nova:creationTime>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 09:10:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <target dev="tap716b716d-2e"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:07 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:07 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:07 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:07 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:07 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.228 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.virt.libvirt.vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.231 253665 DEBUG os_vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.232 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.232 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.235 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.239 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:07 compute-0 NetworkManager[48920]: <info>  [1763802607.2706] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.288 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.290 253665 INFO os_vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.356 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.358 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.380 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-deleted-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-changed-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Refreshing instance network info cache due to event network-changed-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.450 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.785 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updated VIF entry in instance network info cache for port 716b716d-2ee2-44e7-9850-c10854634f77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.786 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.799 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1509290870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.843 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.849 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphg_hb7ki execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.899 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.924 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.924 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance network_info: |[{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.925 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.925 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Refreshing network info cache for port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.928 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start _get_guest_xml network_info=[{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.934 253665 WARNING nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.941 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.941 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.948 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.950 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:07 compute-0 nova_compute[253661]: 2025-11-22 09:10:07.983 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphg_hb7ki" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.009 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.014 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.195 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.196 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.
Nov 22 09:10:08 compute-0 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.2664] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 22 09:10:08 compute-0 ovn_controller[152872]: 2025-11-22T09:10:08Z|00068|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 09:10:08 compute-0 ovn_controller[152872]: 2025-11-22T09:10:08Z|00069|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.266 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.284 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.285 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.287 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:08 compute-0 systemd-udevd[282235]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.3038] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.3046] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.303 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[170e4371-7bdf-4458-bb6a-92e02182aeb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.306 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap514ab32c-31 in ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.309 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap514ab32c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc11488c-01a7-404c-b7bf-fe89ec5d0960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.311 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10a7cb20-0f88-4feb-884c-09a62eb2b2fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 systemd-machined[215941]: New machine qemu-21-instance-00000012.
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.328 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7cc47-5b01-4306-b917-36cc4c796c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa5262c-af95-48af-b8d1-027683ac0979]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979115775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:08 compute-0 ovn_controller[152872]: 2025-11-22T09:10:08Z|00070|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 09:10:08 compute-0 ovn_controller[152872]: 2025-11-22T09:10:08Z|00071|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.437 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.440 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8acd4790-1893-4d44-b8d8-eca1706e87f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.4537] manager: (tap514ab32c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b40dece-fef8-4e3e-bf6f-e435e9d3c6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.482 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.488 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.497 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2c237f-0d22-410f-b8fa-d2dd6d01d81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.502 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96e19735-f41d-4499-857f-84e309f8444e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.5277] device (tap514ab32c-30): carrier: link connected
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.535 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e744068c-a9a1-42b5-a88b-77d9868dbdca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf6e623-8778-4627-8159-eb028226ccc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282292, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a134e3d3-6b6d-4dda-888f-a98cbd88821a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:d932'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545926, 'tstamp': 545926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282293, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.594 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3485840-25ea-40db-8c3d-2d3b398ead83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282294, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d19e03b-6227-4a0c-9544-30b893d46f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.692 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c99a8dbf-534e-4ea9-bfd6-1f4e1073856c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.694 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:08 compute-0 NetworkManager[48920]: <info>  [1763802608.6984] manager: (tap514ab32c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 kernel: tap514ab32c-30: entered promiscuous mode
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.704 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:08 compute-0 ovn_controller[152872]: 2025-11-22T09:10:08Z|00072|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.707 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d85911-7424-4851-9dde-5d07611d4075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.708 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:10:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.709 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'env', 'PROCESS_TAG=haproxy-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.723 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 ceph-mon[75021]: pgmap v1321: 305 pgs: 305 active+clean; 274 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 198 op/s
Nov 22 09:10:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3979115775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282349143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.956 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.965 253665 DEBUG nova.virt.libvirt.vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-19852
32284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.966 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.967 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:08 compute-0 nova_compute[253661]: 2025-11-22 09:10:08.970 253665 DEBUG nova.objects.instance [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.030 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <uuid>d99bd27b-0ff3-493e-a69c-6c7ec034aa81</uuid>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <name>instance-00000013</name>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-1874754552</nova:name>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:07</nova:creationTime>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <nova:port uuid="a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19">
Nov 22 09:10:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="serial">d99bd27b-0ff3-493e-a69c-6c7ec034aa81</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="uuid">d99bd27b-0ff3-493e-a69c-6c7ec034aa81</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk">
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config">
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:0c:5a:f3"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <target dev="tapa36e1a52-1f"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/console.log" append="off"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Preparing to wait for external event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.033 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.033 253665 DEBUG nova.virt.libvirt.vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.034 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.034 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.035 253665 DEBUG os_vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa36e1a52-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa36e1a52-1f, col_values=(('external_ids', {'iface-id': 'a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:5a:f3', 'vm-uuid': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 NetworkManager[48920]: <info>  [1763802609.0413] manager: (tapa36e1a52-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.051 253665 INFO os_vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f')
Nov 22 09:10:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 251 op/s
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:0c:5a:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.127 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Using config drive
Nov 22 09:10:09 compute-0 podman[282388]: 2025-11-22 09:10:09.132528569 +0000 UTC m=+0.064004343 container create af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.167 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:09 compute-0 systemd[1]: Started libpod-conmon-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.174 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.1464906, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)
Nov 22 09:10:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:09 compute-0 podman[282388]: 2025-11-22 09:10:09.101901167 +0000 UTC m=+0.033376961 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e66055707f791e950c824f07df600e3199b957edd8fd29251cf5299b718d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:09 compute-0 podman[282388]: 2025-11-22 09:10:09.221206092 +0000 UTC m=+0.152681886 container init af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.225 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.1466255, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.225 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)
Nov 22 09:10:09 compute-0 podman[282388]: 2025-11-22 09:10:09.2310109 +0000 UTC m=+0.162486674 container start af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.246 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:09 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : New worker (282430) forked
Nov 22 09:10:09 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : Loading success.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.603 253665 DEBUG nova.compute.manager [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.605 253665 DEBUG nova.compute.manager [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.606 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.622 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.6209538, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.622 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)
Nov 22 09:10:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.626 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.646 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.648 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.649 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.671 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.671 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.672 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.672 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.673 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.673 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.679 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.720 253665 INFO nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 7.56 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.720 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.773 253665 INFO nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 8.64 seconds to build instance.
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.786 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.812 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating config drive at /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.816 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbet2jpvl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.843 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updated VIF entry in instance network info cache for port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.844 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.856 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/282349143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.949 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbet2jpvl" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.978 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:09 compute-0 nova_compute[253661]: 2025-11-22 09:10:09.982 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.154 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.155 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deleting local config drive /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config because it was imported into RBD.
Nov 22 09:10:10 compute-0 kernel: tapa36e1a52-1f: entered promiscuous mode
Nov 22 09:10:10 compute-0 NetworkManager[48920]: <info>  [1763802610.2502] manager: (tapa36e1a52-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 22 09:10:10 compute-0 ovn_controller[152872]: 2025-11-22T09:10:10Z|00073|binding|INFO|Claiming lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for this chassis.
Nov 22 09:10:10 compute-0 ovn_controller[152872]: 2025-11-22T09:10:10Z|00074|binding|INFO|a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19: Claiming fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 09:10:10 compute-0 systemd-udevd[282276]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.258 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5a:f3 10.100.0.11'], port_security=['fa:16:3e:0c:5a:f3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.260 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.263 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.272 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.272 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:10 compute-0 NetworkManager[48920]: <info>  [1763802610.2839] device (tapa36e1a52-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:10 compute-0 NetworkManager[48920]: <info>  [1763802610.2858] device (tapa36e1a52-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:10 compute-0 ovn_controller[152872]: 2025-11-22T09:10:10Z|00075|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 ovn-installed in OVS
Nov 22 09:10:10 compute-0 ovn_controller[152872]: 2025-11-22T09:10:10Z|00076|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 up in Southbound
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.287 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[107e20f4-b450-4893-a750-6a2c2c52c795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.292 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:10 compute-0 systemd-machined[215941]: New machine qemu-22-instance-00000013.
Nov 22 09:10:10 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.327 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2e5c7d-c615-479c-993c-b01ea6f99e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.331 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d2d3f2-ed7c-453c-9f08-a2d687230f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.369 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.369 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.377 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.377 253665 INFO nova.compute.claims [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.366 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9643dd14-686d-4329-a603-db375a7e562d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee9264b-d608-46bf-82ea-02b2f0a4d922]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282503, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eac9e805-9381-4222-91c9-9c119e1ee3b3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282505, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282505, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.424 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.429 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.541 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.741 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802610.7401047, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Started (Lifecycle Event)
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.766 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.772 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802610.740446, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.773 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Paused (Lifecycle Event)
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.795 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.801 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:10 compute-0 nova_compute[253661]: 2025-11-22 09:10:10.822 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:10 compute-0 ceph-mon[75021]: pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 251 op/s
Nov 22 09:10:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948069961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.079 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.086 253665 DEBUG nova.compute.provider_tree [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.103 253665 DEBUG nova.scheduler.client.report [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 188 op/s
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.127 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.129 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.169 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.170 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.187 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.202 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.320 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.323 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.323 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating image(s)
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.360 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.403 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.442 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.449 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.488 253665 DEBUG nova.policy [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '526789957ca1421b94691426dc7bccb5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef6e238d438c49959eb8bee112836e52', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.540 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.542 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.543 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.543 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.578 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.586 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ff657cfc-b1bb-4545-bc13-ad240e69c666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.882 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ff657cfc-b1bb-4545-bc13-ad240e69c666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/948069961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.935 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] resizing rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.987 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 WARNING nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Processing event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 WARNING nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received unexpected event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with vm_state building and task_state spawning.
Nov 22 09:10:11 compute-0 nova_compute[253661]: 2025-11-22 09:10:11.991 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.039 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802612.0026987, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.040 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Resumed (Lifecycle Event)
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.043 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.049 253665 DEBUG nova.objects.instance [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'migration_context' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.053 253665 INFO nova.virt.libvirt.driver [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance spawned successfully.
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.054 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.059 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.062 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.065 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Ensure instance console log exists: /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.068 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.074 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.075 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.075 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.076 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.076 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.077 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.081 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.169 253665 INFO nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 7.21 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.169 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.223 253665 INFO nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 8.24 seconds to build instance.
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.241 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:10:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:10:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:10:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.336 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Successfully created port: 52bf11af-1372-4c5d-8bd8-81017da77de8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.613 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.614 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.638 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.713 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.714 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.719 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.720 253665 INFO nova.compute.claims [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:12 compute-0 ceph-mon[75021]: pgmap v1323: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 188 op/s
Nov 22 09:10:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:10:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:10:12 compute-0 nova_compute[253661]: 2025-11-22 09:10:12.923 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.4 MiB/s wr, 203 op/s
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.134 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Successfully updated port: 52bf11af-1372-4c5d-8bd8-81017da77de8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.148 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.149 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.149 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.308 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:13 compute-0 podman[282756]: 2025-11-22 09:10:13.413264715 +0000 UTC m=+0.096100753 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 22 09:10:13 compute-0 podman[282757]: 2025-11-22 09:10:13.415885209 +0000 UTC m=+0.098479832 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:10:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171787393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.459 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.465 253665 DEBUG nova.compute.provider_tree [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.477 253665 DEBUG nova.scheduler.client.report [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.499 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.500 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.540 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.540 253665 DEBUG nova.network.neutron [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.556 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.579 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.618 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.620 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.621 253665 INFO nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Terminating instance
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.622 253665 DEBUG nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.667 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.669 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.669 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating image(s)
Nov 22 09:10:13 compute-0 kernel: tap0122a4be-9c (unregistering): left promiscuous mode
Nov 22 09:10:13 compute-0 NetworkManager[48920]: <info>  [1763802613.6856] device (tap0122a4be-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:10:13 compute-0 ovn_controller[152872]: 2025-11-22T09:10:13Z|00077|binding|INFO|Releasing lport 0122a4be-9c10-4475-ba7d-5c818be52474 from this chassis (sb_readonly=0)
Nov 22 09:10:13 compute-0 ovn_controller[152872]: 2025-11-22T09:10:13Z|00078|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 down in Southbound
Nov 22 09:10:13 compute-0 ovn_controller[152872]: 2025-11-22T09:10:13Z|00079|binding|INFO|Removing iface tap0122a4be-9c ovn-installed in OVS
Nov 22 09:10:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.705 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:be:aa 10.100.0.6'], port_security=['fa:16:3e:ea:be:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7098ed06-dd10-40f8-a35d-3bd27702aded 90f543f2-0e15-4746-9035-ec29edc5cf1e d14126c2-5248-4820-8cca-a041d8844d35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0122a4be-9c10-4475-ba7d-5c818be52474) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.709 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0122a4be-9c10-4475-ba7d-5c818be52474 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis
Nov 22 09:10:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.711 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bce72c95-f29f-458a-9b0e-7e700aa1deb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:10:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9acff7a-89ab-4d13-82fc-cb2a2256ddee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 namespace which is not needed anymore
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.713 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 22 09:10:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 15.950s CPU time.
Nov 22 09:10:13 compute-0 systemd-machined[215941]: Machine qemu-15-instance-0000000e terminated.
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.760 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.817 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.829 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.892 253665 INFO nova.virt.libvirt.driver [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance destroyed successfully.
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.894 253665 DEBUG nova.objects.instance [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.904 253665 DEBUG nova.network.neutron [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.905 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1171787393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.909 253665 DEBUG nova.virt.libvirt.vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:13Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.909 253665 DEBUG nova.network.os_vif_util [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.910 253665 DEBUG nova.network.os_vif_util [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.910 253665 DEBUG os_vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:10:13 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : haproxy version is 2.8.14-c23fe91
Nov 22 09:10:13 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : path to executable is /usr/sbin/haproxy
Nov 22 09:10:13 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [WARNING]  (279586) : Exiting Master process...
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.916 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:13 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [ALERT]    (279586) : Current worker (279588) exited with code 143 (Terminated)
Nov 22 09:10:13 compute-0 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [WARNING]  (279586) : All workers exited. Exiting... (0)
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.918 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0122a4be-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:13 compute-0 systemd[1]: libpod-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope: Deactivated successfully.
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.925 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.927 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.928 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:13 compute-0 podman[282869]: 2025-11-22 09:10:13.928041217 +0000 UTC m=+0.083652910 container died c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.928 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.955 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:13 compute-0 nova_compute[253661]: 2025-11-22 09:10:13.969 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f-userdata-shm.mount: Deactivated successfully.
Nov 22 09:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b439ac87fd05beec68cedfc1c5359f61b19c450f7ad63dcb47892d20e6a6cb9e-merged.mount: Deactivated successfully.
Nov 22 09:10:14 compute-0 podman[282869]: 2025-11-22 09:10:14.01015769 +0000 UTC m=+0.165769393 container cleanup c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.015 253665 INFO os_vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c')
Nov 22 09:10:14 compute-0 systemd[1]: libpod-conmon-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope: Deactivated successfully.
Nov 22 09:10:14 compute-0 podman[282932]: 2025-11-22 09:10:14.118339865 +0000 UTC m=+0.069397074 container remove c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b390e5d4-0f3c-4096-ae32-a506da445b84]: (4, ('Sat Nov 22 09:10:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 (c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f)\nc391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f\nSat Nov 22 09:10:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 (c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f)\nc391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.128 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4773d91-95e7-44a1-9738-e7363df9fe8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.131 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:14 compute-0 kernel: tapbce72c95-f0: left promiscuous mode
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99847d8b-d196-4441-b783-0724eae07418]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4262973-fbe6-4d38-9c20-f0e8b80ffdd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.178 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b17158e4-bb5a-448f-adc0-19b10ba13e13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[398c4af4-0dc8-48d7-86f3-2cb77b306b4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540269, 'reachable_time': 19889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282996, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dbce72c95\x2df29f\x2d458a\x2d9b0e\x2d7e700aa1deb4.mount: Deactivated successfully.
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.215 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:10:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c493f820-461b-46cb-b0fd-22db7d386ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:14 compute-0 sudo[282976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:14 compute-0 sudo[282976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:14 compute-0 sudo[282976]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:14 compute-0 sudo[283004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:10:14 compute-0 sudo[283004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:14 compute-0 sudo[283004]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.376 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:14 compute-0 sudo[283029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:14 compute-0 sudo[283029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:14 compute-0 sudo[283029]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.398 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:14 compute-0 sudo[283054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:10:14 compute-0 sudo[283054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.497 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] resizing rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.564 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.564 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance network_info: |[{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.571 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start _get_guest_xml network_info=[{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.592 253665 WARNING nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.599 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.600 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.606 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.607 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.608 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.608 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.615 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.664 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.664 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.665 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.665 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.666 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.725 253665 DEBUG nova.objects.instance [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'migration_context' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.741 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.742 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Ensure instance console log exists: /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.742 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.744 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.750 253665 WARNING nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.761 253665 INFO nova.virt.libvirt.driver [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deleting instance files /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_del
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.761 253665 INFO nova.virt.libvirt.driver [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deletion of /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_del complete
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.771 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.772 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.780 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.780 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.783 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.786 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.886 253665 INFO nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 1.26 seconds to destroy the instance on the hypervisor.
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.887 253665 DEBUG oslo.service.loopingcall [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.888 253665 DEBUG nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:10:14 compute-0 nova_compute[253661]: 2025-11-22 09:10:14.888 253665 DEBUG nova.network.neutron [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:10:14 compute-0 ceph-mon[75021]: pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.4 MiB/s wr, 203 op/s
Nov 22 09:10:15 compute-0 sudo[283054]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168624485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1857b15f-9c09-4b59-930f-345761f6be59 does not exist
Nov 22 09:10:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d037f7c4-6ffa-4f86-b9b2-3003a50107c0 does not exist
Nov 22 09:10:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev badd32c9-34f0-43cf-b54f-55a150dc872e does not exist
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.119 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 292 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.9 MiB/s wr, 263 op/s
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.164 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.174 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:15 compute-0 sudo[283230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:15 compute-0 sudo[283230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:15 compute-0 sudo[283230]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:15 compute-0 sudo[283267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:10:15 compute-0 sudo[283267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:15 compute-0 sudo[283267]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562866794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 sudo[283294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:15 compute-0 sudo[283294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:15 compute-0 sudo[283294]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.374 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.414 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.418 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:15 compute-0 sudo[283338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:10:15 compute-0 sudo[283338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.501 253665 DEBUG nova.network.neutron [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.575 253665 INFO nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 0.69 seconds to deallocate network for instance.
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471072209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.668 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.670 253665 DEBUG nova.virt.libvirt.vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:11Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.671 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.672 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.673 253665 DEBUG nova.objects.instance [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.690 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <uuid>ff657cfc-b1bb-4545-bc13-ad240e69c666</uuid>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <name>instance-00000014</name>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1250749597</nova:name>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:14</nova:creationTime>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:user uuid="526789957ca1421b94691426dc7bccb5">tempest-FloatingIPsAssociationTestJSON-1882113079-project-member</nova:user>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:project uuid="ef6e238d438c49959eb8bee112836e52">tempest-FloatingIPsAssociationTestJSON-1882113079</nova:project>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:port uuid="52bf11af-1372-4c5d-8bd8-81017da77de8">
Nov 22 09:10:15 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="serial">ff657cfc-b1bb-4545-bc13-ad240e69c666</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="uuid">ff657cfc-b1bb-4545-bc13-ad240e69c666</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ff657cfc-b1bb-4545-bc13-ad240e69c666_disk">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:58:5c:0b"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <target dev="tap52bf11af-13"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/console.log" append="off"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:15 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:15 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.692 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Preparing to wait for external event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.693 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.693 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.694 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.694 253665 DEBUG nova.virt.libvirt.vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:11Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.695 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.695 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.696 253665 DEBUG os_vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.697 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.697 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.701 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52bf11af-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52bf11af-13, col_values=(('external_ids', {'iface-id': '52bf11af-1372-4c5d-8bd8-81017da77de8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:5c:0b', 'vm-uuid': 'ff657cfc-b1bb-4545-bc13-ad240e69c666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:15 compute-0 NetworkManager[48920]: <info>  [1763802615.7059] manager: (tap52bf11af-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.712 253665 INFO os_vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13')
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.730 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.731 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.777017739 +0000 UTC m=+0.055193980 container create a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.801 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.801 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.802 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No VIF found with MAC fa:16:3e:58:5c:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.802 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Using config drive
Nov 22 09:10:15 compute-0 systemd[1]: Started libpod-conmon-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope.
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.838 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.747508313 +0000 UTC m=+0.025684574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.852 253665 DEBUG nova.compute.manager [req-883135d7-893d-42dc-8278-9cb630dad984 req-e8efbffa-690a-43da-88f7-3484b4d97b2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-deleted-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.888477584 +0000 UTC m=+0.166653855 container init a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.896541729 +0000 UTC m=+0.174717970 container start a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:10:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893282555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 laughing_joliot[283467]: 167 167
Nov 22 09:10:15 compute-0 systemd[1]: libpod-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope: Deactivated successfully.
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.906605504 +0000 UTC m=+0.184781745 container attach a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.905 253665 DEBUG oslo_concurrency.processutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.907892255 +0000 UTC m=+0.186068496 container died a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3168624485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2562866794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2471072209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/893282555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e65c9ba47ed24e48939c734aa215904208b903a171e1cda51c434eec5dbcc4a9-merged.mount: Deactivated successfully.
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.952 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.956 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.958 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.961 253665 DEBUG nova.objects.instance [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:15 compute-0 podman[283435]: 2025-11-22 09:10:15.972732019 +0000 UTC m=+0.250908270 container remove a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.976 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:10:15 compute-0 nova_compute[253661]: 2025-11-22 09:10:15.982 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <uuid>0e7ac107-5a5a-4066-9396-f22b877e4c2b</uuid>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <name>instance-00000015</name>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:name>tempest-TenantUsagesTestJSON-server-1254272894</nova:name>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:14</nova:creationTime>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:user uuid="d741f4ee50ae459697238fe0a7207afe">tempest-TenantUsagesTestJSON-238986020-project-member</nova:user>
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <nova:project uuid="6869d1beac0b4bfab5de74e8692b55ed">tempest-TenantUsagesTestJSON-238986020</nova:project>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="serial">0e7ac107-5a5a-4066-9396-f22b877e4c2b</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="uuid">0e7ac107-5a5a-4066-9396-f22b877e4c2b</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/console.log" append="off"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:15 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:15 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:15 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:15 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:15 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:16 compute-0 systemd[1]: libpod-conmon-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope: Deactivated successfully.
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.035 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.036 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.036 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Using config drive
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.095 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.151 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating config drive at /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.156 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcncqr4v5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:16 compute-0 podman[283533]: 2025-11-22 09:10:16.240341483 +0000 UTC m=+0.097347734 container create 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:16 compute-0 podman[283533]: 2025-11-22 09:10:16.17469447 +0000 UTC m=+0.031700691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:16 compute-0 systemd[1]: Started libpod-conmon-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope.
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.295 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcncqr4v5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.333 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.349 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:16 compute-0 podman[283533]: 2025-11-22 09:10:16.353838247 +0000 UTC m=+0.210844458 container init 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:10:16 compute-0 podman[283533]: 2025-11-22 09:10:16.361139034 +0000 UTC m=+0.218145245 container start 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:10:16 compute-0 podman[283533]: 2025-11-22 09:10:16.366270909 +0000 UTC m=+0.223277120 container attach 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751987839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.395 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating config drive at /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.400 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjzoz7npp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.431 253665 DEBUG oslo_concurrency.processutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.437 253665 DEBUG nova.compute.provider_tree [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.452 253665 DEBUG nova.scheduler.client.report [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.498 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.503 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.543 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjzoz7npp" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.567 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.571 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.606 253665 INFO nova.scheduler.client.report [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Deleted allocations for instance 18eb7df8-f3ac-44d2-86c1-db7c0c913c53
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.609 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.609 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deleting local config drive /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config because it was imported into RBD.
Nov 22 09:10:16 compute-0 kernel: tap52bf11af-13: entered promiscuous mode
Nov 22 09:10:16 compute-0 NetworkManager[48920]: <info>  [1763802616.6674] manager: (tap52bf11af-13): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 22 09:10:16 compute-0 systemd-udevd[282834]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:16 compute-0 ovn_controller[152872]: 2025-11-22T09:10:16Z|00080|binding|INFO|Claiming lport 52bf11af-1372-4c5d-8bd8-81017da77de8 for this chassis.
Nov 22 09:10:16 compute-0 ovn_controller[152872]: 2025-11-22T09:10:16Z|00081|binding|INFO|52bf11af-1372-4c5d-8bd8-81017da77de8: Claiming fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:16 compute-0 NetworkManager[48920]: <info>  [1763802616.6853] device (tap52bf11af-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.686 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:16 compute-0 NetworkManager[48920]: <info>  [1763802616.6888] device (tap52bf11af-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:16 compute-0 ovn_controller[152872]: 2025-11-22T09:10:16Z|00082|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 ovn-installed in OVS
Nov 22 09:10:16 compute-0 ovn_controller[152872]: 2025-11-22T09:10:16Z|00083|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 up in Southbound
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.689 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.691 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.705 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.717 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67f263dd-107d-48ff-8689-16cd9ff5af18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 systemd-machined[215941]: New machine qemu-23-instance-00000014.
Nov 22 09:10:16 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000014.
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.738 253665 DEBUG nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.740 253665 DEBUG nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.740 253665 WARNING nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received unexpected event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with vm_state deleted and task_state None.
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.745 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.745 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.756 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[13f25353-b74e-4166-a90e-50a56d1159a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.760 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea682dc1-7246-4491-8c8f-55202017f748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.787 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.806 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cad83a-9c48-4b1d-9c83-48b4d22e9b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.831 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88d2a4b0-ac6f-4d27-ba4c-a4e2f7188370]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283681, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.851 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.852 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deleting local config drive /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config because it was imported into RBD.
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaec166-0fbf-497a-9070-f4a8caf5b9cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283682, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283682, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.884 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.884 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.885 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.886 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.886 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.896 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:16 compute-0 nova_compute[253661]: 2025-11-22 09:10:16.896 253665 INFO nova.compute.claims [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:16 compute-0 systemd-machined[215941]: New machine qemu-24-instance-00000015.
Nov 22 09:10:16 compute-0 ceph-mon[75021]: pgmap v1325: 305 pgs: 305 active+clean; 292 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.9 MiB/s wr, 263 op/s
Nov 22 09:10:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3751987839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:16 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000015.
Nov 22 09:10:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589562911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.036 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.100 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 294 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.2 MiB/s wr, 308 op/s
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.174 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.175 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.189 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.189 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.202 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.203 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.212 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.213 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.222 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.236 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.2355912, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.237 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Started (Lifecycle Event)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.257 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.263 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.2360687, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.263 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Paused (Lifecycle Event)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.281 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.292 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.307 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.408 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.408283, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.411 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Resumed (Lifecycle Event)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.414 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.415 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.421 253665 INFO nova.virt.libvirt.driver [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance spawned successfully.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.422 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.435 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.445 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.450 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.479 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.480 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.408365, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Started (Lifecycle Event)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.493 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.501 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:17 compute-0 jovial_gauss[283551]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:10:17 compute-0 jovial_gauss[283551]: --> relative data size: 1.0
Nov 22 09:10:17 compute-0 jovial_gauss[283551]: --> All data devices are unavailable
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.670 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.671 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3947MB free_disk=59.8648681640625GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.671 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:17 compute-0 systemd[1]: libpod-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Deactivated successfully.
Nov 22 09:10:17 compute-0 systemd[1]: libpod-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Consumed 1.125s CPU time.
Nov 22 09:10:17 compute-0 podman[283533]: 2025-11-22 09:10:17.68164849 +0000 UTC m=+1.538654691 container died 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92-merged.mount: Deactivated successfully.
Nov 22 09:10:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978496543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.754 253665 INFO nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 4.09 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.754 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.784 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:17 compute-0 podman[283533]: 2025-11-22 09:10:17.793554376 +0000 UTC m=+1.650560587 container remove 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.808 253665 DEBUG nova.compute.provider_tree [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.830 253665 DEBUG nova.scheduler.client.report [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:17 compute-0 systemd[1]: libpod-conmon-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Deactivated successfully.
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.835 253665 INFO nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 5.15 seconds to build instance.
Nov 22 09:10:17 compute-0 sudo[283338]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:17 compute-0 podman[283830]: 2025-11-22 09:10:17.86911379 +0000 UTC m=+0.159262985 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.877 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.878 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.881 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.882 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:17 compute-0 sudo[283863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:17 compute-0 sudo[283863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2589562911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/978496543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:17 compute-0 sudo[283863]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.966 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 04781543-b5ed-482a-a30a-0730fbcd12a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ff657cfc-b1bb-4545-bc13-ad240e69c666 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0e7ac107-5a5a-4066-9396-f22b877e4c2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:10:17 compute-0 nova_compute[253661]: 2025-11-22 09:10:17.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:10:17 compute-0 sudo[283890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:10:17 compute-0 sudo[283890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:17 compute-0 sudo[283890]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:18 compute-0 sudo[283915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:18 compute-0 sudo[283915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:18 compute-0 sudo[283915]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:18 compute-0 sudo[283940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:10:18 compute-0 sudo[283940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.194 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.194 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.237 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.240 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.289 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.401 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.403 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.404 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating image(s)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.436 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.498 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.553087179 +0000 UTC m=+0.059364352 container create 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.557 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.577 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:18 compute-0 systemd[1]: Started libpod-conmon-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope.
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.521065562 +0000 UTC m=+0.027342775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.665295252 +0000 UTC m=+0.171572445 container init 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.674522855 +0000 UTC m=+0.180800028 container start 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.675 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.676 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.677 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.677 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.678178904 +0000 UTC m=+0.184456077 container attach 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:10:18 compute-0 quizzical_poitras[284090]: 167 167
Nov 22 09:10:18 compute-0 systemd[1]: libpod-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope: Deactivated successfully.
Nov 22 09:10:18 compute-0 conmon[284090]: conmon 3a929d02345659ba0be6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope/container/memory.events
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.683429872 +0000 UTC m=+0.189707035 container died 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-da0746c8cdb68eccbdff141d0d3d3dcb582ae3d065b7ced41349937555337199-merged.mount: Deactivated successfully.
Nov 22 09:10:18 compute-0 podman[284053]: 2025-11-22 09:10:18.727794429 +0000 UTC m=+0.234071602 container remove 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.736 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.755 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966723609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:18 compute-0 systemd[1]: libpod-conmon-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope: Deactivated successfully.
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.803 253665 DEBUG nova.policy [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.806 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.813 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.827 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.847 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802603.8446388, c5f708d0-4110-417f-8353-dc61992d22dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.848 253665 INFO nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Stopped (Lifecycle Event)
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.855 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.856 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:18 compute-0 nova_compute[253661]: 2025-11-22 09:10:18.866 253665 DEBUG nova.compute.manager [None req-62b2d0fa-fab7-434c-b937-5068e8ba5677 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:18 compute-0 ceph-mon[75021]: pgmap v1326: 305 pgs: 305 active+clean; 294 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.2 MiB/s wr, 308 op/s
Nov 22 09:10:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3966723609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:18 compute-0 podman[284152]: 2025-11-22 09:10:18.946253671 +0000 UTC m=+0.052513446 container create 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:10:19 compute-0 podman[284152]: 2025-11-22 09:10:18.916862847 +0000 UTC m=+0.023122642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:19 compute-0 systemd[1]: Started libpod-conmon-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope.
Nov 22 09:10:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:19 compute-0 podman[284152]: 2025-11-22 09:10:19.090184533 +0000 UTC m=+0.196444318 container init 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 22 09:10:19 compute-0 podman[284152]: 2025-11-22 09:10:19.100960115 +0000 UTC m=+0.207219890 container start 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:10:19 compute-0 podman[284152]: 2025-11-22 09:10:19.105592807 +0000 UTC m=+0.211852582 container attach 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:10:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 317 op/s
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.143 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.217 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:19 compute-0 ovn_controller[152872]: 2025-11-22T09:10:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 09:10:19 compute-0 ovn_controller[152872]: 2025-11-22T09:10:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.378 253665 DEBUG nova.objects.instance [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.398 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.399 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Ensure instance console log exists: /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.400 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.401 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.401 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.856 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.857 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.879 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.879 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:19 compute-0 nova_compute[253661]: 2025-11-22 09:10:19.881 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:10:19 compute-0 confident_khorana[284171]: {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     "0": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "devices": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "/dev/loop3"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             ],
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_name": "ceph_lv0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_size": "21470642176",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "name": "ceph_lv0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "tags": {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_name": "ceph",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.crush_device_class": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.encrypted": "0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_id": "0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.vdo": "0"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             },
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "vg_name": "ceph_vg0"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         }
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     ],
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     "1": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "devices": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "/dev/loop4"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             ],
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_name": "ceph_lv1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_size": "21470642176",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "name": "ceph_lv1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "tags": {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_name": "ceph",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.crush_device_class": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.encrypted": "0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_id": "1",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.vdo": "0"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             },
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "vg_name": "ceph_vg1"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         }
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     ],
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     "2": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "devices": [
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "/dev/loop5"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             ],
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_name": "ceph_lv2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_size": "21470642176",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "name": "ceph_lv2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "tags": {
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.cluster_name": "ceph",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.crush_device_class": "",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.encrypted": "0",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osd_id": "2",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:                 "ceph.vdo": "0"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             },
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "type": "block",
Nov 22 09:10:19 compute-0 confident_khorana[284171]:             "vg_name": "ceph_vg2"
Nov 22 09:10:19 compute-0 confident_khorana[284171]:         }
Nov 22 09:10:19 compute-0 confident_khorana[284171]:     ]
Nov 22 09:10:19 compute-0 confident_khorana[284171]: }
Nov 22 09:10:20 compute-0 systemd[1]: libpod-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope: Deactivated successfully.
Nov 22 09:10:20 compute-0 conmon[284171]: conmon 69303f0bd44d107a035f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope/container/memory.events
Nov 22 09:10:20 compute-0 podman[284152]: 2025-11-22 09:10:20.026527876 +0000 UTC m=+1.132787651 container died 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86-merged.mount: Deactivated successfully.
Nov 22 09:10:20 compute-0 podman[284152]: 2025-11-22 09:10:20.087572348 +0000 UTC m=+1.193832123 container remove 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:10:20 compute-0 systemd[1]: libpod-conmon-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope: Deactivated successfully.
Nov 22 09:10:20 compute-0 sudo[283940]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:20 compute-0 sudo[284263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:20 compute-0 sudo[284263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:20 compute-0 sudo[284263]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:20 compute-0 sudo[284288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:10:20 compute-0 sudo[284288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:20 compute-0 sudo[284288]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.299 253665 DEBUG nova.compute.manager [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG nova.compute.manager [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Processing event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.301 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.309 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.310 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802620.3106909, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.311 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Resumed (Lifecycle Event)
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.315 253665 INFO nova.virt.libvirt.driver [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance spawned successfully.
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.316 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.336 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:20 compute-0 sudo[284313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:20 compute-0 sudo[284313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:20 compute-0 sudo[284313]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.348 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.350 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.350 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.377 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:20 compute-0 sudo[284338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:10:20 compute-0 sudo[284338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.409 253665 INFO nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 9.09 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.409 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.474 253665 INFO nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 10.12 seconds to build instance.
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.498 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.646 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Successfully created port: 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:20 compute-0 nova_compute[253661]: 2025-11-22 09:10:20.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.757463425 +0000 UTC m=+0.041408586 container create 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:10:20 compute-0 systemd[1]: Started libpod-conmon-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope.
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.738976396 +0000 UTC m=+0.022921577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.862987866 +0000 UTC m=+0.146933047 container init 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.871195684 +0000 UTC m=+0.155140845 container start 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.875024548 +0000 UTC m=+0.158969719 container attach 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:20 compute-0 eager_mestorf[284410]: 167 167
Nov 22 09:10:20 compute-0 systemd[1]: libpod-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope: Deactivated successfully.
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.879726971 +0000 UTC m=+0.163672162 container died 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ff6dfeff1a5bfe678f6473a30cda0f156466d9f70b1ee7668b7c4590944a8c5-merged.mount: Deactivated successfully.
Nov 22 09:10:20 compute-0 podman[284397]: 2025-11-22 09:10:20.924114629 +0000 UTC m=+0.208059790 container remove 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:10:20 compute-0 systemd[1]: libpod-conmon-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope: Deactivated successfully.
Nov 22 09:10:20 compute-0 ceph-mon[75021]: pgmap v1327: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 317 op/s
Nov 22 09:10:21 compute-0 podman[284432]: 2025-11-22 09:10:21.127890765 +0000 UTC m=+0.054129225 container create dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 09:10:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Nov 22 09:10:21 compute-0 systemd[1]: Started libpod-conmon-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope.
Nov 22 09:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:21 compute-0 podman[284432]: 2025-11-22 09:10:21.104035786 +0000 UTC m=+0.030274256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:21 compute-0 podman[284432]: 2025-11-22 09:10:21.21839543 +0000 UTC m=+0.144633910 container init dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:10:21 compute-0 podman[284432]: 2025-11-22 09:10:21.226548838 +0000 UTC m=+0.152787308 container start dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:10:21 compute-0 podman[284432]: 2025-11-22 09:10:21.231104749 +0000 UTC m=+0.157343209 container attach dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.325 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.329 253665 INFO nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Terminating instance
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.329 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.330 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquired lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.330 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.496 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.541 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Successfully updated port: 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.567 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.568 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.568 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.672 253665 DEBUG nova.compute.manager [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-changed-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.672 253665 DEBUG nova.compute.manager [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Refreshing instance network info cache due to event network-changed-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.673 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.749 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.834 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.845 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Releasing lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:21 compute-0 nova_compute[253661]: 2025-11-22 09:10:21.846 253665 DEBUG nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:10:21 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 22 09:10:21 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Consumed 4.591s CPU time.
Nov 22 09:10:21 compute-0 systemd-machined[215941]: Machine qemu-24-instance-00000015 terminated.
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.077 253665 INFO nova.virt.libvirt.driver [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance destroyed successfully.
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.077 253665 DEBUG nova.objects.instance [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'resources' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:22 compute-0 nice_khorana[284447]: {
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_id": 1,
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "type": "bluestore"
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     },
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_id": 0,
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "type": "bluestore"
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     },
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_id": 2,
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:10:22 compute-0 nice_khorana[284447]:         "type": "bluestore"
Nov 22 09:10:22 compute-0 nice_khorana[284447]:     }
Nov 22 09:10:22 compute-0 nice_khorana[284447]: }
Nov 22 09:10:22 compute-0 systemd[1]: libpod-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Deactivated successfully.
Nov 22 09:10:22 compute-0 systemd[1]: libpod-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Consumed 1.063s CPU time.
Nov 22 09:10:22 compute-0 podman[284502]: 2025-11-22 09:10:22.398830077 +0000 UTC m=+0.035759008 container died dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776-merged.mount: Deactivated successfully.
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.462 253665 DEBUG nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.464 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.464 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.465 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.465 253665 DEBUG nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.466 253665 WARNING nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received unexpected event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with vm_state active and task_state None.
Nov 22 09:10:22 compute-0 podman[284502]: 2025-11-22 09:10:22.474677419 +0000 UTC m=+0.111606310 container remove dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:10:22 compute-0 systemd[1]: libpod-conmon-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Deactivated successfully.
Nov 22 09:10:22 compute-0 sudo[284338]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:10:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:10:22 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c5bbc95-47f4-47fd-942d-a4dc0babe9a8 does not exist
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 29dc3265-c2a3-493e-a023-36e0a0fdc149 does not exist
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.573 253665 INFO nova.virt.libvirt.driver [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deleting instance files /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b_del
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.573 253665 INFO nova.virt.libvirt.driver [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deletion of /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b_del complete
Nov 22 09:10:22 compute-0 nova_compute[253661]: 2025-11-22 09:10:22.584 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:22 compute-0 sudo[284517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:10:22 compute-0 sudo[284517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:22 compute-0 sudo[284517]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:22 compute-0 sudo[284542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:10:22 compute-0 sudo[284542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:10:22 compute-0 sudo[284542]: pam_unix(sudo:session): session closed for user root
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:22 compute-0 ceph-mon[75021]: pgmap v1328: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Nov 22 09:10:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:22 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:23 compute-0 NetworkManager[48920]: <info>  [1763802623.1369] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 22 09:10:23 compute-0 NetworkManager[48920]: <info>  [1763802623.1381] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 22 09:10:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 314 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.3 MiB/s wr, 337 op/s
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:23 compute-0 ovn_controller[152872]: 2025-11-22T09:10:23Z|00084|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 09:10:23 compute-0 ovn_controller[152872]: 2025-11-22T09:10:23Z|00085|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.447 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance network_info: |[{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Refreshing network info cache for port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.451 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start _get_guest_xml network_info=[{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.455 253665 WARNING nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.461 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.462 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.467 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.468 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.468 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.472 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.472 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.475 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.717 253665 INFO nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 1.87 seconds to destroy the instance on the hypervisor.
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.718 253665 DEBUG oslo.service.loopingcall [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.719 253665 DEBUG nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.719 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.925 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.935 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068492745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.951 253665 INFO nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 0.23 seconds to deallocate network for instance.
Nov 22 09:10:23 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.961 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2068492745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:23.999 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.004 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:24 compute-0 ovn_controller[152872]: 2025-11-22T09:10:24Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:10:24 compute-0 ovn_controller[152872]: 2025-11-22T09:10:24Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.124 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.128 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.308 253665 DEBUG oslo_concurrency.processutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748705719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.521 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.523 253665 DEBUG nova.virt.libvirt.vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.524 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.525 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.526 253665 DEBUG nova.objects.instance [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.542 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <uuid>96000606-0bc4-4cf1-9e33-360a640c2cb7</uuid>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <name>instance-00000016</name>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-70556130</nova:name>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:23</nova:creationTime>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <nova:port uuid="411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa">
Nov 22 09:10:24 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="serial">96000606-0bc4-4cf1-9e33-360a640c2cb7</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="uuid">96000606-0bc4-4cf1-9e33-360a640c2cb7</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/96000606-0bc4-4cf1-9e33-360a640c2cb7_disk">
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config">
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:24 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:cb:6f:23"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <target dev="tap411035c7-ec"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/console.log" append="off"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:24 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:24 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:24 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:24 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:24 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Preparing to wait for external event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.544 253665 DEBUG nova.virt.libvirt.vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1
985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.544 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.545 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.545 253665 DEBUG os_vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.546 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.547 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap411035c7-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.553 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap411035c7-ec, col_values=(('external_ids', {'iface-id': '411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:6f:23', 'vm-uuid': '96000606-0bc4-4cf1-9e33-360a640c2cb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:24 compute-0 NetworkManager[48920]: <info>  [1763802624.5556] manager: (tap411035c7-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.565 253665 INFO os_vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec')
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:cb:6f:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.640 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Using config drive
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.669 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.779 253665 DEBUG nova.compute.manager [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.780 253665 DEBUG nova.compute.manager [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.781 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.782 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.783 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538232642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.825 253665 DEBUG oslo_concurrency.processutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.831 253665 DEBUG nova.compute.provider_tree [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.845 253665 DEBUG nova.scheduler.client.report [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.882 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:24 compute-0 nova_compute[253661]: 2025-11-22 09:10:24.971 253665 INFO nova.scheduler.client.report [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Deleted allocations for instance 0e7ac107-5a5a-4066-9396-f22b877e4c2b
Nov 22 09:10:25 compute-0 ceph-mon[75021]: pgmap v1329: 305 pgs: 305 active+clean; 314 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.3 MiB/s wr, 337 op/s
Nov 22 09:10:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2748705719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3538232642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:25 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 09:10:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 335 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 9.0 MiB/s wr, 486 op/s
Nov 22 09:10:25 compute-0 nova_compute[253661]: 2025-11-22 09:10:25.151 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:25 compute-0 ovn_controller[152872]: 2025-11-22T09:10:25Z|00086|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 09:10:25 compute-0 ovn_controller[152872]: 2025-11-22T09:10:25Z|00087|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:25 compute-0 nova_compute[253661]: 2025-11-22 09:10:25.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.092 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating config drive at /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.096 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ir_coe6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.230 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ir_coe6" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.288 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.294 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.451 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.452 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deleting local config drive /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config because it was imported into RBD.
Nov 22 09:10:26 compute-0 kernel: tap411035c7-ec: entered promiscuous mode
Nov 22 09:10:26 compute-0 NetworkManager[48920]: <info>  [1763802626.5224] manager: (tap411035c7-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 22 09:10:26 compute-0 ovn_controller[152872]: 2025-11-22T09:10:26Z|00088|binding|INFO|Claiming lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for this chassis.
Nov 22 09:10:26 compute-0 ovn_controller[152872]: 2025-11-22T09:10:26Z|00089|binding|INFO|411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa: Claiming fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 ovn_controller[152872]: 2025-11-22T09:10:26Z|00090|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa ovn-installed in OVS
Nov 22 09:10:26 compute-0 systemd-udevd[284725]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 systemd-machined[215941]: New machine qemu-25-instance-00000016.
Nov 22 09:10:26 compute-0 NetworkManager[48920]: <info>  [1763802626.5850] device (tap411035c7-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:26 compute-0 NetworkManager[48920]: <info>  [1763802626.5870] device (tap411035c7-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:26 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000016.
Nov 22 09:10:26 compute-0 ovn_controller[152872]: 2025-11-22T09:10:26Z|00091|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa up in Southbound
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.595 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:6f:23 10.100.0.10'], port_security=['fa:16:3e:cb:6f:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96000606-0bc4-4cf1-9e33-360a640c2cb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.596 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.598 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00baf2e3-4eb4-4f96-95da-e966c6186229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.664 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad91ab6-cfe7-48b9-afee-eff8721a0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.670 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a3abd501-5668-41f9-a8fb-cec5e63751fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.704 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20416871-4354-42d7-9787-971e056cf6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3cd94e-da6e-4755-970d-eb48f3394db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284741, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[786f8080-a5c1-48bb-92ed-f8300a3f2c79]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284742, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284742, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.746 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.748 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.750 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.751 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.751 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.752 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.753 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.993 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updated VIF entry in instance network info cache for port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:26 compute-0 nova_compute[253661]: 2025-11-22 09:10:26.994 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.010 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:27 compute-0 ceph-mon[75021]: pgmap v1330: 305 pgs: 305 active+clean; 335 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 9.0 MiB/s wr, 486 op/s
Nov 22 09:10:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 349 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 9.0 MiB/s wr, 454 op/s
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.349 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802627.3485453, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Started (Lifecycle Event)
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.374 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802627.3488536, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.375 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Paused (Lifecycle Event)
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.396 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.401 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:27 compute-0 nova_compute[253661]: 2025-11-22 09:10:27.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:27 compute-0 ovn_controller[152872]: 2025-11-22T09:10:27Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 09:10:27 compute-0 ovn_controller[152872]: 2025-11-22T09:10:27Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 09:10:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.110 253665 DEBUG nova.compute.manager [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG nova.compute.manager [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Processing event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.112 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.124 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802628.1156123, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Resumed (Lifecycle Event)
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.127 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.144 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.149 253665 INFO nova.virt.libvirt.driver [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance spawned successfully.
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.149 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.152 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.180 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.180 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.181 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.182 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.183 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.183 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.236 253665 INFO nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 9.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.237 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.302 253665 INFO nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 11.44 seconds to build instance.
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.324 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.467 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.468 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.483 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.875 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802613.8735316, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.876 253665 INFO nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Stopped (Lifecycle Event)
Nov 22 09:10:28 compute-0 nova_compute[253661]: 2025-11-22 09:10:28.897 253665 DEBUG nova.compute.manager [None req-7e6dbe9e-45fd-4ecf-936a-7b83d3f4bed9 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:29 compute-0 ceph-mon[75021]: pgmap v1331: 305 pgs: 305 active+clean; 349 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 9.0 MiB/s wr, 454 op/s
Nov 22 09:10:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.7 MiB/s wr, 455 op/s
Nov 22 09:10:29 compute-0 nova_compute[253661]: 2025-11-22 09:10:29.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:29 compute-0 nova_compute[253661]: 2025-11-22 09:10:29.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.252 253665 DEBUG nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.253 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.253 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 DEBUG nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 WARNING nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received unexpected event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with vm_state active and task_state None.
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.644 253665 DEBUG nova.compute.manager [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.645 253665 DEBUG nova.compute.manager [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.645 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.646 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.646 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:30.806 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:30.807 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:10:30 compute-0 nova_compute[253661]: 2025-11-22 09:10:30.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:31 compute-0 ceph-mon[75021]: pgmap v1332: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.7 MiB/s wr, 455 op/s
Nov 22 09:10:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.0 MiB/s wr, 362 op/s
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.655 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.656 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.693 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.823 253665 DEBUG nova.compute.manager [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG nova.compute.manager [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:32 compute-0 nova_compute[253661]: 2025-11-22 09:10:32.825 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:33 compute-0 ceph-mon[75021]: pgmap v1333: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.0 MiB/s wr, 362 op/s
Nov 22 09:10:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.584 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.584 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.601 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.666 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.667 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.676 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.676 253665 INFO nova.compute.claims [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:33 compute-0 nova_compute[253661]: 2025-11-22 09:10:33.872 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.166 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.167 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.224 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985083650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.475 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.483 253665 DEBUG nova.compute.provider_tree [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.499 253665 DEBUG nova.scheduler.client.report [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.519 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.520 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:34 compute-0 ovn_controller[152872]: 2025-11-22T09:10:34Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 09:10:34 compute-0 ovn_controller[152872]: 2025-11-22T09:10:34Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.565 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.565 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.582 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.599 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.685 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.687 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.688 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating image(s)
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.715 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.748 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.777 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.782 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.866 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.868 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.869 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.869 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.891 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:34 compute-0 nova_compute[253661]: 2025-11-22 09:10:34.895 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de145d76-062b-4362-bc82-09e09d2f9154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.038 253665 DEBUG nova.policy [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:35 compute-0 ceph-mon[75021]: pgmap v1334: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Nov 22 09:10:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2985083650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.5 MiB/s wr, 351 op/s
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.273 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de145d76-062b-4362-bc82-09e09d2f9154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.365 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.516 253665 DEBUG nova.objects.instance [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.529 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.530 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Ensure instance console log exists: /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.531 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.532 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.532 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:35 compute-0 nova_compute[253661]: 2025-11-22 09:10:35.929 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Successfully created port: c048a826-73ad-49d3-a29f-5d790d359e51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:36 compute-0 ceph-mon[75021]: pgmap v1335: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.5 MiB/s wr, 351 op/s
Nov 22 09:10:36 compute-0 nova_compute[253661]: 2025-11-22 09:10:36.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.075 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802622.0738266, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.077 253665 INFO nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Stopped (Lifecycle Event)
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.095 253665 DEBUG nova.compute.manager [None req-9f017c36-6a7b-45ad-8210-da2334bf98d2 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 394 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.150 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Successfully updated port: c048a826-73ad-49d3-a29f-5d790d359e51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.165 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.166 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.166 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.382 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.803 253665 DEBUG nova.compute.manager [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-changed-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.804 253665 DEBUG nova.compute.manager [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Refreshing instance network info cache due to event network-changed-c048a826-73ad-49d3-a29f-5d790d359e51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.804 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:37.810 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.878 253665 DEBUG nova.compute.manager [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.879 253665 DEBUG nova.compute.manager [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.879 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.880 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:37 compute-0 nova_compute[253661]: 2025-11-22 09:10:37.880 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:38 compute-0 ceph-mon[75021]: pgmap v1336: 305 pgs: 305 active+clean; 394 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 09:10:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 219 op/s
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.264 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.295 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.296 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance network_info: |[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.296 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.297 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Refreshing network info cache for port c048a826-73ad-49d3-a29f-5d790d359e51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.300 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start _get_guest_xml network_info=[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.304 253665 WARNING nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.311 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.311 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.315 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.316 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.317 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.317 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.323 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.605 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720399152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.835 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.858 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.862 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.925 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.927 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.928 253665 INFO nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Terminating instance
Nov 22 09:10:39 compute-0 nova_compute[253661]: 2025-11-22 09:10:39.929 253665 DEBUG nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.202 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.203 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.219 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:40 compute-0 kernel: tap52bf11af-13 (unregistering): left promiscuous mode
Nov 22 09:10:40 compute-0 NetworkManager[48920]: <info>  [1763802640.2839] device (tap52bf11af-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00092|binding|INFO|Releasing lport 52bf11af-1372-4c5d-8bd8-81017da77de8 from this chassis (sb_readonly=0)
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00093|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 down in Southbound
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00094|binding|INFO|Removing iface tap52bf11af-13 ovn-installed in OVS
Nov 22 09:10:40 compute-0 ceph-mon[75021]: pgmap v1337: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 219 op/s
Nov 22 09:10:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3720399152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.304 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.305 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.309 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.334 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62573e58-2dc1-4f22-ad65-1f3f0ffa0bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 22 09:10:40 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Consumed 14.994s CPU time.
Nov 22 09:10:40 compute-0 systemd-machined[215941]: Machine qemu-23-instance-00000014 terminated.
Nov 22 09:10:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855083627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac3eda2-f3c6-4789-a8ca-f327b9802476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.379 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d156547f-f964-4f0f-ba1a-ff3e48e38fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.384 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.386 253665 DEBUG nova.virt.libvirt.vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:34Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.386 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.391 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.392 253665 DEBUG nova.objects.instance [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.411 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <uuid>de145d76-062b-4362-bc82-09e09d2f9154</uuid>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <name>instance-00000017</name>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-27339221</nova:name>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:39</nova:creationTime>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <nova:port uuid="c048a826-73ad-49d3-a29f-5d790d359e51">
Nov 22 09:10:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="serial">de145d76-062b-4362-bc82-09e09d2f9154</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="uuid">de145d76-062b-4362-bc82-09e09d2f9154</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/de145d76-062b-4362-bc82-09e09d2f9154_disk">
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/de145d76-062b-4362-bc82-09e09d2f9154_disk.config">
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:8c:b7:42"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <target dev="tapc048a826-73"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/console.log" append="off"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:40 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:40 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:40 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:40 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:40 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.413 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Preparing to wait for external event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.413 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.414 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.414 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.415 253665 DEBUG nova.virt.libvirt.vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1
985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:34Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.416 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.416 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.417 253665 DEBUG os_vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[71fcf913-f1f3-4f2e-a125-1357ae4ea1e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.423 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc048a826-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.426 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc048a826-73, col_values=(('external_ids', {'iface-id': 'c048a826-73ad-49d3-a29f-5d790d359e51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:b7:42', 'vm-uuid': 'de145d76-062b-4362-bc82-09e09d2f9154'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 NetworkManager[48920]: <info>  [1763802640.4293] manager: (tapc048a826-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.436 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.436 253665 INFO os_vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73')
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab71ca0e-b918-44a9-87be-d4f5c49c80c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285048, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.496 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[690f5363-53d6-45a2-b898-1c016b17462e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285051, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285051, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.512 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.513 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.513 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 kernel: tap52bf11af-13: entered promiscuous mode
Nov 22 09:10:40 compute-0 NetworkManager[48920]: <info>  [1763802640.5631] manager: (tap52bf11af-13): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.561 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.562 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.562 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:8c:b7:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.563 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Using config drive
Nov 22 09:10:40 compute-0 kernel: tap52bf11af-13 (unregistering): left promiscuous mode
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00095|binding|INFO|Claiming lport 52bf11af-1372-4c5d-8bd8-81017da77de8 for this chassis.
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00096|binding|INFO|52bf11af-1372-4c5d-8bd8-81017da77de8: Claiming fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.577 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.578 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.588 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:40 compute-0 ovn_controller[152872]: 2025-11-22T09:10:40Z|00097|binding|INFO|Releasing lport 52bf11af-1372-4c5d-8bd8-81017da77de8 from this chassis (sb_readonly=0)
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.604 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.609 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48c3ba9e-6d1f-4ca6-8b3d-768fe7318c62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.628 253665 INFO nova.virt.libvirt.driver [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance destroyed successfully.
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.628 253665 DEBUG nova.objects.instance [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'resources' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.645 253665 DEBUG nova.virt.libvirt.vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:20Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.645 253665 DEBUG nova.network.os_vif_util [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.646 253665 DEBUG nova.network.os_vif_util [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.646 253665 DEBUG os_vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.650 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52bf11af-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.650 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[423a035f-52e8-4468-93a8-43312c277bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[83e36ba2-c546-4646-80d6-b4179596d052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.665 253665 INFO os_vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13')
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.690 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a778b66-ec01-43ef-a4cd-5fd6f37c003c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75fa380-aa99-4f32-9678-f912a3d356ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285099, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.735 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84efa79c-bf98-4beb-86ae-06d5c3626ec5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285103, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285103, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.737 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.744 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.746 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.747 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.766 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8dedcb-ef4a-4480-b462-71b3b91723bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.794 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc5781a-82e3-4adc-b3f7-15e0988d3ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.798 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5283a19e-d54e-4477-943b-9aa94daff111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c7808190-f80f-4177-92a9-5599145f1fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.854 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e95307-7e48-41d7-beeb-21628e68d91e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285110, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4db3a07-79a1-4b28-beb7-82b86df7b1e3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285111, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285111, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:40 compute-0 nova_compute[253661]: 2025-11-22 09:10:40.997 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating config drive at /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.004 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky9ybsp3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.160 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky9ybsp3" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.186 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.191 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config de145d76-062b-4362-bc82-09e09d2f9154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.236 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updated VIF entry in instance network info cache for port c048a826-73ad-49d3-a29f-5d790d359e51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.237 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:41 compute-0 nova_compute[253661]: 2025-11-22 09:10:41.253 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3855083627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.154 253665 DEBUG nova.compute.manager [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG nova.compute.manager [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.156 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.253 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config de145d76-062b-4362-bc82-09e09d2f9154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.253 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deleting local config drive /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config because it was imported into RBD.
Nov 22 09:10:42 compute-0 kernel: tapc048a826-73: entered promiscuous mode
Nov 22 09:10:42 compute-0 NetworkManager[48920]: <info>  [1763802642.3172] manager: (tapc048a826-73): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 22 09:10:42 compute-0 ovn_controller[152872]: 2025-11-22T09:10:42Z|00098|binding|INFO|Claiming lport c048a826-73ad-49d3-a29f-5d790d359e51 for this chassis.
Nov 22 09:10:42 compute-0 ovn_controller[152872]: 2025-11-22T09:10:42Z|00099|binding|INFO|c048a826-73ad-49d3-a29f-5d790d359e51: Claiming fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 09:10:42 compute-0 systemd-udevd[285037]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.326 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b7:42 10.100.0.7'], port_security=['fa:16:3e:8c:b7:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de145d76-062b-4362-bc82-09e09d2f9154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c048a826-73ad-49d3-a29f-5d790d359e51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.328 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c048a826-73ad-49d3-a29f-5d790d359e51 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.330 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:10:42 compute-0 NetworkManager[48920]: <info>  [1763802642.3372] device (tapc048a826-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:42 compute-0 NetworkManager[48920]: <info>  [1763802642.3381] device (tapc048a826-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:42 compute-0 ovn_controller[152872]: 2025-11-22T09:10:42Z|00100|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 ovn-installed in OVS
Nov 22 09:10:42 compute-0 ovn_controller[152872]: 2025-11-22T09:10:42Z|00101|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 up in Southbound
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.359 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c423509c-9f86-4428-93f0-19841b8cac44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 systemd-machined[215941]: New machine qemu-26-instance-00000017.
Nov 22 09:10:42 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000017.
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.395 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[38783663-dd63-41e3-bba3-00db84f7343e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[de5b4ca6-e325-4ada-a687-666a2392c59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.436 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a27f7c-d48f-4350-af36-7626145ae83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.459 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce331fd-2669-428a-8049-900ec8ad81a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285177, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.481 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc6daab-9fd5-4a9c-846c-98cb97d7cbac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285178, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285178, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.484 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:42 compute-0 nova_compute[253661]: 2025-11-22 09:10:42.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.490 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:42 compute-0 ceph-mon[75021]: pgmap v1338: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.073 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802643.0720322, de145d76-062b-4362-bc82-09e09d2f9154 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.073 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Started (Lifecycle Event)
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.089 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.095 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802643.0751026, de145d76-062b-4362-bc82-09e09d2f9154 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Paused (Lifecycle Event)
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.109 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.112 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.129 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.911 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.912 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:43 compute-0 nova_compute[253661]: 2025-11-22 09:10:43.940 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:44 compute-0 podman[285223]: 2025-11-22 09:10:44.429422838 +0000 UTC m=+0.080585557 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:10:44 compute-0 podman[285224]: 2025-11-22 09:10:44.438450246 +0000 UTC m=+0.089318897 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:10:44 compute-0 nova_compute[253661]: 2025-11-22 09:10:44.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:44 compute-0 ceph-mon[75021]: pgmap v1339: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 22 09:10:44 compute-0 nova_compute[253661]: 2025-11-22 09:10:44.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 399 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 22 09:10:45 compute-0 nova_compute[253661]: 2025-11-22 09:10:45.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:46 compute-0 ceph-mon[75021]: pgmap v1340: 305 pgs: 305 active+clean; 399 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.935 253665 INFO nova.virt.libvirt.driver [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deleting instance files /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666_del
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.936 253665 INFO nova.virt.libvirt.driver [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deletion of /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666_del complete
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.993 253665 INFO nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 7.06 seconds to destroy the instance on the hypervisor.
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.994 253665 DEBUG oslo.service.loopingcall [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.994 253665 DEBUG nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:10:46 compute-0 nova_compute[253661]: 2025-11-22 09:10:46.995 253665 DEBUG nova.network.neutron [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:10:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 376 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.540 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.541 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.553 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.634 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.635 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.643 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.644 253665 INFO nova.compute.claims [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:10:47 compute-0 nova_compute[253661]: 2025-11-22 09:10:47.834 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:47 compute-0 ovn_controller[152872]: 2025-11-22T09:10:47Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 09:10:47 compute-0 ovn_controller[152872]: 2025-11-22T09:10:47Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.086 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.087 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.087 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.141 253665 DEBUG nova.compute.manager [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.142 253665 DEBUG nova.compute.manager [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422486760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.359 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.366 253665 DEBUG nova.compute.provider_tree [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.381 253665 DEBUG nova.scheduler.client.report [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.403 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.404 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.444 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.445 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:10:48 compute-0 podman[285278]: 2025-11-22 09:10:48.452304584 +0000 UTC m=+0.136427371 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.465 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.480 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.567 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.569 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.569 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating image(s)
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.593 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.620 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.645 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.649 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.720 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.722 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.722 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.723 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.746 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.751 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.788 253665 DEBUG nova.policy [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.803 253665 DEBUG nova.network.neutron [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.820 253665 INFO nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 1.82 seconds to deallocate network for instance.
Nov 22 09:10:48 compute-0 ceph-mon[75021]: pgmap v1341: 305 pgs: 305 active+clean; 376 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 22 09:10:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/422486760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.874 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.875 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.902 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.929 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.929 253665 DEBUG nova.compute.provider_tree [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.952 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:10:48 compute-0 nova_compute[253661]: 2025-11-22 09:10:48.990 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.109 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.187 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.221 253665 DEBUG oslo_concurrency.processutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.323 253665 DEBUG nova.objects.instance [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.339 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.339 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Ensure instance console log exists: /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.340 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.340 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.341 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528930108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.726 253665 DEBUG oslo_concurrency.processutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.732 253665 DEBUG nova.compute.provider_tree [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.748 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.773 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.813 253665 INFO nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Deleted allocations for instance ff657cfc-b1bb-4545-bc13-ad240e69c666
Nov 22 09:10:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2528930108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:49 compute-0 nova_compute[253661]: 2025-11-22 09:10:49.880 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.002 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully created port: f70fa10f-f756-4faa-aebf-deeb0b129704 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.195 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 WARNING nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received unexpected event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with vm_state deleted and task_state None.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Processing event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 WARNING nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received unexpected event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with vm_state building and task_state spawning.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.204 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802650.2034488, de145d76-062b-4362-bc82-09e09d2f9154 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.204 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Resumed (Lifecycle Event)
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.207 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.212 253665 INFO nova.virt.libvirt.driver [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance spawned successfully.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.213 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.241 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.246 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.247 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.247 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.254 253665 DEBUG nova.compute.manager [req-36d96467-634a-4193-919a-0b3708525c63 req-6e37f6ed-798c-4a10-a238-a361c6ff4c7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-deleted-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.437 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.438 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.453 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.489 253665 INFO nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 15.80 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.490 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.575 253665 INFO nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 16.93 seconds to build instance.
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.607 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:50 compute-0 nova_compute[253661]: 2025-11-22 09:10:50.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:50 compute-0 ceph-mon[75021]: pgmap v1342: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 22 09:10:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 09:10:51 compute-0 nova_compute[253661]: 2025-11-22 09:10:51.994 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully updated port: f70fa10f-f756-4faa-aebf-deeb0b129704 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:10:52
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups']
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.224 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG nova.network.neutron [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.536 253665 DEBUG nova.compute.manager [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.537 253665 DEBUG nova.compute.manager [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.537 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.706 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.706 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.712 253665 INFO nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Terminating instance
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.713 253665 DEBUG nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:10:52 compute-0 kernel: tape7682709-05 (unregistering): left promiscuous mode
Nov 22 09:10:52 compute-0 NetworkManager[48920]: <info>  [1763802652.7729] device (tape7682709-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:52 compute-0 ovn_controller[152872]: 2025-11-22T09:10:52Z|00102|binding|INFO|Releasing lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 from this chassis (sb_readonly=0)
Nov 22 09:10:52 compute-0 ovn_controller[152872]: 2025-11-22T09:10:52Z|00103|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 down in Southbound
Nov 22 09:10:52 compute-0 ovn_controller[152872]: 2025-11-22T09:10:52Z|00104|binding|INFO|Removing iface tape7682709-05 ovn-installed in OVS
Nov 22 09:10:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.837 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:ea:eb 10.100.0.3'], port_security=['fa:16:3e:5e:ea:eb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04781543-b5ed-482a-a30a-0730fbcd12a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e7682709-05fd-4d27-bd49-1a84e1cf6bd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.838 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis
Nov 22 09:10:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.840 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e64548ac-5898-4d23-b6f7-17a1ae54c608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:10:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.842 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[163305c9-6c5a-461a-9776-29a84280a4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.844 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 namespace which is not needed anymore
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:52 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 22 09:10:52 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Consumed 15.242s CPU time.
Nov 22 09:10:52 compute-0 systemd-machined[215941]: Machine qemu-20-instance-00000011 terminated.
Nov 22 09:10:52 compute-0 ceph-mon[75021]: pgmap v1343: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.958 253665 INFO nova.virt.libvirt.driver [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance destroyed successfully.
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.960 253665 DEBUG nova.objects.instance [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'resources' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.969 253665 DEBUG nova.virt.libvirt.vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.970 253665 DEBUG nova.network.os_vif_util [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.971 253665 DEBUG nova.network.os_vif_util [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.971 253665 DEBUG os_vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.975 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.976 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7682709-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:52 compute-0 nova_compute[253661]: 2025-11-22 09:10:52.984 253665 INFO os_vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05')
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : haproxy version is 2.8.14-c23fe91
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : path to executable is /usr/sbin/haproxy
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : Exiting Master process...
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : Exiting Master process...
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [ALERT]    (281884) : Current worker (281886) exited with code 143 (Terminated)
Nov 22 09:10:53 compute-0 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : All workers exited. Exiting... (0)
Nov 22 09:10:53 compute-0 systemd[1]: libpod-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope: Deactivated successfully.
Nov 22 09:10:53 compute-0 podman[285521]: 2025-11-22 09:10:53.041621227 +0000 UTC m=+0.074265163 container died 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa4cb5a6efdb2b8320c6dc794a849ea1a90de83555fc6fea9133bc58a10cfaa-merged.mount: Deactivated successfully.
Nov 22 09:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3-userdata-shm.mount: Deactivated successfully.
Nov 22 09:10:53 compute-0 podman[285521]: 2025-11-22 09:10:53.098807355 +0000 UTC m=+0.131451281 container cleanup 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:53 compute-0 systemd[1]: libpod-conmon-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope: Deactivated successfully.
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.137 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance network_info: |[{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.155 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.157 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start _get_guest_xml network_info=[{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:10:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 418 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.163 253665 WARNING nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.171 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.172 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.179 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.181 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:53 compute-0 podman[285575]: 2025-11-22 09:10:53.184860543 +0000 UTC m=+0.060741785 container remove 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff128a20-1279-45bc-b522-6a9e797a0d27]: (4, ('Sat Nov 22 09:10:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 (977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3)\n977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3\nSat Nov 22 09:10:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 (977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3)\n977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.193 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c11c6ad-443f-41d0-a172-e145313581bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:53 compute-0 kernel: tape64548ac-50: left promiscuous mode
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.218 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.225 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e086739f-87e3-415f-b113-51a2443a8dfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a607c65f-6ec1-4885-be62-d817dcb59ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.241 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca6c784-f838-4a39-afdf-fb6c9dd2dc9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.259 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10aaa1e4-9c42-434b-893f-073dccc688d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545477, 'reachable_time': 39507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285590, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 systemd[1]: run-netns-ovnmeta\x2de64548ac\x2d5898\x2d4d23\x2db6f7\x2d17a1ae54c608.mount: Deactivated successfully.
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.265 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:10:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.265 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c56e1-972f-4a39-a4ac-319ee1024e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.493 253665 INFO nova.virt.libvirt.driver [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deleting instance files /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1_del
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.494 253665 INFO nova.virt.libvirt.driver [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deletion of /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1_del complete
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.553 253665 INFO nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.553 253665 DEBUG oslo.service.loopingcall [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.557 253665 DEBUG nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.557 253665 DEBUG nova.network.neutron [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:10:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652402155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.681 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.707 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.712 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2652402155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.968 253665 DEBUG nova.network.neutron [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG nova.compute.manager [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 09:10:53 compute-0 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG nova.compute.manager [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] network_info to inject: |[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 22 09:10:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:10:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017011033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.225 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.227 253665 DEBUG nova.virt.libvirt.vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.227 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.228 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.229 253665 DEBUG nova.objects.instance [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.252 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <name>instance-00000018</name>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:53</nova:creationTime>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:10:54 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <system>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="serial">a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="uuid">a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </system>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <os>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </os>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <features>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </features>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk">
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config">
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:10:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:14:85:74"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <target dev="tapf70fa10f-f7"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log" append="off"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <video>
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </video>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:10:54 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:10:54 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:10:54 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:10:54 compute-0 nova_compute[253661]: </domain>
Nov 22 09:10:54 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Preparing to wait for external event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.254 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.254 253665 DEBUG nova.virt.libvirt.vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.255 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.255 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.256 253665 DEBUG os_vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.260 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.261 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf70fa10f-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.261 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf70fa10f-f7, col_values=(('external_ids', {'iface-id': 'f70fa10f-f756-4faa-aebf-deeb0b129704', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:85:74', 'vm-uuid': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:54 compute-0 NetworkManager[48920]: <info>  [1763802654.2641] manager: (tapf70fa10f-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.272 253665 INFO os_vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7')
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.324 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.324 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.325 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:14:85:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.325 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Using config drive
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.352 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.442 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.443 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.457 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.468 253665 DEBUG nova.network.neutron [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.485 253665 INFO nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 0.93 seconds to deallocate network for instance.
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.533 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.533 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.652 253665 DEBUG nova.compute.manager [req-863300a6-878e-4384-acf2-5a3f3f2d6f6d req-c45618f1-41a9-4abf-b7de-ece097520e56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-deleted-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.675 253665 DEBUG oslo_concurrency.processutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.762 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating config drive at /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.771 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_om7w41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:10:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.912 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_om7w41" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:54 compute-0 ceph-mon[75021]: pgmap v1344: 305 pgs: 305 active+clean; 418 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 22 09:10:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1017011033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.951 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:10:54 compute-0 nova_compute[253661]: 2025-11-22 09:10:54.955 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:10:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 404 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 09:10:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:10:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367886030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.197 253665 DEBUG oslo_concurrency.processutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.204 253665 DEBUG nova.compute.provider_tree [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.219 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.220 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deleting local config drive /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config because it was imported into RBD.
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.224 253665 DEBUG nova.scheduler.client.report [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.245 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.269 253665 INFO nova.scheduler.client.report [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Deleted allocations for instance 04781543-b5ed-482a-a30a-0730fbcd12a1
Nov 22 09:10:55 compute-0 kernel: tapf70fa10f-f7: entered promiscuous mode
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.2817] manager: (tapf70fa10f-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 22 09:10:55 compute-0 systemd-udevd[285497]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:10:55 compute-0 ovn_controller[152872]: 2025-11-22T09:10:55Z|00105|binding|INFO|Claiming lport f70fa10f-f756-4faa-aebf-deeb0b129704 for this chassis.
Nov 22 09:10:55 compute-0 ovn_controller[152872]: 2025-11-22T09:10:55Z|00106|binding|INFO|f70fa10f-f756-4faa-aebf-deeb0b129704: Claiming fa:16:3e:14:85:74 10.100.0.11
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.294 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:85:74 10.100.0.11'], port_security=['fa:16:3e:14:85:74 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '483aedc9-eae7-4cec-a714-9d623421c584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f70fa10f-f756-4faa-aebf-deeb0b129704) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.295 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f70fa10f-f756-4faa-aebf-deeb0b129704 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.297 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.3071] device (tapf70fa10f-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.3087] device (tapf70fa10f-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:10:55 compute-0 ovn_controller[152872]: 2025-11-22T09:10:55Z|00107|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 ovn-installed in OVS
Nov 22 09:10:55 compute-0 ovn_controller[152872]: 2025-11-22T09:10:55Z|00108|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 up in Southbound
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.312 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93c9b86c-e807-45db-aa78-8f2070ef59b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.314 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e2cd359-c1 in ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.317 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e2cd359-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00575ec3-5323-4980-898c-5accb3e70c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91b3c6f7-aa6a-4b48-92b8-c0da85bac254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.337 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5b9e66-312b-4018-93b6-75cb8358494f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.338 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:55 compute-0 systemd-machined[215941]: New machine qemu-27-instance-00000018.
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34c8b35f-331d-4a8c-adf6-8442a3312bcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000018.
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.408 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0a7b38-8fd9-4168-8415-6579ffef983f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.4165] manager: (tap5e2cd359-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.417 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce6db14-10db-4a32-b9df-9a3400ba5a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.471 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d02a02-2467-4f13-b70b-78accd4220ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.476 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[688eed24-0cae-48bf-89c4-4ff8439d2127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.5106] device (tap5e2cd359-c0): carrier: link connected
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.519 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8dfe95-5ddd-46fc-ae31-61d6a5c64f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81f7d4b9-94fa-41ed-8a92-e715dc4b20d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285780, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b9bf7c-6c4b-4731-aaa3-4eb86156937f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:bd41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550624, 'tstamp': 550624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285781, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db1ec91f-9beb-4384-afad-12f7f4ae2ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285782, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802640.5864038, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.623 253665 INFO nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Stopped (Lifecycle Event)
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0190f679-741a-40fc-a156-5a74734657d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.641 253665 DEBUG nova.compute.manager [None req-174428f5-ce50-4c6d-a56b-ec0a9738dec0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.718 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d8546b-e5ab-450a-b4e4-eed5761797b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.721 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.722 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.722 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:55 compute-0 NetworkManager[48920]: <info>  [1763802655.7257] manager: (tap5e2cd359-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 22 09:10:55 compute-0 kernel: tap5e2cd359-c0: entered promiscuous mode
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.729 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:10:55 compute-0 ovn_controller[152872]: 2025-11-22T09:10:55Z|00109|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.753 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.755 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17d980c8-458d-49b0-b51a-777beb0dc990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.756 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:10:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.757 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'env', 'PROCESS_TAG=haproxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e2cd359-c68f-4256-90e8-0ad40aff8a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:10:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3367886030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.948 253665 DEBUG nova.compute.manager [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.949 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.949 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.950 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:55 compute-0 nova_compute[253661]: 2025-11-22 09:10:55.950 253665 DEBUG nova.compute.manager [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Processing event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:10:56 compute-0 podman[285832]: 2025-11-22 09:10:56.231898279 +0000 UTC m=+0.088887318 container create 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:10:56 compute-0 podman[285832]: 2025-11-22 09:10:56.176603827 +0000 UTC m=+0.033592896 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:10:56 compute-0 systemd[1]: Started libpod-conmon-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope.
Nov 22 09:10:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7679634f901691bbb7933c31bc9e7b94437f81ed8d9c677c4422e9a5cf6ef0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.330 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.3298676, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.332 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Started (Lifecycle Event)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.334 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.338 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:10:56 compute-0 podman[285832]: 2025-11-22 09:10:56.338792594 +0000 UTC m=+0.195781653 container init 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.342 253665 INFO nova.virt.libvirt.driver [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance spawned successfully.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.343 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:10:56 compute-0 podman[285832]: 2025-11-22 09:10:56.345041055 +0000 UTC m=+0.202030094 container start 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.355 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.373 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.374 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.374 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : New worker (285877) forked
Nov 22 09:10:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : Loading success.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.376 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.376 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.377 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.382 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.330079, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Paused (Lifecycle Event)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.418 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.3382874, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.419 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Resumed (Lifecycle Event)
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.438 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.443 253665 INFO nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 7.88 seconds to spawn the instance on the hypervisor.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.444 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.445 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.476 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.516 253665 INFO nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 8.91 seconds to build instance.
Nov 22 09:10:56 compute-0 nova_compute[253661]: 2025-11-22 09:10:56.534 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:56 compute-0 ceph-mon[75021]: pgmap v1345: 305 pgs: 305 active+clean; 404 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 09:10:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 372 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 202 op/s
Nov 22 09:10:57 compute-0 nova_compute[253661]: 2025-11-22 09:10:57.551 253665 INFO nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Rebuilding instance
Nov 22 09:10:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:10:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6018 writes, 27K keys, 6018 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6018 writes, 6018 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1512 writes, 6796 keys, 1512 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s
                                           Interval WAL: 1512 writes, 1512 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     54.6      0.53              0.10        15    0.036       0      0       0.0       0.0
                                             L6      1/0    7.22 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     77.1     63.2      1.59              0.31        14    0.113     65K   7730       0.0       0.0
                                            Sum      1/0    7.22 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     57.7     61.1      2.12              0.41        29    0.073     65K   7730       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     40.2     40.6      0.96              0.13         8    0.120     21K   2569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     77.1     63.2      1.59              0.31        14    0.113     65K   7730       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.8      0.53              0.10        14    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 2.1 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 12.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000256 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(848,12.46 MB,4.09741%) FilterBlock(30,188.23 KB,0.060468%) IndexBlock(30,341.61 KB,0.109738%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:10:57 compute-0 ovn_controller[152872]: 2025-11-22T09:10:57Z|00110|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:57 compute-0 ovn_controller[152872]: 2025-11-22T09:10:57Z|00111|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:10:57 compute-0 nova_compute[253661]: 2025-11-22 09:10:57.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:57 compute-0 nova_compute[253661]: 2025-11-22 09:10:57.824 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:57 compute-0 ovn_controller[152872]: 2025-11-22T09:10:57Z|00112|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:57 compute-0 ovn_controller[152872]: 2025-11-22T09:10:57Z|00113|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:10:57 compute-0 nova_compute[253661]: 2025-11-22 09:10:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.018 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.103 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.114 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.124 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.135 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.143 253665 DEBUG nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.143 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.145 253665 WARNING nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with vm_state active and task_state None.
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.146 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:10:58 compute-0 nova_compute[253661]: 2025-11-22 09:10:58.150 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:10:58 compute-0 ceph-mon[75021]: pgmap v1346: 305 pgs: 305 active+clean; 372 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 202 op/s
Nov 22 09:10:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 22 09:10:59 compute-0 nova_compute[253661]: 2025-11-22 09:10:59.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 NetworkManager[48920]: <info>  [1763802659.1995] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 22 09:10:59 compute-0 NetworkManager[48920]: <info>  [1763802659.2014] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 22 09:10:59 compute-0 nova_compute[253661]: 2025-11-22 09:10:59.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 nova_compute[253661]: 2025-11-22 09:10:59.340 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 ovn_controller[152872]: 2025-11-22T09:10:59Z|00114|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:10:59 compute-0 ovn_controller[152872]: 2025-11-22T09:10:59Z|00115|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:10:59 compute-0 nova_compute[253661]: 2025-11-22 09:10:59.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 nova_compute[253661]: 2025-11-22 09:10:59.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.275 253665 DEBUG nova.compute.manager [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG nova.compute.manager [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.277 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:11:00 compute-0 ovn_controller[152872]: 2025-11-22T09:11:00Z|00116|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:11:00 compute-0 ovn_controller[152872]: 2025-11-22T09:11:00Z|00117|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:00 compute-0 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 09:11:00 compute-0 NetworkManager[48920]: <info>  [1763802660.9202] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:00 compute-0 ovn_controller[152872]: 2025-11-22T09:11:00Z|00118|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 09:11:00 compute-0 ovn_controller[152872]: 2025-11-22T09:11:00Z|00119|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 09:11:00 compute-0 ovn_controller[152872]: 2025-11-22T09:11:00Z|00120|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.943 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.958 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.960 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:11:00 compute-0 nova_compute[253661]: 2025-11-22 09:11:00.960 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.963 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:11:00 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 09:11:00 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 16.096s CPU time.
Nov 22 09:11:00 compute-0 systemd-machined[215941]: Machine qemu-21-instance-00000012 terminated.
Nov 22 09:11:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6eaa97-7f02-4ead-af8e-7f9e1468f0c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ceph-mon[75021]: pgmap v1347: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.036 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[89b42a54-f3ad-46a9-9766-e8a79901232d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.040 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd422c9-2e99-4c30-ac8b-43123555de04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.078 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbbe956-d09d-4078-bb56-1d0c8ffc65ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.099 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f446147-cde0-483d-b5b5-c597f4f6a141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285899, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.118 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea95a089-81bf-4b80-bb19-21a08866c2a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285900, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285900, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.120 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.123 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.129 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.170 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance shutdown successfully after 3 seconds.
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.177 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.184 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.186 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:56Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.186 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.190 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.192 253665 DEBUG os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.194 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.202 253665 INFO os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.732 253665 DEBUG nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.735 253665 DEBUG nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.735 253665 WARNING nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state error and task_state rebuilding.
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.973 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del
Nov 22 09:11:01 compute-0 nova_compute[253661]: 2025-11-22 09:11:01.974 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.224 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.225 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.250 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.282 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.308 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.312 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.382 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.384 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.385 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.386 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.418 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.424 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029728019999023247 of space, bias 1.0, pg target 0.8918405999706974 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:11:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.870 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:11:02 compute-0 nova_compute[253661]: 2025-11-22 09:11:02.873 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:03 compute-0 ceph-mon[75021]: pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 22 09:11:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.201 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.617 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.700 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.846 253665 DEBUG nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.846 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 WARNING nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state error and task_state rebuild_spawning.
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.855 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.856 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.856 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.857 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.857 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.859 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.863 253665 WARNING nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.870 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.871 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.877 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.882 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:03 compute-0 nova_compute[253661]: 2025-11-22 09:11:03.896 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:04 compute-0 ceph-mon[75021]: pgmap v1349: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 22 09:11:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:11:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170946742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.363 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.391 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.397 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:04 compute-0 ovn_controller[152872]: 2025-11-22T09:11:04Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 09:11:04 compute-0 ovn_controller[152872]: 2025-11-22T09:11:04Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:11:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049188024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.849 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.852 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTes
tJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.853 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.854 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.858 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <name>instance-00000012</name>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:11:03</nova:creationTime>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 09:11:04 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <system>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </system>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <os>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </os>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <features>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </features>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:11:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <target dev="tap716b716d-2e"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <video>
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </video>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:11:04 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:11:04 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:11:04 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:11:04 compute-0 nova_compute[253661]: </domain>
Nov 22 09:11:04 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.867 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.867 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.868 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.868 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.870 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTes
tJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.870 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.871 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.872 253665 DEBUG os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.874 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.875 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:04 compute-0 NetworkManager[48920]: <info>  [1763802664.8853] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.895 253665 INFO os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.955 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.955 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.956 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.956 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.976 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:04 compute-0 nova_compute[253661]: 2025-11-22 09:11:04.992 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:05 compute-0 nova_compute[253661]: 2025-11-22 09:11:05.020 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 335 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 229 op/s
Nov 22 09:11:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4170946742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1049188024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:06 compute-0 nova_compute[253661]: 2025-11-22 09:11:06.325 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config
Nov 22 09:11:06 compute-0 nova_compute[253661]: 2025-11-22 09:11:06.332 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfym0nge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:06 compute-0 nova_compute[253661]: 2025-11-22 09:11:06.467 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfym0nge" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:06 compute-0 ceph-mon[75021]: pgmap v1350: 305 pgs: 305 active+clean; 335 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 229 op/s
Nov 22 09:11:06 compute-0 nova_compute[253661]: 2025-11-22 09:11:06.604 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:06 compute-0 nova_compute[253661]: 2025-11-22 09:11:06.609 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 331 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 09:11:07 compute-0 nova_compute[253661]: 2025-11-22 09:11:07.951 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802652.9507298, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:07 compute-0 nova_compute[253661]: 2025-11-22 09:11:07.952 253665 INFO nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Stopped (Lifecycle Event)
Nov 22 09:11:07 compute-0 nova_compute[253661]: 2025-11-22 09:11:07.969 253665 DEBUG nova.compute.manager [None req-56aba9da-28f3-4415-b17d-bc49fc5684f8 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.393 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.784s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.394 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.
Nov 22 09:11:08 compute-0 NetworkManager[48920]: <info>  [1763802668.4705] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 22 09:11:08 compute-0 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 09:11:08 compute-0 ovn_controller[152872]: 2025-11-22T09:11:08Z|00121|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 09:11:08 compute-0 ovn_controller[152872]: 2025-11-22T09:11:08Z|00122|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.486 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.487 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.489 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:08 compute-0 ovn_controller[152872]: 2025-11-22T09:11:08Z|00123|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 09:11:08 compute-0 ovn_controller[152872]: 2025-11-22T09:11:08Z|00124|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea65e18a-f3bc-42da-91bb-3548a9104fdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 systemd-machined[215941]: New machine qemu-28-instance-00000012.
Nov 22 09:11:08 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000012.
Nov 22 09:11:08 compute-0 systemd-udevd[286239]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.546 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[964aad8e-ead5-4223-a83f-35d749377895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 NetworkManager[48920]: <info>  [1763802668.5547] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:11:08 compute-0 NetworkManager[48920]: <info>  [1763802668.5557] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.553 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84a9e1d2-87cb-40df-9e70-7120dc26d39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[edd2f1d2-5854-4857-b712-117a5310d904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c7ffa4-a7f9-4bd6-bc09-c27a413f7a11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286249, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa51c023-74a9-4db2-a3ce-35253c75b43c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286251, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286251, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.643 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.648 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.649 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.649 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.650 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:08 compute-0 ceph-mon[75021]: pgmap v1351: 305 pgs: 305 active+clean; 331 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG nova.compute.manager [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.915 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:08 compute-0 nova_compute[253661]: 2025-11-22 09:11:08.915 253665 DEBUG nova.compute.manager [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:11:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Nov 22 09:11:09 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 09:11:09 compute-0 nova_compute[253661]: 2025-11-22 09:11:09.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:09 compute-0 kernel: hrtimer: interrupt took 58050732 ns
Nov 22 09:11:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:09 compute-0 nova_compute[253661]: 2025-11-22 09:11:09.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.265 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3ae08a2f-348c-406b-8ffc-9acb8a542e1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.267 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.2647564, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.270 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.275 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.283 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.284 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.302 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.311 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.315 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.315 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.316 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.316 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.317 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.317 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.350 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.265166, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.351 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.377 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.386 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.274439, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.386 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.404 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.406 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.444 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.619 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.619 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.620 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.668 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.988 253665 DEBUG nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.990 253665 DEBUG nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:10 compute-0 nova_compute[253661]: 2025-11-22 09:11:10.990 253665 WARNING nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.
Nov 22 09:11:11 compute-0 ceph-mon[75021]: pgmap v1352: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Nov 22 09:11:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 22 09:11:11 compute-0 nova_compute[253661]: 2025-11-22 09:11:11.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:11 compute-0 ovn_controller[152872]: 2025-11-22T09:11:11Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:85:74 10.100.0.11
Nov 22 09:11:11 compute-0 ovn_controller[152872]: 2025-11-22T09:11:11Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:85:74 10.100.0.11
Nov 22 09:11:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:11:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:11:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:11:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:11:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 375 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 183 op/s
Nov 22 09:11:13 compute-0 ceph-mon[75021]: pgmap v1353: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 22 09:11:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:11:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.431 253665 INFO nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Rebuilding instance
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.628 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.642 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:13 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.790 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.801 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.813 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.823 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.831 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:11:13 compute-0 nova_compute[253661]: 2025-11-22 09:11:13.836 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:11:14 compute-0 nova_compute[253661]: 2025-11-22 09:11:14.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:14 compute-0 ceph-mon[75021]: pgmap v1354: 305 pgs: 305 active+clean; 375 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 183 op/s
Nov 22 09:11:14 compute-0 nova_compute[253661]: 2025-11-22 09:11:14.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:14 compute-0 nova_compute[253661]: 2025-11-22 09:11:14.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 393 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 231 op/s
Nov 22 09:11:15 compute-0 nova_compute[253661]: 2025-11-22 09:11:15.233 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:15 compute-0 podman[286295]: 2025-11-22 09:11:15.403474621 +0000 UTC m=+0.089439837 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:11:15 compute-0 podman[286296]: 2025-11-22 09:11:15.425989464 +0000 UTC m=+0.099595722 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:11:16 compute-0 ceph-mon[75021]: pgmap v1355: 305 pgs: 305 active+clean; 393 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 231 op/s
Nov 22 09:11:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 397 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 201 op/s
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:11:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506221972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.732 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.830 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.831 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.836 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.836 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.839 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.840 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.866 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:17 compute-0 nova_compute[253661]: 2025-11-22 09:11:17.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.085 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.086 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.786197662353516GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.154 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance de145d76-062b-4362-bc82-09e09d2f9154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.156 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.156 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:11:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2506221972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.267 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:11:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24817561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.798 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.805 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.822 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.901 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:11:18 compute-0 nova_compute[253661]: 2025-11-22 09:11:18.902 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Nov 22 09:11:19 compute-0 ceph-mon[75021]: pgmap v1356: 305 pgs: 305 active+clean; 397 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 201 op/s
Nov 22 09:11:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/24817561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:19 compute-0 podman[286380]: 2025-11-22 09:11:19.439683773 +0000 UTC m=+0.123443267 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.896 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.897 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.897 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.898 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.923 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.924 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.924 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.925 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:19 compute-0 nova_compute[253661]: 2025-11-22 09:11:19.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:20 compute-0 nova_compute[253661]: 2025-11-22 09:11:20.188 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:20 compute-0 ceph-mon[75021]: pgmap v1357: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Nov 22 09:11:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.309 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.374 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.374 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:22 compute-0 nova_compute[253661]: 2025-11-22 09:11:22.376 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:11:22 compute-0 ceph-mon[75021]: pgmap v1358: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:22 compute-0 sudo[286407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:22 compute-0 sudo[286407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:22 compute-0 sudo[286407]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:22 compute-0 sudo[286432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:11:22 compute-0 sudo[286432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:22 compute-0 sudo[286432]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:22 compute-0 sudo[286457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:22 compute-0 sudo[286457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:22 compute-0 sudo[286457]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:22 compute-0 sudo[286482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:11:22 compute-0 sudo[286482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 405 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 22 09:11:23 compute-0 sudo[286482]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:11:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:11:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:11:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:11:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:11:23 compute-0 nova_compute[253661]: 2025-11-22 09:11:23.891 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:11:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 17dc58d1-5a04-4007-a138-4991c36eb930 does not exist
Nov 22 09:11:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b696cd8d-142f-4654-84de-d4472afcef8b does not exist
Nov 22 09:11:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4211af3b-b854-499f-a4f0-4dd266d62d2b does not exist
Nov 22 09:11:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:11:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:11:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:11:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:11:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:11:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:11:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:11:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:11:24 compute-0 sudo[286539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:24 compute-0 sudo[286539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:24 compute-0 sudo[286539]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:24 compute-0 sudo[286564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:11:24 compute-0 sudo[286564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:24 compute-0 sudo[286564]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:24 compute-0 sudo[286589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:24 compute-0 sudo[286589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:24 compute-0 sudo[286589]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.284 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.284 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.305 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:11:24 compute-0 sudo[286614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:11:24 compute-0 sudo[286614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.378 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.380 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.387 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.387 253665 INFO nova.compute.claims [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.579 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:24 compute-0 podman[286680]: 2025-11-22 09:11:24.727766404 +0000 UTC m=+0.033384806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:24 compute-0 nova_compute[253661]: 2025-11-22 09:11:24.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:11:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833411026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.046 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.057 253665 DEBUG nova.compute.provider_tree [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.073 253665 DEBUG nova.scheduler.client.report [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:11:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.094 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.095 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.142 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.142 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.161 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:11:25 compute-0 podman[286680]: 2025-11-22 09:11:25.171711396 +0000 UTC m=+0.477329778 container create fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:11:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 414 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.178 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:11:25 compute-0 ceph-mon[75021]: pgmap v1359: 305 pgs: 305 active+clean; 405 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 22 09:11:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:11:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:11:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:11:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/833411026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.270 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.272 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.273 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating image(s)
Nov 22 09:11:25 compute-0 systemd[1]: Started libpod-conmon-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope.
Nov 22 09:11:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.577 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.600 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.621 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.625 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.654 253665 DEBUG nova.policy [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.694 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.694 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.695 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.695 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.718 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:25 compute-0 nova_compute[253661]: 2025-11-22 09:11:25.722 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:25 compute-0 podman[286680]: 2025-11-22 09:11:25.739576805 +0000 UTC m=+1.045195197 container init fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:11:25 compute-0 podman[286680]: 2025-11-22 09:11:25.754608108 +0000 UTC m=+1.060226480 container start fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:11:25 compute-0 systemd[1]: libpod-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope: Deactivated successfully.
Nov 22 09:11:25 compute-0 objective_ganguly[286722]: 167 167
Nov 22 09:11:25 compute-0 conmon[286722]: conmon fe5ee609ddaaa1b492ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope/container/memory.events
Nov 22 09:11:26 compute-0 podman[286680]: 2025-11-22 09:11:26.087054732 +0000 UTC m=+1.392673114 container attach fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:11:26 compute-0 podman[286680]: 2025-11-22 09:11:26.088786164 +0000 UTC m=+1.394404596 container died fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:11:26 compute-0 nova_compute[253661]: 2025-11-22 09:11:26.477 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Successfully created port: 816016d3-f417-4c33-8f24-8e6360d6fa39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:11:26 compute-0 nova_compute[253661]: 2025-11-22 09:11:26.698 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd370b3588e41fce04bf3fe28717c0a3f5dfd6f80825a7ffa8cd8ede8ab67ddf-merged.mount: Deactivated successfully.
Nov 22 09:11:27 compute-0 ceph-mon[75021]: pgmap v1360: 305 pgs: 305 active+clean; 414 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 22 09:11:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 419 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Nov 22 09:11:27 compute-0 podman[286680]: 2025-11-22 09:11:27.712613239 +0000 UTC m=+3.018231621 container remove fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:11:27 compute-0 systemd[1]: libpod-conmon-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope: Deactivated successfully.
Nov 22 09:11:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.955 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:28 compute-0 podman[286838]: 2025-11-22 09:11:27.934566961 +0000 UTC m=+0.027321780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.206 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Successfully updated port: 816016d3-f417-4c33-8f24-8e6360d6fa39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.228 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.228 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.229 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:11:28 compute-0 podman[286838]: 2025-11-22 09:11:28.254265727 +0000 UTC m=+0.347020466 container create 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.391 253665 DEBUG nova.compute.manager [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-changed-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.392 253665 DEBUG nova.compute.manager [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Refreshing instance network info cache due to event network-changed-816016d3-f417-4c33-8f24-8e6360d6fa39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.393 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:28 compute-0 ceph-mon[75021]: pgmap v1361: 305 pgs: 305 active+clean; 419 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Nov 22 09:11:28 compute-0 systemd[1]: Started libpod-conmon-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope.
Nov 22 09:11:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:28 compute-0 nova_compute[253661]: 2025-11-22 09:11:28.716 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:11:29 compute-0 podman[286838]: 2025-11-22 09:11:29.009471347 +0000 UTC m=+1.102226106 container init 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:11:29 compute-0 podman[286838]: 2025-11-22 09:11:29.018045213 +0000 UTC m=+1.110799952 container start 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:11:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Nov 22 09:11:29 compute-0 nova_compute[253661]: 2025-11-22 09:11:29.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:29 compute-0 podman[286838]: 2025-11-22 09:11:29.868824583 +0000 UTC m=+1.961579352 container attach 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:11:29 compute-0 nova_compute[253661]: 2025-11-22 09:11:29.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:30 compute-0 friendly_ramanujan[286855]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:11:30 compute-0 friendly_ramanujan[286855]: --> relative data size: 1.0
Nov 22 09:11:30 compute-0 friendly_ramanujan[286855]: --> All data devices are unavailable
Nov 22 09:11:30 compute-0 systemd[1]: libpod-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Deactivated successfully.
Nov 22 09:11:30 compute-0 systemd[1]: libpod-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Consumed 1.078s CPU time.
Nov 22 09:11:30 compute-0 ovn_controller[152872]: 2025-11-22T09:11:30Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:11:30 compute-0 ovn_controller[152872]: 2025-11-22T09:11:30Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:11:30 compute-0 podman[286884]: 2025-11-22 09:11:30.64980431 +0000 UTC m=+0.431674227 container died 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:11:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Nov 22 09:11:31 compute-0 nova_compute[253661]: 2025-11-22 09:11:31.210 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:31 compute-0 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:31 compute-0 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance network_info: |[{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:11:31 compute-0 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:31 compute-0 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Refreshing network info cache for port 816016d3-f417-4c33-8f24-8e6360d6fa39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:11:31 compute-0 ceph-mon[75021]: pgmap v1362: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Nov 22 09:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29-merged.mount: Deactivated successfully.
Nov 22 09:11:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:32.558 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:32 compute-0 nova_compute[253661]: 2025-11-22 09:11:32.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:32.559 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:11:32 compute-0 nova_compute[253661]: 2025-11-22 09:11:32.965 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updated VIF entry in instance network info cache for port 816016d3-f417-4c33-8f24-8e6360d6fa39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:11:32 compute-0 nova_compute[253661]: 2025-11-22 09:11:32.966 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:32 compute-0 nova_compute[253661]: 2025-11-22 09:11:32.983 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 437 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 MiB/s wr, 35 op/s
Nov 22 09:11:33 compute-0 ceph-mon[75021]: pgmap v1363: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Nov 22 09:11:33 compute-0 podman[286884]: 2025-11-22 09:11:33.655951861 +0000 UTC m=+3.437821708 container remove 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:11:33 compute-0 systemd[1]: libpod-conmon-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Deactivated successfully.
Nov 22 09:11:33 compute-0 sudo[286614]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:33 compute-0 sudo[286899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:33 compute-0 sudo[286899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:33 compute-0 sudo[286899]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:33 compute-0 sudo[286924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:11:33 compute-0 sudo[286924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:33 compute-0 sudo[286924]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:33 compute-0 sudo[286949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:33 compute-0 sudo[286949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:33 compute-0 sudo[286949]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:34 compute-0 sudo[286974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:11:34 compute-0 sudo[286974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:34 compute-0 podman[287039]: 2025-11-22 09:11:34.390252443 +0000 UTC m=+0.027048774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:34 compute-0 podman[287039]: 2025-11-22 09:11:34.614697053 +0000 UTC m=+0.251493354 container create 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:11:34 compute-0 nova_compute[253661]: 2025-11-22 09:11:34.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:34 compute-0 nova_compute[253661]: 2025-11-22 09:11:34.645 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:34 compute-0 nova_compute[253661]: 2025-11-22 09:11:34.874 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:11:34 compute-0 ceph-mon[75021]: pgmap v1364: 305 pgs: 305 active+clean; 437 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 MiB/s wr, 35 op/s
Nov 22 09:11:34 compute-0 systemd[1]: Started libpod-conmon-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope.
Nov 22 09:11:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.006 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:11:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 468 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 22 09:11:35 compute-0 podman[287039]: 2025-11-22 09:11:35.220130099 +0000 UTC m=+0.856926420 container init 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:11:35 compute-0 podman[287039]: 2025-11-22 09:11:35.232955679 +0000 UTC m=+0.869751960 container start 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:11:35 compute-0 happy_hugle[287109]: 167 167
Nov 22 09:11:35 compute-0 systemd[1]: libpod-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope: Deactivated successfully.
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.651 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.651 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.652 253665 DEBUG nova.objects.instance [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:35 compute-0 podman[287039]: 2025-11-22 09:11:35.703862711 +0000 UTC m=+1.340658982 container attach 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:11:35 compute-0 podman[287039]: 2025-11-22 09:11:35.705291935 +0000 UTC m=+1.342088216 container died 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.974 253665 DEBUG nova.objects.instance [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:35 compute-0 nova_compute[253661]: 2025-11-22 09:11:35.985 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:11:36 compute-0 nova_compute[253661]: 2025-11-22 09:11:36.134 253665 DEBUG nova.policy [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0ef816e35fd18265e7e74611866b9dd5a81c3e3c27ed517c6bf8d6cf2a798e-merged.mount: Deactivated successfully.
Nov 22 09:11:36 compute-0 ceph-mon[75021]: pgmap v1365: 305 pgs: 305 active+clean; 468 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 22 09:11:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Nov 22 09:11:37 compute-0 podman[287039]: 2025-11-22 09:11:37.342403472 +0000 UTC m=+2.979199753 container remove 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:11:37 compute-0 systemd[1]: libpod-conmon-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope: Deactivated successfully.
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.464 253665 DEBUG nova.objects.instance [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.478 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.479 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Ensure instance console log exists: /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.479 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.480 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.480 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.482 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start _get_guest_xml network_info=[{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.487 253665 WARNING nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.497 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.498 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.503 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.503 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.504 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.504 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.505 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.505 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.508 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.512 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:37.561 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:37 compute-0 podman[287150]: 2025-11-22 09:11:37.561045573 +0000 UTC m=+0.033011476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:37 compute-0 podman[287150]: 2025-11-22 09:11:37.781654861 +0000 UTC m=+0.253620755 container create d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:11:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:11:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314115248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:37 compute-0 nova_compute[253661]: 2025-11-22 09:11:37.993 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:38 compute-0 systemd[1]: Started libpod-conmon-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope.
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.036 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.044 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.092 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully created port: fdaaf015-c32e-4960-a33a-2767bf447b71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:11:38 compute-0 podman[287150]: 2025-11-22 09:11:38.252705947 +0000 UTC m=+0.724671860 container init d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:11:38 compute-0 podman[287150]: 2025-11-22 09:11:38.267122075 +0000 UTC m=+0.739087958 container start d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:11:38 compute-0 podman[287150]: 2025-11-22 09:11:38.462002053 +0000 UTC m=+0.933967976 container attach d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:11:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:11:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503735232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.528 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.530 253665 DEBUG nova.virt.libvirt.vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:25Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.530 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.531 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.532 253665 DEBUG nova.objects.instance [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.548 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <uuid>b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</uuid>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <name>instance-00000019</name>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-1474933117</nova:name>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:11:37</nova:creationTime>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <nova:port uuid="816016d3-f417-4c33-8f24-8e6360d6fa39">
Nov 22 09:11:38 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <system>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="serial">b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="uuid">b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </system>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <os>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </os>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <features>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </features>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk">
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config">
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:11:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:78:74:61"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <target dev="tap816016d3-f4"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/console.log" append="off"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <video>
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </video>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:11:38 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:11:38 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:11:38 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:11:38 compute-0 nova_compute[253661]: </domain>
Nov 22 09:11:38 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.550 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Preparing to wait for external event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.552 253665 DEBUG nova.virt.libvirt.vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:25Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.553 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.553 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.554 253665 DEBUG os_vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.556 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.556 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.561 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap816016d3-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.561 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap816016d3-f4, col_values=(('external_ids', {'iface-id': '816016d3-f417-4c33-8f24-8e6360d6fa39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:74:61', 'vm-uuid': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:38 compute-0 NetworkManager[48920]: <info>  [1763802698.5649] manager: (tap816016d3-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.574 253665 INFO os_vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4')
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.825 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.826 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.826 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:78:74:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.827 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Using config drive
Nov 22 09:11:38 compute-0 nova_compute[253661]: 2025-11-22 09:11:38.850 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.094 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully updated port: fdaaf015-c32e-4960-a33a-2767bf447b71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.156 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.157 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.157 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:11:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.196 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating config drive at /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.202 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6sgvmsy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:39 compute-0 happy_meninsky[287202]: {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     "0": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "devices": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "/dev/loop3"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             ],
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_name": "ceph_lv0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_size": "21470642176",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "name": "ceph_lv0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "tags": {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_name": "ceph",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.crush_device_class": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.encrypted": "0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_id": "0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.vdo": "0"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             },
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "vg_name": "ceph_vg0"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         }
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     ],
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     "1": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "devices": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "/dev/loop4"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             ],
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_name": "ceph_lv1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_size": "21470642176",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "name": "ceph_lv1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "tags": {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_name": "ceph",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.crush_device_class": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.encrypted": "0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_id": "1",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.vdo": "0"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             },
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "vg_name": "ceph_vg1"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         }
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     ],
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     "2": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "devices": [
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "/dev/loop5"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             ],
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_name": "ceph_lv2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_size": "21470642176",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "name": "ceph_lv2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "tags": {
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.cluster_name": "ceph",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.crush_device_class": "",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.encrypted": "0",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osd_id": "2",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:                 "ceph.vdo": "0"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             },
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "type": "block",
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:             "vg_name": "ceph_vg2"
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:         }
Nov 22 09:11:39 compute-0 happy_meninsky[287202]:     ]
Nov 22 09:11:39 compute-0 happy_meninsky[287202]: }
Nov 22 09:11:39 compute-0 ceph-mon[75021]: pgmap v1366: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Nov 22 09:11:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1314115248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2503735232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.294 253665 WARNING nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.340 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6sgvmsy" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:39 compute-0 systemd[1]: libpod-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope: Deactivated successfully.
Nov 22 09:11:39 compute-0 podman[287150]: 2025-11-22 09:11:39.344920857 +0000 UTC m=+1.816886740 container died d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.425 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.434 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:39 compute-0 nova_compute[253661]: 2025-11-22 09:11:39.637 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d-merged.mount: Deactivated successfully.
Nov 22 09:11:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:40 compute-0 podman[287150]: 2025-11-22 09:11:40.325533377 +0000 UTC m=+2.797499260 container remove d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:11:40 compute-0 systemd[1]: libpod-conmon-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope: Deactivated successfully.
Nov 22 09:11:40 compute-0 sudo[286974]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:40 compute-0 sudo[287313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:40 compute-0 sudo[287313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:40 compute-0 sudo[287313]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:40 compute-0 sudo[287338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:11:40 compute-0 sudo[287338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:40 compute-0 sudo[287338]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:40 compute-0 sudo[287363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:40 compute-0 sudo[287363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:40 compute-0 sudo[287363]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:40 compute-0 sudo[287388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:11:40 compute-0 sudo[287388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:40 compute-0 ceph-mon[75021]: pgmap v1367: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Nov 22 09:11:41 compute-0 nova_compute[253661]: 2025-11-22 09:11:41.138 253665 DEBUG nova.compute.manager [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:41 compute-0 nova_compute[253661]: 2025-11-22 09:11:41.138 253665 DEBUG nova.compute.manager [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-fdaaf015-c32e-4960-a33a-2767bf447b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:11:41 compute-0 nova_compute[253661]: 2025-11-22 09:11:41.139 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:41 compute-0 podman[287453]: 2025-11-22 09:11:41.063285592 +0000 UTC m=+0.042448315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 22 09:11:41 compute-0 podman[287453]: 2025-11-22 09:11:41.491062654 +0000 UTC m=+0.470225307 container create fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:11:41 compute-0 systemd[1]: Started libpod-conmon-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope.
Nov 22 09:11:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:41 compute-0 podman[287453]: 2025-11-22 09:11:41.92352285 +0000 UTC m=+0.902685513 container init fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:11:41 compute-0 podman[287453]: 2025-11-22 09:11:41.941064804 +0000 UTC m=+0.920227457 container start fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:11:41 compute-0 kind_diffie[287469]: 167 167
Nov 22 09:11:41 compute-0 systemd[1]: libpod-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope: Deactivated successfully.
Nov 22 09:11:42 compute-0 podman[287453]: 2025-11-22 09:11:42.02722389 +0000 UTC m=+1.006386543 container attach fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:11:42 compute-0 podman[287453]: 2025-11-22 09:11:42.027886267 +0000 UTC m=+1.007048920 container died fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.033 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.034 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deleting local config drive /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config because it was imported into RBD.
Nov 22 09:11:42 compute-0 kernel: tap816016d3-f4: entered promiscuous mode
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.1141] manager: (tap816016d3-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00125|binding|INFO|Claiming lport 816016d3-f417-4c33-8f24-8e6360d6fa39 for this chassis.
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00126|binding|INFO|816016d3-f417-4c33-8f24-8e6360d6fa39: Claiming fa:16:3e:78:74:61 10.100.0.9
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00127|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 ovn-installed in OVS
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.154 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00128|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 up in Southbound
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.159 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:74:61 10.100.0.9'], port_security=['fa:16:3e:78:74:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=816016d3-f417-4c33-8f24-8e6360d6fa39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 816016d3-f417-4c33-8f24-8e6360d6fa39 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.162 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:11:42 compute-0 systemd-udevd[287498]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:11:42 compute-0 systemd-machined[215941]: New machine qemu-29-instance-00000019.
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.185 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a71043cf-e128-4f7b-984c-a420bb461dce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.186 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.1869] device (tap816016d3-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.1887] device (tap816016d3-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d797192b-91be-4d5c-9299-78466e2f4091]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[315420f6-37d5-416d-99fa-8b1bd0d84b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.217 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8f954800-4ce9-4f5e-b6e5-e290d366d3e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.218 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.218 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port fdaaf015-c32e-4960-a33a-2767bf447b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.225 253665 DEBUG nova.virt.libvirt.vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.225 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.226 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.227 253665 DEBUG os_vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.228 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.228 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.237 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdaaf015-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.237 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdaaf015-c3, col_values=(('external_ids', {'iface-id': 'fdaaf015-c32e-4960-a33a-2767bf447b71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:7a:b5', 'vm-uuid': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.2402] manager: (tapfdaaf015-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08d839da-30fe-47e8-ad9e-85b97148c551]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.247 253665 INFO os_vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.248 253665 DEBUG nova.virt.libvirt.vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.248 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.249 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.254 253665 DEBUG nova.virt.libvirt.guest [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <target dev="tapfdaaf015-c3"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]: </interface>
Nov 22 09:11:42 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:11:42 compute-0 kernel: tapfdaaf015-c3: entered promiscuous mode
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.2695] manager: (tapfdaaf015-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00129|binding|INFO|Claiming lport fdaaf015-c32e-4960-a33a-2767bf447b71 for this chassis.
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00130|binding|INFO|fdaaf015-c32e-4960-a33a-2767bf447b71: Claiming fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.2830] device (tapfdaaf015-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.2842] device (tapfdaaf015-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.288 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bf28ac-3336-4793-9fae-f49c993bfcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00131|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 ovn-installed in OVS
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.295 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95806123-3de2-4079-b6b9-1028dbba31cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.2966] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 22 09:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b028fb0a2de50f97ec65d48c0710009be0753d31bd4cc49498aa0e21246449b1-merged.mount: Deactivated successfully.
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.338 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a6d636-8bd9-49c0-a282-417970e3fa42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.342 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b77d2a8-690d-40f9-b709-10f89f7e05e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00132|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 up in Southbound
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.346 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:7a:b5 10.100.0.14'], port_security=['fa:16:3e:f8:7a:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fdaaf015-c32e-4960-a33a-2767bf447b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.3765] device (tap2abeeeb2-20): carrier: link connected
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.384 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a44814-25f4-4144-95cd-5ed72740fc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[da9236ae-fd6c-4cf5-b422-3dd11850e1e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555311, 'reachable_time': 37064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287538, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b165fa52-3119-4756-866f-23f746e0af10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555311, 'tstamp': 555311}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287539, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.446 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b6415-3869-448d-be81-e24276d77ac4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555311, 'reachable_time': 37064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287540, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.458 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.458 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.459 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:14:85:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.459 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:f8:7a:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.490 253665 DEBUG nova.virt.libvirt.guest [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:42 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 09:11:42 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:11:42 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:42 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:11:42 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:11:42 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.492 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01bf9721-86d4-475e-bd8e-b5ad3d71b344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.519 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.549 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c400c04f-12c6-4e47-a14e-a20c7aac0cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.552 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 NetworkManager[48920]: <info>  [1763802702.5559] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 22 09:11:42 compute-0 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:42 compute-0 ovn_controller[152872]: 2025-11-22T09:11:42Z|00133|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 nova_compute[253661]: 2025-11-22 09:11:42.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.581 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.582 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f974aee6-1eca-4703-b29f-0e6edcd7e95a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.583 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.585 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:11:42 compute-0 podman[287453]: 2025-11-22 09:11:42.995591695 +0000 UTC m=+1.974754338 container remove fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:11:43 compute-0 systemd[1]: libpod-conmon-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope: Deactivated successfully.
Nov 22 09:11:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 482 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 09:11:43 compute-0 podman[287591]: 2025-11-22 09:11:43.090328149 +0000 UTC m=+0.027834691 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:11:43 compute-0 ceph-mon[75021]: pgmap v1368: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.397 253665 DEBUG nova.compute.manager [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.398 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG nova.compute.manager [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Processing event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.451 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4506342, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.452 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Started (Lifecycle Event)
Nov 22 09:11:43 compute-0 podman[287616]: 2025-11-22 09:11:43.37491928 +0000 UTC m=+0.181661231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.454 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.457 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.460 253665 INFO nova.virt.libvirt.driver [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance spawned successfully.
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.461 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.471 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:43 compute-0 podman[287591]: 2025-11-22 09:11:43.472657226 +0000 UTC m=+0.410163738 container create 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.482 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.486 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.487 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.488 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.488 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.489 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.489 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.518 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4518642, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Paused (Lifecycle Event)
Nov 22 09:11:43 compute-0 systemd[1]: Started libpod-conmon-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope.
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.547 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.553 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4565256, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Resumed (Lifecycle Event)
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.560 253665 INFO nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 18.29 seconds to spawn the instance on the hypervisor.
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.560 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f397d4018771e7d833e8521b0dd4b258c9b2b44865d862d4ba5ce5ea501120/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.575 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.595 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:11:43 compute-0 podman[287616]: 2025-11-22 09:11:43.624514697 +0000 UTC m=+0.431256618 container create 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.634 253665 INFO nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 19.28 seconds to build instance.
Nov 22 09:11:43 compute-0 nova_compute[253661]: 2025-11-22 09:11:43.653 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:43 compute-0 podman[287591]: 2025-11-22 09:11:43.66654909 +0000 UTC m=+0.604055632 container init 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:11:43 compute-0 podman[287591]: 2025-11-22 09:11:43.673949198 +0000 UTC m=+0.611455720 container start 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:11:43 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : New worker (287658) forked
Nov 22 09:11:43 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : Loading success.
Nov 22 09:11:43 compute-0 systemd[1]: Started libpod-conmon-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope.
Nov 22 09:11:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.866 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fdaaf015-c32e-4960-a33a-2767bf447b71 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.870 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02b51b2-695d-433d-be72-67ab685fcb6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:43 compute-0 podman[287616]: 2025-11-22 09:11:43.898507453 +0000 UTC m=+0.705249394 container init 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:11:43 compute-0 podman[287616]: 2025-11-22 09:11:43.908840851 +0000 UTC m=+0.715582772 container start 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[04057188-2e7e-483b-a802-5521e1521388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.932 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[36d066bf-bf3c-4d3d-9ebb-bc2f245282f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.964 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f94e8ef-e374-4eb4-8808-3e244ca60f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:43 compute-0 podman[287616]: 2025-11-22 09:11:43.9859264 +0000 UTC m=+0.792668481 container attach 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:11:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dfef8d-9123-4840-920e-bb2da22383fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287677, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.007 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9cf291-2428-4dd8-8c1b-a5ebad946b01]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550640, 'tstamp': 550640}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550644, 'tstamp': 550644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.012 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.237 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port fdaaf015-c32e-4960-a33a-2767bf447b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.238 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.251 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:44 compute-0 ceph-mon[75021]: pgmap v1369: 305 pgs: 305 active+clean; 482 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 09:11:44 compute-0 ovn_controller[152872]: 2025-11-22T09:11:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 09:11:44 compute-0 ovn_controller[152872]: 2025-11-22T09:11:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.904 253665 INFO nova.compute.manager [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Pausing
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.907 253665 DEBUG nova.objects.instance [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'flavor' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.936 253665 DEBUG nova.compute.manager [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.937 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802704.9355547, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.937 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Paused (Lifecycle Event)
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.959 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.962 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:11:44 compute-0 nova_compute[253661]: 2025-11-22 09:11:44.989 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 22 09:11:45 compute-0 hardcore_black[287667]: {
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_id": 1,
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "type": "bluestore"
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     },
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_id": 0,
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "type": "bluestore"
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     },
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_id": 2,
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:11:45 compute-0 hardcore_black[287667]:         "type": "bluestore"
Nov 22 09:11:45 compute-0 hardcore_black[287667]:     }
Nov 22 09:11:45 compute-0 hardcore_black[287667]: }
Nov 22 09:11:45 compute-0 systemd[1]: libpod-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Deactivated successfully.
Nov 22 09:11:45 compute-0 systemd[1]: libpod-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Consumed 1.107s CPU time.
Nov 22 09:11:45 compute-0 podman[287616]: 2025-11-22 09:11:45.054056489 +0000 UTC m=+1.860798410 container died 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:11:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 1.6 MiB/s wr, 70 op/s
Nov 22 09:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db-merged.mount: Deactivated successfully.
Nov 22 09:11:45 compute-0 podman[287616]: 2025-11-22 09:11:45.418457094 +0000 UTC m=+2.225199015 container remove 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:11:45 compute-0 sudo[287388]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:11:45 compute-0 systemd[1]: libpod-conmon-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Deactivated successfully.
Nov 22 09:11:45 compute-0 podman[287722]: 2025-11-22 09:11:45.65058449 +0000 UTC m=+0.075383489 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.669 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.669 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.687 253665 DEBUG nova.objects.instance [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:11:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 14e13e3a-f3b9-4e0b-989a-bb5d9f09feed does not exist
Nov 22 09:11:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev cab48b88-7b32-4145-8bdc-506f181e7aa1 does not exist
Nov 22 09:11:45 compute-0 podman[287720]: 2025-11-22 09:11:45.729470531 +0000 UTC m=+0.155517190 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.740 253665 DEBUG nova.virt.libvirt.vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.741 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.742 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.747 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.750 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.753 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.753 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:11:45 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 09:11:45 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:11:45 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:11:45 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:11:45 compute-0 nova_compute[253661]:   <target dev="tapfdaaf015-c3"/>
Nov 22 09:11:45 compute-0 nova_compute[253661]: </interface>
Nov 22 09:11:45 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:11:45 compute-0 sudo[287760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.799 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.800 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] No waiting events found dispatching network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received unexpected event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 for instance with vm_state paused and task_state None.
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state None.
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:45 compute-0 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state None.
Nov 22 09:11:45 compute-0 sudo[287760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:45 compute-0 sudo[287760]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:45 compute-0 sudo[287785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:11:45 compute-0 sudo[287785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:11:45 compute-0 sudo[287785]: pam_unix(sudo:session): session closed for user root
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00134|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00135|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00136|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.148 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.210 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.215 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface>not found in domain: <domain type='kvm' id='27'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <name>instance-00000018</name>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk' index='2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config' index='1'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:14:85:74'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='tapf70fa10f-f7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:f8:7a:b5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='tapfdaaf015-c3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </target>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </console>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c707,c812</label>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c812</imagelabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:11:46 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.215 253665 INFO nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the persistent domain config.
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.217 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tapfdaaf015-c3 with device alias net1 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.218 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <target dev="tapfdaaf015-c3"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </interface>
Nov 22 09:11:46 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:11:46 compute-0 kernel: tapfdaaf015-c3 (unregistering): left promiscuous mode
Nov 22 09:11:46 compute-0 NetworkManager[48920]: <info>  [1763802706.3321] device (tapfdaaf015-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00137|binding|INFO|Releasing lport fdaaf015-c32e-4960-a33a-2767bf447b71 from this chassis (sb_readonly=0)
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00138|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 down in Southbound
Nov 22 09:11:46 compute-0 ovn_controller[152872]: 2025-11-22T09:11:46Z|00139|binding|INFO|Removing iface tapfdaaf015-c3 ovn-installed in OVS
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.347 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:7a:b5 10.100.0.14'], port_security=['fa:16:3e:f8:7a:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fdaaf015-c32e-4960-a33a-2767bf447b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.348 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802706.347724, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.349 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fdaaf015-c32e-4960-a33a-2767bf447b71 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.351 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tapfdaaf015-c3 with device alias net1 for instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.351 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.353 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.359 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface>not found in domain: <domain type='kvm' id='27'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <name>instance-00000018</name>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk' index='2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config' index='1'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:14:85:74'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target dev='tapf70fa10f-f7'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       </target>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </console>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c707,c812</label>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c812</imagelabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:11:46 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.360 253665 INFO nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the live domain config.
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.361 253665 DEBUG nova.virt.libvirt.vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.362 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.363 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.363 253665 DEBUG os_vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.367 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.372 253665 INFO os_vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[651c326f-66a1-47b4-9322-9cac6bbac1cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.373 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:11:46</nova:creationTime>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:46 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:11:46 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:11:46 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c86b4a7-5b8b-4f75-9ff4-29bbb1115db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.419 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[74fba763-f144-49c8-a4fc-f64867ef8193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.454 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52909475-5c3c-4885-b2f5-72b159b34181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.476 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16c37001-23c4-4f88-a015-46316a921a73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287822, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.498 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c39a699c-7d1a-4491-8fea-fcfb936ff898]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550640, 'tstamp': 550640}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550644, 'tstamp': 550644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 nova_compute[253661]: 2025-11-22 09:11:46.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:46 compute-0 ceph-mon[75021]: pgmap v1370: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 1.6 MiB/s wr, 70 op/s
Nov 22 09:11:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:11:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 439 KiB/s wr, 60 op/s
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.226 253665 DEBUG nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.278 253665 INFO nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] instance snapshotting
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.278 253665 WARNING nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] trying to snapshot a non-running instance: (state: 3 expected: 1)
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.709 253665 INFO nova.virt.libvirt.driver [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Beginning live snapshot process
Nov 22 09:11:47 compute-0 nova_compute[253661]: 2025-11-22 09:11:47.859 253665 DEBUG nova.virt.libvirt.imagebackend [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.006 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.007 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 WARNING nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state deleting.
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-deleted-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.010 253665 INFO nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Neutron deleted interface fdaaf015-c32e-4960-a33a-2767bf447b71; detaching it from the instance and deleting it from the info cache
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.010 253665 DEBUG nova.network.neutron [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.037 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(67d49bc907464fa0893c1b629039f058) on rbd image(b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.106 253665 DEBUG nova.objects.instance [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.161 253665 DEBUG nova.objects.instance [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.166 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.167 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.167 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.168 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.168 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.169 253665 INFO nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Terminating instance
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.170 253665 DEBUG nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.204 253665 DEBUG nova.virt.libvirt.vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.205 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.206 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.564 253665 INFO nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Port fdaaf015-c32e-4960-a33a-2767bf447b71 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.565 253665 DEBUG nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.599 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:48 compute-0 kernel: tapf70fa10f-f7 (unregistering): left promiscuous mode
Nov 22 09:11:48 compute-0 NetworkManager[48920]: <info>  [1763802708.8059] device (tapf70fa10f-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:11:48 compute-0 ovn_controller[152872]: 2025-11-22T09:11:48Z|00140|binding|INFO|Releasing lport f70fa10f-f756-4faa-aebf-deeb0b129704 from this chassis (sb_readonly=0)
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:48 compute-0 ovn_controller[152872]: 2025-11-22T09:11:48Z|00141|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 down in Southbound
Nov 22 09:11:48 compute-0 ovn_controller[152872]: 2025-11-22T09:11:48Z|00142|binding|INFO|Removing iface tapf70fa10f-f7 ovn-installed in OVS
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.823 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:85:74 10.100.0.11'], port_security=['fa:16:3e:14:85:74 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '483aedc9-eae7-4cec-a714-9d623421c584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f70fa10f-f756-4faa-aebf-deeb0b129704) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:11:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.825 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f70fa10f-f756-4faa-aebf-deeb0b129704 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:11:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.827 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:11:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.828 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30f008d2-bd42-44e8-9bf9-3706a8c2364d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.829 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace which is not needed anymore
Nov 22 09:11:48 compute-0 nova_compute[253661]: 2025-11-22 09:11:48.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:48 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 22 09:11:48 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Consumed 16.328s CPU time.
Nov 22 09:11:48 compute-0 systemd-machined[215941]: Machine qemu-27-instance-00000018 terminated.
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : haproxy version is 2.8.14-c23fe91
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : path to executable is /usr/sbin/haproxy
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : Exiting Master process...
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : Exiting Master process...
Nov 22 09:11:48 compute-0 virtqemud[254229]: cannot parse process status data
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [ALERT]    (285875) : Current worker (285877) exited with code 143 (Terminated)
Nov 22 09:11:48 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : All workers exited. Exiting... (0)
Nov 22 09:11:48 compute-0 systemd[1]: libpod-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope: Deactivated successfully.
Nov 22 09:11:48 compute-0 podman[287896]: 2025-11-22 09:11:48.993577221 +0000 UTC m=+0.056626597 container died 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:11:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.023 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:11:49 compute-0 ceph-mon[75021]: pgmap v1371: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 439 KiB/s wr, 60 op/s
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.029 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface>not found in domain: <domain type='kvm'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <name>instance-00000018</name>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:10:53</nova:creationTime>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:49 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <system>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </system>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <os>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </os>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <features>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </features>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <cpu mode='host-model' check='partial'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:14:85:74'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target dev='tapf70fa10f-f7'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       </target>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <console type='pty'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </console>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </input>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <video>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </video>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:11:49 compute-0 nova_compute[253661]: </domain>
Nov 22 09:11:49 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.030 253665 WARNING nova.virt.libvirt.driver [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Detaching interface fa:16:3e:f8:7a:b5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfdaaf015-c3' not found.
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.031 253665 DEBUG nova.virt.libvirt.vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.031 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.033 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.033 253665 DEBUG os_vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.038 253665 INFO nova.virt.libvirt.driver [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance destroyed successfully.
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.038 253665 DEBUG nova.objects.instance [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.040 253665 INFO os_vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.042 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:11:49</nova:creationTime>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 09:11:49 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:11:49 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:11:49 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:11:49 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:11:49 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.053 253665 DEBUG nova.virt.libvirt.vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.054 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.055 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.055 253665 DEBUG os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:11:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.057 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf70fa10f-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f-userdata-shm.mount: Deactivated successfully.
Nov 22 09:11:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.066 253665 INFO os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7')
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.067 253665 DEBUG nova.virt.libvirt.vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.067 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.068 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.068 253665 DEBUG os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b7679634f901691bbb7933c31bc9e7b94437f81ed8d9c677c4422e9a5cf6ef0-merged.mount: Deactivated successfully.
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.074 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.076 253665 INFO os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')
Nov 22 09:11:49 compute-0 podman[287896]: 2025-11-22 09:11:49.09644774 +0000 UTC m=+0.159497116 container cleanup 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:11:49 compute-0 systemd[1]: libpod-conmon-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope: Deactivated successfully.
Nov 22 09:11:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 85 KiB/s wr, 108 op/s
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.198 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk@67d49bc907464fa0893c1b629039f058 to images/a229ff81-6736-4727-80db-88a96c174b36 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:11:49 compute-0 podman[287953]: 2025-11-22 09:11:49.206127564 +0000 UTC m=+0.070442299 container remove 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26ab7a40-fc22-48f5-a3db-a30c0892ee02]: (4, ('Sat Nov 22 09:11:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f)\n64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f\nSat Nov 22 09:11:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f)\n64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be658565-5851-4e31-9b95-3543c4aad205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.219 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:11:49 compute-0 kernel: tap5e2cd359-c0: left promiscuous mode
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.248 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd387de-1b95-4fb7-9187-c132540d5b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ef56db-efc0-44f4-a5f0-0b80a2aded30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6641a32a-9936-47b4-b3fe-95bbc9eb9978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd236ce-62f8-43af-a533-a6de87a84f4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550613, 'reachable_time': 39639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288001, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e2cd359\x2dc68f\x2d4256\x2d90e8\x2d0ad40aff8a00.mount: Deactivated successfully.
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.298 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:11:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.298 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[07477c66-2858-4941-8650-610c72b30ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.305 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.377 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/a229ff81-6736-4727-80db-88a96c174b36 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:49 compute-0 nova_compute[253661]: 2025-11-22 09:11:49.751 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(67d49bc907464fa0893c1b629039f058) on rbd image(b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.034 253665 INFO nova.virt.libvirt.driver [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deleting instance files /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_del
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.035 253665 INFO nova.virt.libvirt.driver [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deletion of /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_del complete
Nov 22 09:11:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 22 09:11:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 22 09:11:50 compute-0 ceph-mon[75021]: osdmap e184: 3 total, 3 up, 3 in
Nov 22 09:11:50 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 22 09:11:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.158 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(a229ff81-6736-4727-80db-88a96c174b36) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.218 253665 INFO nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 2.05 seconds to destroy the instance on the hypervisor.
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.220 253665 DEBUG oslo.service.loopingcall [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.220 253665 DEBUG nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:11:50 compute-0 nova_compute[253661]: 2025-11-22 09:11:50.221 253665 DEBUG nova.network.neutron [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:11:50 compute-0 podman[288060]: 2025-11-22 09:11:50.431187797 +0000 UTC m=+0.115606887 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.031 253665 DEBUG nova.network.neutron [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.067 253665 INFO nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 0.85 seconds to deallocate network for instance.
Nov 22 09:11:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 22 09:11:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.083 253665 DEBUG nova.compute.manager [req-eec92782-088b-40c7-b0ea-c64bbcfd1914 req-5b262c99-f220-4fcc-a909-0034a888e348 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-deleted-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 22 09:11:51 compute-0 ceph-mon[75021]: pgmap v1373: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 85 KiB/s wr, 108 op/s
Nov 22 09:11:51 compute-0 ceph-mon[75021]: osdmap e185: 3 total, 3 up, 3 in
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.145 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.145 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 141 op/s
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.303 253665 DEBUG oslo_concurrency.processutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.571 253665 DEBUG nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.573 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.573 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 DEBUG nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 WARNING nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with vm_state deleted and task_state None.
Nov 22 09:11:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:11:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482563714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.843 253665 DEBUG oslo_concurrency.processutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.853 253665 DEBUG nova.compute.provider_tree [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.869 253665 DEBUG nova.scheduler.client.report [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:11:51 compute-0 nova_compute[253661]: 2025-11-22 09:11:51.951 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:52 compute-0 nova_compute[253661]: 2025-11-22 09:11:52.049 253665 INFO nova.scheduler.client.report [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9
Nov 22 09:11:52 compute-0 nova_compute[253661]: 2025-11-22 09:11:52.157 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:11:52
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:11:52 compute-0 ceph-mon[75021]: osdmap e186: 3 total, 3 up, 3 in
Nov 22 09:11:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/482563714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:11:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 475 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.6 MiB/s wr, 161 op/s
Nov 22 09:11:53 compute-0 ceph-mon[75021]: pgmap v1376: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 141 op/s
Nov 22 09:11:53 compute-0 nova_compute[253661]: 2025-11-22 09:11:53.490 253665 INFO nova.virt.libvirt.driver [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Snapshot image upload complete
Nov 22 09:11:53 compute-0 nova_compute[253661]: 2025-11-22 09:11:53.491 253665 INFO nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 6.21 seconds to snapshot the instance on the hypervisor.
Nov 22 09:11:54 compute-0 nova_compute[253661]: 2025-11-22 09:11:54.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:54 compute-0 nova_compute[253661]: 2025-11-22 09:11:54.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:11:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:11:55 compute-0 ceph-mon[75021]: pgmap v1377: 305 pgs: 305 active+clean; 475 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.6 MiB/s wr, 161 op/s
Nov 22 09:11:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 158 op/s
Nov 22 09:11:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:11:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 22 09:11:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 22 09:11:56 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 22 09:11:56 compute-0 ceph-mon[75021]: pgmap v1378: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 158 op/s
Nov 22 09:11:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 164 op/s
Nov 22 09:11:57 compute-0 ceph-mon[75021]: osdmap e187: 3 total, 3 up, 3 in
Nov 22 09:11:57 compute-0 nova_compute[253661]: 2025-11-22 09:11:57.887 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:11:59 compute-0 nova_compute[253661]: 2025-11-22 09:11:59.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:59 compute-0 ceph-mon[75021]: pgmap v1380: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 164 op/s
Nov 22 09:11:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Nov 22 09:11:59 compute-0 nova_compute[253661]: 2025-11-22 09:11:59.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:59 compute-0 nova_compute[253661]: 2025-11-22 09:11:59.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 22 09:12:00 compute-0 ceph-mon[75021]: pgmap v1381: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Nov 22 09:12:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Nov 22 09:12:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 22 09:12:01 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 22 09:12:02 compute-0 ceph-mon[75021]: pgmap v1382: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Nov 22 09:12:02 compute-0 ceph-mon[75021]: osdmap e188: 3 total, 3 up, 3 in
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.371 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.371 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.408 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.485 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.486 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.493 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.494 253665 INFO nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Terminating instance
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.495 253665 DEBUG nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.497 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.498 253665 INFO nova.compute.claims [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033831528374682557 of space, bias 1.0, pg target 1.0149458512404768 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010120461149638602 of space, bias 1.0, pg target 0.3026017883741942 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:12:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.697 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:02 compute-0 kernel: tap816016d3-f4 (unregistering): left promiscuous mode
Nov 22 09:12:02 compute-0 NetworkManager[48920]: <info>  [1763802722.8750] device (tap816016d3-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:02 compute-0 ovn_controller[152872]: 2025-11-22T09:12:02Z|00143|binding|INFO|Releasing lport 816016d3-f417-4c33-8f24-8e6360d6fa39 from this chassis (sb_readonly=0)
Nov 22 09:12:02 compute-0 ovn_controller[152872]: 2025-11-22T09:12:02Z|00144|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 down in Southbound
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:02 compute-0 ovn_controller[152872]: 2025-11-22T09:12:02Z|00145|binding|INFO|Removing iface tap816016d3-f4 ovn-installed in OVS
Nov 22 09:12:02 compute-0 nova_compute[253661]: 2025-11-22 09:12:02.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:02 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 22 09:12:02 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 2.189s CPU time.
Nov 22 09:12:02 compute-0 systemd-machined[215941]: Machine qemu-29-instance-00000019 terminated.
Nov 22 09:12:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.002 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:74:61 10.100.0.9'], port_security=['fa:16:3e:78:74:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=816016d3-f417-4c33-8f24-8e6360d6fa39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.004 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 816016d3-f417-4c33-8f24-8e6360d6fa39 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:12:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.006 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:12:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8da572c-7dac-479d-b0a5-b9e99c00ded9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.009 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.149 253665 INFO nova.virt.libvirt.driver [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance destroyed successfully.
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.151 253665 DEBUG nova.objects.instance [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2924160443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.175 253665 DEBUG nova.virt.libvirt.vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:53Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.177 253665 DEBUG nova.network.os_vif_util [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.178 253665 DEBUG nova.network.os_vif_util [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.179 253665 DEBUG os_vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap816016d3-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.184 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.187 253665 INFO os_vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4')
Nov 22 09:12:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 437 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 43 op/s
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.205 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.212 253665 DEBUG nova.compute.provider_tree [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.226 253665 DEBUG nova.scheduler.client.report [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.395 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.396 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:12:03 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : haproxy version is 2.8.14-c23fe91
Nov 22 09:12:03 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : path to executable is /usr/sbin/haproxy
Nov 22 09:12:03 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [WARNING]  (287653) : Exiting Master process...
Nov 22 09:12:03 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [ALERT]    (287653) : Current worker (287658) exited with code 143 (Terminated)
Nov 22 09:12:03 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [WARNING]  (287653) : All workers exited. Exiting... (0)
Nov 22 09:12:03 compute-0 systemd[1]: libpod-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope: Deactivated successfully.
Nov 22 09:12:03 compute-0 podman[288154]: 2025-11-22 09:12:03.48295697 +0000 UTC m=+0.376095897 container died 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:12:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2924160443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.861 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.862 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.920 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:12:03 compute-0 nova_compute[253661]: 2025-11-22 09:12:03.951 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.022 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802709.017618, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.023 253665 INFO nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Stopped (Lifecycle Event)
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.048 253665 DEBUG nova.compute.manager [None req-803509fb-4545-4d75-aa14-620b5d6541f1 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f397d4018771e7d833e8521b0dd4b258c9b2b44865d862d4ba5ce5ea501120-merged.mount: Deactivated successfully.
Nov 22 09:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:12:04 compute-0 podman[288154]: 2025-11-22 09:12:04.363932298 +0000 UTC m=+1.257071235 container cleanup 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:12:04 compute-0 systemd[1]: libpod-conmon-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope: Deactivated successfully.
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.574 253665 DEBUG nova.policy [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.730 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.732 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.733 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating image(s)
Nov 22 09:12:04 compute-0 podman[288216]: 2025-11-22 09:12:04.760787505 +0000 UTC m=+0.363088495 container remove 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.761 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[edc7ee53-1b29-4d9a-9900-69bb9c9f7265]: (4, ('Sat Nov 22 09:12:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e)\n3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e\nSat Nov 22 09:12:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e)\n3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.774 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ebda4b-40e5-4691-97fb-7ed1a5d7f204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.775 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:04 compute-0 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.799 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a9b4a9-4086-40a4-83a2-d1e42196bbea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc682eb-3f22-4829-b3a7-9ec578cd5d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87679169-f790-47b4-ae55-0ce841ff5a6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff29825f-fe52-485d-8bb9-2502c4c36853]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555301, 'reachable_time': 18341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288275, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.841 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:12:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.841 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d97cbed3-e5d9-4ee8-9ec6-87d21a4418e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.845 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.850 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.938 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.939 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.940 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.941 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:04 compute-0 ceph-mon[75021]: pgmap v1384: 305 pgs: 305 active+clean; 437 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 43 op/s
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.968 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:04 compute-0 nova_compute[253661]: 2025-11-22 09:12:04.973 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 405 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 KiB/s wr, 45 op/s
Nov 22 09:12:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 22 09:12:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 22 09:12:05 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 22 09:12:05 compute-0 nova_compute[253661]: 2025-11-22 09:12:05.792 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:05 compute-0 nova_compute[253661]: 2025-11-22 09:12:05.851 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:12:06 compute-0 ceph-mon[75021]: pgmap v1385: 305 pgs: 305 active+clean; 405 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 KiB/s wr, 45 op/s
Nov 22 09:12:06 compute-0 ceph-mon[75021]: osdmap e189: 3 total, 3 up, 3 in
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.390 253665 DEBUG nova.objects.instance [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.404 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.405 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Ensure instance console log exists: /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.405 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.406 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.406 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.833 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.834 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:06 compute-0 nova_compute[253661]: 2025-11-22 09:12:06.939 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:12:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 405 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.235 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.236 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.244 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.245 253665 INFO nova.compute.claims [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.311 253665 INFO nova.virt.libvirt.driver [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deleting instance files /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_del
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.312 253665 INFO nova.virt.libvirt.driver [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deletion of /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_del complete
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.540 253665 INFO nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 5.05 seconds to destroy the instance on the hypervisor.
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.541 253665 DEBUG oslo.service.loopingcall [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.541 253665 DEBUG nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.542 253665 DEBUG nova.network.neutron [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:12:07 compute-0 nova_compute[253661]: 2025-11-22 09:12:07.586 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840169657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.082 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.090 253665 DEBUG nova.compute.provider_tree [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.111 253665 DEBUG nova.scheduler.client.report [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.146 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.147 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.175 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: b82d7759-7fa9-4919-9812-a4f5df6893a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.206 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.207 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.226 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.246 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.343 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.345 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.345 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating image(s)
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.373 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.406 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.433 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.436 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.502 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.503 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.503 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.504 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.574 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:08 compute-0 ceph-mon[75021]: pgmap v1387: 305 pgs: 305 active+clean; 405 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 22 09:12:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1840169657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.579 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.617 253665 DEBUG nova.policy [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:12:08 compute-0 nova_compute[253661]: 2025-11-22 09:12:08.956 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:12:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.597 253665 DEBUG nova.network.neutron [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.618 253665 INFO nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 2.08 seconds to deallocate network for instance.
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.682 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.683 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.719 253665 DEBUG nova.compute.manager [req-5d1da9ec-c88f-4fbf-833c-ac524b31ab66 req-7ebf97cf-ef65-49f7-9daf-87bbb453d940 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-deleted-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.757211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729757292, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2131, "num_deletes": 254, "total_data_size": 3281700, "memory_usage": 3331200, "flush_reason": "Manual Compaction"}
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 22 09:12:09 compute-0 nova_compute[253661]: 2025-11-22 09:12:09.847 253665 DEBUG oslo_concurrency.processutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729882047, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3224128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25969, "largest_seqno": 28099, "table_properties": {"data_size": 3214602, "index_size": 5956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20484, "raw_average_key_size": 20, "raw_value_size": 3195186, "raw_average_value_size": 3211, "num_data_blocks": 262, "num_entries": 995, "num_filter_entries": 995, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802525, "oldest_key_time": 1763802525, "file_creation_time": 1763802729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 124880 microseconds, and 17106 cpu microseconds.
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.882101) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3224128 bytes OK
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.882132) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920630) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920696) EVENT_LOG_v1 {"time_micros": 1763802729920684, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3272661, prev total WAL file size 3272661, number of live WAL files 2.
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.922161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3148KB)], [59(7393KB)]
Nov 22 09:12:09 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729922201, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10794722, "oldest_snapshot_seqno": -1}
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5316 keys, 9041565 bytes, temperature: kUnknown
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730048276, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9041565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9004059, "index_size": 23124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132056, "raw_average_key_size": 24, "raw_value_size": 8906292, "raw_average_value_size": 1675, "num_data_blocks": 949, "num_entries": 5316, "num_filter_entries": 5316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.096 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: b82d7759-7fa9-4919-9812-a4f5df6893a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.162 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.163 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.163 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.208 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Successfully created port: 5898357d-7112-429d-86c6-24932a2fc274 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:12:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.048622) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9041565 bytes
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.235551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.5 rd, 71.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5839, records dropped: 523 output_compression: NoCompression
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.235609) EVENT_LOG_v1 {"time_micros": 1763802730235582, "job": 32, "event": "compaction_finished", "compaction_time_micros": 126200, "compaction_time_cpu_micros": 20528, "output_level": 6, "num_output_files": 1, "total_output_size": 9041565, "num_input_records": 5839, "num_output_records": 5316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730236360, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730237564, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.921845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:12:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592794506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.312 253665 DEBUG oslo_concurrency.processutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.320 253665 DEBUG nova.compute.provider_tree [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.342 253665 DEBUG nova.scheduler.client.report [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.376 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.404 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.440 253665 INFO nova.scheduler.client.report [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance b7c923dd-3ae9-4c51-8d6d-6305a71fe97f
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.485 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.542 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.576 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:12:10 compute-0 ceph-mon[75021]: pgmap v1388: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 09:12:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3592794506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.950 253665 DEBUG nova.objects.instance [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.963 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.963 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Ensure instance console log exists: /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:10 compute-0 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.521 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.570 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.571 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance network_info: |[{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.574 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start _get_guest_xml network_info=[{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.580 253665 WARNING nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.586 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.587 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.591 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.593 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.593 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.596 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.599 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.914 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Successfully updated port: 5898357d-7112-429d-86c6-24932a2fc274 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:11 compute-0 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG nova.compute.manager [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG nova.compute.manager [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.008 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.008 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590567606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.064 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.090 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.096 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.134 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612283366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.597 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.599 253665 DEBUG nova.virt.libvirt.vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.599 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.600 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.601 253665 DEBUG nova.objects.instance [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.633 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.634 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.767 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:12:11</nova:creationTime>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:12:12 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <system>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="serial">3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="uuid">3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </system>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <os>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </os>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <features>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </features>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk">
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config">
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:12 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:78:3a:a5"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <target dev="tapb82d7759-7f"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log" append="off"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <video>
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </video>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:12:12 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:12:12 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:12:12 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:12:12 compute-0 nova_compute[253661]: </domain>
Nov 22 09:12:12 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.769 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Preparing to wait for external event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.771 253665 DEBUG nova.virt.libvirt.vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.772 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.781 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.783 253665 DEBUG os_vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.786 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.787 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.790 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.797 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb82d7759-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.798 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb82d7759-7f, col_values=(('external_ids', {'iface-id': 'b82d7759-7fa9-4919-9812-a4f5df6893a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:3a:a5', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:12 compute-0 NetworkManager[48920]: <info>  [1763802732.8018] manager: (tapb82d7759-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.808 253665 INFO os_vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f')
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.865 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.865 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.872 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.872 253665 INFO nova.compute.claims [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:12 compute-0 nova_compute[253661]: 2025-11-22 09:12:12.919 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Using config drive
Nov 22 09:12:12 compute-0 ceph-mon[75021]: pgmap v1389: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 22 09:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/590567606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3612283366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.028 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 426 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.210 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.239 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.255 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.256 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance network_info: |[{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.259 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start _get_guest_xml network_info=[{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.263 253665 WARNING nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.269 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.270 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.272 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.279 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.330 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating config drive at /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.336 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2gr2151 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.442 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.443 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.457 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.475 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2gr2151" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.499 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.503 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252070460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.679 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.688 253665 DEBUG nova.compute.provider_tree [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.701 253665 DEBUG nova.scheduler.client.report [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.727 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.728 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:12:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126836262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.754 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.775 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.779 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.815 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.816 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.965 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.970 253665 DEBUG nova.policy [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cac95dc532449d964ffb3705dae943', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:12:13 compute-0 nova_compute[253661]: 2025-11-22 09:12:13.986 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.069 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.070 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.071 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating image(s)
Nov 22 09:12:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3252070460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/126836262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.095 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.221 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010960236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.243 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.247 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.277 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.280 253665 DEBUG nova.virt.libvirt.vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.281 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.282 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.284 253665 DEBUG nova.objects.instance [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.303 253665 DEBUG nova.compute.manager [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-changed-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.304 253665 DEBUG nova.compute.manager [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Refreshing instance network info cache due to event network-changed-5898357d-7112-429d-86c6-24932a2fc274. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Refreshing network info cache for port 5898357d-7112-429d-86c6-24932a2fc274 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.308 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <uuid>6e825024-ffe6-4fdb-abaa-0c99c65ac38b</uuid>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <name>instance-0000001b</name>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-137830058</nova:name>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:12:13</nova:creationTime>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <nova:port uuid="5898357d-7112-429d-86c6-24932a2fc274">
Nov 22 09:12:14 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <system>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="serial">6e825024-ffe6-4fdb-abaa-0c99c65ac38b</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="uuid">6e825024-ffe6-4fdb-abaa-0c99c65ac38b</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </system>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <os>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </os>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <features>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </features>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk">
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config">
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b0:97:c6"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <target dev="tap5898357d-71"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/console.log" append="off"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <video>
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </video>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:12:14 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:12:14 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:12:14 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:12:14 compute-0 nova_compute[253661]: </domain>
Nov 22 09:12:14 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Preparing to wait for external event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG nova.virt.libvirt.vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.311 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.311 253665 DEBUG os_vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.313 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.317 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5898357d-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.317 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5898357d-71, col_values=(('external_ids', {'iface-id': '5898357d-7112-429d-86c6-24932a2fc274', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:97:c6', 'vm-uuid': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.318 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.318 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.319 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.319 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.339 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.343 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5babe591-239b-4ef7-b193-6960c7313292_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:14 compute-0 NetworkManager[48920]: <info>  [1763802734.3675] manager: (tap5898357d-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.388 253665 INFO os_vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71')
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.494 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Successfully created port: d3202009-ab9d-4ee2-a94d-0d05cc739658 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.542 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.543 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.543 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:b0:97:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.544 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Using config drive
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.564 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.910 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating config drive at /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config
Nov 22 09:12:14 compute-0 nova_compute[253661]: 2025-11-22 09:12:14.916 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt4pfqha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.052 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance failed to shutdown in 60 seconds.
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.055 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt4pfqha" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.085 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.091 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.124 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.125 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Deleting local config drive /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config because it was imported into RBD.
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.1790] manager: (tapb82d7759-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 22 09:12:15 compute-0 kernel: tapb82d7759-7f: entered promiscuous mode
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00146|binding|INFO|Claiming lport b82d7759-7fa9-4919-9812-a4f5df6893a7 for this chassis.
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00147|binding|INFO|b82d7759-7fa9-4919-9812-a4f5df6893a7: Claiming fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00148|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.196 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:3a:a5 10.100.0.12'], port_security=['fa:16:3e:78:3a:a5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0e8c403-f9ed-4054-8f14-f56c4d8c06c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b82d7759-7fa9-4919-9812-a4f5df6893a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.198 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b82d7759-7fa9-4919-9812-a4f5df6893a7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.201 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:12:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 451 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.3 MiB/s wr, 95 op/s
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 systemd-udevd[288982]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91188c27-8847-47a6-8dc9-774a4310ad3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.220 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e2cd359-c1 in ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:12:15 compute-0 ceph-mon[75021]: pgmap v1390: 305 pgs: 305 active+clean; 426 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Nov 22 09:12:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4010960236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.222 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e2cd359-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4becf683-b57b-4b4e-ac3f-968edc4169bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b94e3059-71c6-4b4e-87b4-257b7d2b3188]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.2295] device (tapb82d7759-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.2303] device (tapb82d7759-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:12:15 compute-0 systemd-machined[215941]: New machine qemu-30-instance-0000001a.
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.235 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4df535fe-ea4a-40d5-83f5-3d1aee8a92fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a98af082-e0d6-4049-9fa1-7d7b0f02bc3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.303 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e27227c-cb8d-4e3a-86d4-d5b337c048e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4170d9-eed4-4905-956a-e6bfbf0f3776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.3253] manager: (tap5e2cd359-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 22 09:12:15 compute-0 systemd-udevd[288988]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.357 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[05e196e7-2ab8-4eb9-a10b-74d8f5b2121b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.360 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[284cbb8e-94f2-4eac-a856-5e7c6cf927c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.3917] device (tap5e2cd359-c0): carrier: link connected
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.401 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c97272-b92a-4639-871c-06b69ff01e14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.423 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58ee8860-ccb5-4e95-8bb1-b32906f661ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289017, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00149|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.436 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Successfully updated port: d3202009-ab9d-4ee2-a94d-0d05cc739658 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00150|binding|INFO|Setting lport b82d7759-7fa9-4919-9812-a4f5df6893a7 ovn-installed in OVS
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00151|binding|INFO|Setting lport b82d7759-7fa9-4919-9812-a4f5df6893a7 up in Southbound
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.452 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.452 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquired lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.453 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.456 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d55c73-eb3b-44cf-bbb9-6dcf49a6d996]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:bd41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558612, 'tstamp': 558612}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289020, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.482 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[86443e01-ef07-479b-8658-ecb61dc00117]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289021, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.512 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updated VIF entry in instance network info cache for port 5898357d-7112-429d-86c6-24932a2fc274. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.513 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eed050b2-d15b-4e9f-8e99-2a683b4c22d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.527 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.575 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6339f1-035b-4f1a-88e0-6b5a7bdd3499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.6116] manager: (tap5e2cd359-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 22 09:12:15 compute-0 kernel: tap5e2cd359-c0: entered promiscuous mode
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.653 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00152|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.674 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.676 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0771aa9-b4cd-430c-83d9-1ee0396a610a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.677 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.677 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'env', 'PROCESS_TAG=haproxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e2cd359-c68f-4256-90e8-0ad40aff8a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.730 253665 DEBUG nova.compute.manager [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.730 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG nova.compute.manager [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Processing event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:12:15 compute-0 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.8667] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00153|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00154|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 09:12:15 compute-0 ovn_controller[152872]: 2025-11-22T09:12:15Z|00155|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.904 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:15 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 09:12:15 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000012.scope: Consumed 16.606s CPU time.
Nov 22 09:12:15 compute-0 systemd-machined[215941]: Machine qemu-28-instance-00000012 terminated.
Nov 22 09:12:15 compute-0 systemd-udevd[289010]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:15 compute-0 NetworkManager[48920]: <info>  [1763802735.9462] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.969 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.982 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:12Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.982 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.983 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.983 253665 DEBUG os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.985 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:15 compute-0 podman[289061]: 2025-11-22 09:12:15.989587953 +0000 UTC m=+0.083552856 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:15 compute-0 nova_compute[253661]: 2025-11-22 09:12:15.997 253665 INFO os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:12:16 compute-0 podman[289056]: 2025-11-22 09:12:16.016468171 +0000 UTC m=+0.104176943 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:12:16 compute-0 podman[289139]: 2025-11-22 09:12:16.053887732 +0000 UTC m=+0.024538522 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-changed-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Refreshing instance network info cache due to event network-changed-d3202009-ab9d-4ee2-a94d-0d05cc739658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.485 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.503 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Releasing lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.503 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance network_info: |[{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.504 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.504 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Refreshing network info cache for port d3202009-ab9d-4ee2-a94d-0d05cc739658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:12:16 compute-0 ceph-mon[75021]: pgmap v1391: 305 pgs: 305 active+clean; 451 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.3 MiB/s wr, 95 op/s
Nov 22 09:12:16 compute-0 podman[289139]: 2025-11-22 09:12:16.633704911 +0000 UTC m=+0.604355691 container create e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.661 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.663 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.660913, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.664 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Started (Lifecycle Event)
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.667 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.671 253665 INFO nova.virt.libvirt.driver [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance spawned successfully.
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.671 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.686 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.692 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.695 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.695 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.696 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.696 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.697 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.697 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.719 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.720 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.6612504, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.720 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Paused (Lifecycle Event)
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.746 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.750 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.6669443, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.750 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Resumed (Lifecycle Event)
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.757 253665 INFO nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 12.03 seconds to spawn the instance on the hypervisor.
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.758 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.769 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.772 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.842 253665 INFO nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 14.38 seconds to build instance.
Nov 22 09:12:16 compute-0 systemd[1]: Started libpod-conmon-e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d.scope.
Nov 22 09:12:16 compute-0 nova_compute[253661]: 2025-11-22 09:12:16.856 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cb179eb9809a39cf76cdbb9739dddbf19fc65549fbc0ae24ede098ba40d347/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:17 compute-0 podman[289139]: 2025-11-22 09:12:17.03481801 +0000 UTC m=+1.005468780 container init e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:12:17 compute-0 podman[289139]: 2025-11-22 09:12:17.046352218 +0000 UTC m=+1.017002988 container start e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:12:17 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : New worker (289189) forked
Nov 22 09:12:17 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : Loading success.
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.080 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5babe591-239b-4ef7-b193-6960c7313292_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.164 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] resizing rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.193 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.195 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:12:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 468 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17584355-d1e1-4478-8158-6ce3d97f49db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.246 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.247 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deleting local config drive /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config because it was imported into RBD.
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.264 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5df9bb-7306-4880-a516-d6bda639cafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.270 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee535955-0985-4364-b8bb-924eb09317eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.313 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0558d8-6fe3-4a97-a15e-971b99991279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 kernel: tap5898357d-71: entered promiscuous mode
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.3190] manager: (tap5898357d-71): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 22 09:12:17 compute-0 ovn_controller[152872]: 2025-11-22T09:12:17Z|00156|binding|INFO|Claiming lport 5898357d-7112-429d-86c6-24932a2fc274 for this chassis.
Nov 22 09:12:17 compute-0 ovn_controller[152872]: 2025-11-22T09:12:17Z|00157|binding|INFO|5898357d-7112-429d-86c6-24932a2fc274: Claiming fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:97:c6 10.100.0.3'], port_security=['fa:16:3e:b0:97:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5898357d-7112-429d-86c6-24932a2fc274) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.3390] device (tap5898357d-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.3398] device (tap5898357d-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f94cd90f-2e93-433d-909d-e1baa4e7bce7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289270, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 systemd-machined[215941]: New machine qemu-31-instance-0000001b.
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.367 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1392c125-7db7-4a34-bf76-79e73eb96e68]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.369 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.404 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5898357d-7112-429d-86c6-24932a2fc274 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 ovn_controller[152872]: 2025-11-22T09:12:17Z|00158|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 ovn-installed in OVS
Nov 22 09:12:17 compute-0 ovn_controller[152872]: 2025-11-22T09:12:17Z|00159|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 up in Southbound
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.410 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[79eb4f24-3d6e-443c-924f-b3cb389b3cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.429 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.431 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.432 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a21552-191d-4652-abde-e2f3b4f8fe0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1db24e-7762-4aaa-9dca-11e994c282e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.450 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e7233903-4f06-43a3-8e89-9c6c1e1eadfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.464 253665 DEBUG nova.objects.instance [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.476 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.477 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Ensure instance console log exists: /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.477 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.478 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.479 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.481 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start _get_guest_xml network_info=[{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.484 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[058c277b-b817-4001-a11b-df0b7561a0ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.491 253665 WARNING nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.496 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.496 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.500 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.504 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.507 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[969074dd-f1ce-4162-8355-cb6755432edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.5404] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.539 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[109fd7da-9bda-4b3d-8857-2b00959912a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.589 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[240ccec1-8254-4244-9287-07e2af6618eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.595 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63533498-54a7-46d4-a954-b2cd56319602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.6333] device (tap2abeeeb2-20): carrier: link connected
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97645460-8a48-47c9-a611-2a05fc84b71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.660 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updated VIF entry in instance network info cache for port d3202009-ab9d-4ee2-a94d-0d05cc739658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.662 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f12baaf6-1ea1-4266-b127-cba00f1de0bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558837, 'reachable_time': 36353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289310, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.678 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.679 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.679 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 WARNING nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state rebuilding.
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 WARNING nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state rebuilding.
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.688 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[71a23941-65cc-4ec0-a799-273b21a1ad81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558837, 'tstamp': 558837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b54f668e-f26b-461b-878f-b2286b4cdd48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558837, 'reachable_time': 36353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289330, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.761 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5ac4a4-d3e4-4068-8c8d-86f1fc19a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.874 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[271c8ffd-b48b-4651-a5f7-2b458415e808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.880 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 NetworkManager[48920]: <info>  [1763802737.8865] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 22 09:12:17 compute-0 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:17 compute-0 ovn_controller[152872]: 2025-11-22T09:12:17Z|00160|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.897 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.898 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d0a8fc-8182-4aac-88be-387c7a130e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.899 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:12:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.901 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:12:17 compute-0 nova_compute[253661]: 2025-11-22 09:12:17.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.032 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.032 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.033 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.033 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.034 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 WARNING nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 for instance with vm_state active and task_state None.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Processing event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:12:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577429672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.098 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.128 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.133 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.168 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.117068, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.169 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Started (Lifecycle Event)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.173 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802723.1447194, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.174 253665 INFO nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Stopped (Lifecycle Event)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.175 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.187 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.188 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.194 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.197 253665 DEBUG nova.compute.manager [None req-90a250fb-f5ab-4bff-970f-b9e6d52afe23 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.197 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.204 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance spawned successfully.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.205 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.208 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.223 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.117386, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.224 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Paused (Lifecycle Event)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.226 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.228 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.228 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.259 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.316 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.179927, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Resumed (Lifecycle Event)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.338 253665 INFO nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 9.99 seconds to spawn the instance on the hypervisor.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.339 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.347 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.354 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.382 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:18 compute-0 podman[289447]: 2025-11-22 09:12:18.412124783 +0000 UTC m=+0.075689705 container create c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.439 253665 INFO nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 11.23 seconds to build instance.
Nov 22 09:12:18 compute-0 podman[289447]: 2025-11-22 09:12:18.36842781 +0000 UTC m=+0.031992762 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.469 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.474 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.474 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)
Nov 22 09:12:18 compute-0 systemd[1]: Started libpod-conmon-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.504 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cc462d771deebe4c19614685b97fc397e44751755ad5b1c8389e8f680dea6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.542 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:18 compute-0 podman[289447]: 2025-11-22 09:12:18.551553484 +0000 UTC m=+0.215118436 container init c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:12:18 compute-0 podman[289447]: 2025-11-22 09:12:18.558536363 +0000 UTC m=+0.222101295 container start c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:12:18 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : New worker (289538) forked
Nov 22 09:12:18 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : Loading success.
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.600 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.609 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:18 compute-0 ceph-mon[75021]: pgmap v1392: 305 pgs: 305 active+clean; 468 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 22 09:12:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2577429672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:18 compute-0 NetworkManager[48920]: <info>  [1763802738.7029] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 22 09:12:18 compute-0 NetworkManager[48920]: <info>  [1763802738.7036] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122684865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.723 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.725 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.725 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.726 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.757 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.762 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.801 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.804 253665 DEBUG nova.virt.libvirt.vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:14Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.805 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.806 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.807 253665 DEBUG nova.objects.instance [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263467567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.827 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <uuid>5babe591-239b-4ef7-b193-6960c7313292</uuid>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <name>instance-0000001c</name>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-557765812</nova:name>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:12:17</nova:creationTime>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:user uuid="96cac95dc532449d964ffb3705dae943">tempest-ImagesOneServerNegativeTestJSON-251054159-project-member</nova:user>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:project uuid="dcedb2f9ed6e43dfa8ecc3854373b0b5">tempest-ImagesOneServerNegativeTestJSON-251054159</nova:project>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <nova:port uuid="d3202009-ab9d-4ee2-a94d-0d05cc739658">
Nov 22 09:12:18 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <system>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="serial">5babe591-239b-4ef7-b193-6960c7313292</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="uuid">5babe591-239b-4ef7-b193-6960c7313292</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </system>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <os>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </os>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <features>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </features>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5babe591-239b-4ef7-b193-6960c7313292_disk">
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5babe591-239b-4ef7-b193-6960c7313292_disk.config">
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b2:33:a1"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <target dev="tapd3202009-ab"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/console.log" append="off"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <video>
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </video>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:12:18 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:12:18 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:12:18 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:12:18 compute-0 nova_compute[253661]: </domain>
Nov 22 09:12:18 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.827 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Preparing to wait for external event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.829 253665 DEBUG nova.virt.libvirt.vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:14Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.830 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.830 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.831 253665 DEBUG os_vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.832 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.833 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3202009-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3202009-ab, col_values=(('external_ids', {'iface-id': 'd3202009-ab9d-4ee2-a94d-0d05cc739658', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:33:a1', 'vm-uuid': '5babe591-239b-4ef7-b193-6960c7313292'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 NetworkManager[48920]: <info>  [1763802738.8412] manager: (tapd3202009-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.853 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.858 253665 INFO os_vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab')
Nov 22 09:12:18 compute-0 ovn_controller[152872]: 2025-11-22T09:12:18Z|00161|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 09:12:18 compute-0 ovn_controller[152872]: 2025-11-22T09:12:18Z|00162|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:12:18 compute-0 ovn_controller[152872]: 2025-11-22T09:12:18Z|00163|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.926 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.928 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.928 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No VIF found with MAC fa:16:3e:b2:33:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.929 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Using config drive
Nov 22 09:12:18 compute-0 nova_compute[253661]: 2025-11-22 09:12:18.970 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.079 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.080 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.087 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.087 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.091 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.092 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.096 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.096 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.165 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.166 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.166 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.172 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.173 253665 DEBUG nova.objects.instance [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'flavor' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.200 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:12:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.206 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.281 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.321 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating config drive at /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.328 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpciljblvs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.425 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.426 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.427 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.427 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.428 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.431 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.437 253665 WARNING nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.444 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.446 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.453 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.453 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.454 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.454 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.457 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.473 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.508 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpciljblvs" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.544 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.550 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config 5babe591-239b-4ef7-b193-6960c7313292_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4122684865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4263467567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.794 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.795 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.754974365234375GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.796 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.796 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.810 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config 5babe591-239b-4ef7-b193-6960c7313292_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.810 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deleting local config drive /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config because it was imported into RBD.
Nov 22 09:12:19 compute-0 kernel: tapd3202009-ab: entered promiscuous mode
Nov 22 09:12:19 compute-0 NetworkManager[48920]: <info>  [1763802739.8650] manager: (tapd3202009-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:19 compute-0 ovn_controller[152872]: 2025-11-22T09:12:19Z|00164|binding|INFO|Claiming lport d3202009-ab9d-4ee2-a94d-0d05cc739658 for this chassis.
Nov 22 09:12:19 compute-0 ovn_controller[152872]: 2025-11-22T09:12:19Z|00165|binding|INFO|d3202009-ab9d-4ee2-a94d-0d05cc739658: Claiming fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 09:12:19 compute-0 ovn_controller[152872]: 2025-11-22T09:12:19Z|00166|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 ovn-installed in OVS
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:19 compute-0 ovn_controller[152872]: 2025-11-22T09:12:19Z|00167|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 up in Southbound
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.896 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:33:a1 10.100.0.3'], port_security=['fa:16:3e:b2:33:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5babe591-239b-4ef7-b193-6960c7313292', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d3202009-ab9d-4ee2-a94d-0d05cc739658) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.898 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d3202009-ab9d-4ee2-a94d-0d05cc739658 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff bound to our chassis
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.902 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[65705b9c-9e04-41d7-ac6c-15652b69eeda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.920 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691e79ad-d1 in ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:12:19 compute-0 systemd-udevd[289759]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.923 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691e79ad-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbfe9c8-6e1b-4b6b-b9c8-45760754ba15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.929 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42b21f75-6313-4fe0-9a11-f320b03b087e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:19 compute-0 systemd-machined[215941]: New machine qemu-32-instance-0000001c.
Nov 22 09:12:19 compute-0 NetworkManager[48920]: <info>  [1763802739.9401] device (tapd3202009-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:12:19 compute-0 NetworkManager[48920]: <info>  [1763802739.9414] device (tapd3202009-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:12:19 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.950 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.950 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a06c5a55-cb57-4d3e-8177-2b93f47b1110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance de145d76-062b-4362-bc82-09e09d2f9154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 6e825024-ffe6-4fdb-abaa-0c99c65ac38b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 5babe591-239b-4ef7-b193-6960c7313292 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:12:19 compute-0 nova_compute[253661]: 2025-11-22 09:12:19.954 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:12:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.971 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce0a70f-3060-407d-87b2-50ee66354f49]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.004 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6770ea6b-43f0-4c90-91e0-6949e5c359b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 systemd-udevd[289763]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:20 compute-0 NetworkManager[48920]: <info>  [1763802740.0132] manager: (tap691e79ad-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.011 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e36d36-a170-44b6-ad8c-d86936095c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.060 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3996c406-1365-4358-a272-8be1d56c3f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888745374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.065 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef15fba-fabf-4424-89c6-fd6617dad3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 NetworkManager[48920]: <info>  [1763802740.1179] device (tap691e79ad-d0): carrier: link connected
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.116 253665 DEBUG nova.compute.manager [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.119 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.120 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.120 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.121 253665 DEBUG nova.compute.manager [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Processing event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.121 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.126 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5406dee8-749f-4553-a6ab-9da3e1aa148f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.150 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.171 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[438bf922-80a5-4701-89c6-be498e6411c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559085, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289808, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3430e576-ff13-42f9-a12b-1bd76868d8a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f9e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559085, 'tstamp': 559085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289814, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd8c0d2-f3bb-42de-91cf-4a432dad1172]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559085, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289815, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.253 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.254 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.255 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.255 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 WARNING nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state active and task_state powering-off.
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.257 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.257 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.258 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.258 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.290 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ac00a6-9ff3-46ee-a974-87282a4a5724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.375 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e051443b-6ee7-4ac1-9757-932f76656c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.378 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691e79ad-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 NetworkManager[48920]: <info>  [1763802740.3814] manager: (tap691e79ad-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 22 09:12:20 compute-0 kernel: tap691e79ad-d0: entered promiscuous mode
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.385 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691e79ad-d0, col_values=(('external_ids', {'iface-id': '6b990e4f-df30-4562-9550-e3e0ea811f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 ovn_controller[152872]: 2025-11-22T09:12:20Z|00168|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.408 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[effe8d96-295e-4767-ad16-db07cae38f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:12:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.411 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'env', 'PROCESS_TAG=haproxy-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691e79ad-da5d-4276-aa7d-732c2aaedbff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.486 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.487 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.4856288, 5babe591-239b-4ef7-b193-6960c7313292 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.488 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Started (Lifecycle Event)
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.504 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.520 253665 INFO nova.virt.libvirt.driver [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance spawned successfully.
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.521 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.524 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.4858258, 5babe591-239b-4ef7-b193-6960c7313292 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Paused (Lifecycle Event)
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.559 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.560 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.562 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.5024066, 5babe591-239b-4ef7-b193-6960c7313292 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Resumed (Lifecycle Event)
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.593 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.597 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.614 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.655 253665 INFO nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 6.59 seconds to spawn the instance on the hypervisor.
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.655 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:20 compute-0 ceph-mon[75021]: pgmap v1393: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 22 09:12:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3888745374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:12:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1093388408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.767 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.768 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTes
tJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.769 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.769 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.772 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <name>instance-00000012</name>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:12:19</nova:creationTime>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 09:12:20 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <system>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </system>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <os>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </os>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <features>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </features>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:12:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:12:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <target dev="tap716b716d-2e"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <video>
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </video>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:12:20 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:12:20 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:12:20 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:12:20 compute-0 nova_compute[253661]: </domain>
Nov 22 09:12:20 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.774 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTes
tJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.776 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.777 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.784 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.787 253665 INFO nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 7.94 seconds to build instance.
Nov 22 09:12:20 compute-0 NetworkManager[48920]: <info>  [1763802740.7890] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 22 09:12:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468576485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.816 253665 INFO os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.844 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.860 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.877 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.894 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:20 compute-0 podman[289932]: 2025-11-22 09:12:20.925094794 +0000 UTC m=+0.084455137 container create 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.944 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.945 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.945 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:20 compute-0 nova_compute[253661]: 2025-11-22 09:12:20.946 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive
Nov 22 09:12:20 compute-0 podman[289932]: 2025-11-22 09:12:20.877850365 +0000 UTC m=+0.037210738 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:12:20 compute-0 systemd[1]: Started libpod-conmon-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope.
Nov 22 09:12:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:21 compute-0 nova_compute[253661]: 2025-11-22 09:12:21.008 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:21 compute-0 podman[289941]: 2025-11-22 09:12:21.010497573 +0000 UTC m=+0.127335791 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff7de6cecd8d8be804ea66b96176eefccb4d3774ace81e67748b939ad84ffe4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:21 compute-0 nova_compute[253661]: 2025-11-22 09:12:21.030 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:12:21 compute-0 nova_compute[253661]: 2025-11-22 09:12:21.030 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:21 compute-0 podman[289932]: 2025-11-22 09:12:21.033462676 +0000 UTC m=+0.192823049 container init 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:12:21 compute-0 podman[289932]: 2025-11-22 09:12:21.040152608 +0000 UTC m=+0.199512951 container start 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:12:21 compute-0 nova_compute[253661]: 2025-11-22 09:12:21.042 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:21 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : New worker (289995) forked
Nov 22 09:12:21 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : Loading success.
Nov 22 09:12:21 compute-0 nova_compute[253661]: 2025-11-22 09:12:21.081 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 09:12:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1093388408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:12:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3468576485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.028 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.028 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.244 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.244 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.245 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.351 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.358 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz3372qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.423 253665 DEBUG nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.423 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 WARNING nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received unexpected event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with vm_state active and task_state None.
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.506 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz3372qv" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.533 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.538 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.599 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.600 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.627 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:22 compute-0 ceph-mon[75021]: pgmap v1394: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.844 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.845 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.
Nov 22 09:12:22 compute-0 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 09:12:22 compute-0 NetworkManager[48920]: <info>  [1763802742.9169] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:22 compute-0 ovn_controller[152872]: 2025-11-22T09:12:22Z|00169|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 09:12:22 compute-0 ovn_controller[152872]: 2025-11-22T09:12:22Z|00170|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.946 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.949 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis
Nov 22 09:12:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.952 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:12:22 compute-0 ovn_controller[152872]: 2025-11-22T09:12:22Z|00171|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 09:12:22 compute-0 ovn_controller[152872]: 2025-11-22T09:12:22Z|00172|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:22 compute-0 nova_compute[253661]: 2025-11-22 09:12:22.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[caff061d-6bab-4a4c-983b-60ae8ed1ea56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:22 compute-0 systemd-udevd[290057]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:22 compute-0 systemd-machined[215941]: New machine qemu-33-instance-00000012.
Nov 22 09:12:23 compute-0 NetworkManager[48920]: <info>  [1763802743.0038] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:12:23 compute-0 NetworkManager[48920]: <info>  [1763802743.0048] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:12:23 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-00000012.
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.039 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb65dca-a347-473e-b0d2-d211a54ac91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.043 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcb1934-1255-458d-8b0e-fb3fcee9ac33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[87982d20-92fa-483f-bae1-64e42128035c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3529df-0f9c-409c-a1f6-7d493585cde3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290070, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc66e031-3371-4deb-9d7e-3224e0e775a5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.149 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.150 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.150 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 426 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.731 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3ae08a2f-348c-406b-8ffc-9acb8a542e1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.732 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802743.7310624, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.750 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.758 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802743.7312176, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.759 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.798 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.805 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:23 compute-0 nova_compute[253661]: 2025-11-22 09:12:23.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:12:24 compute-0 nova_compute[253661]: 2025-11-22 09:12:24.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:24 compute-0 ceph-mon[75021]: pgmap v1395: 305 pgs: 305 active+clean; 426 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Nov 22 09:12:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 312 op/s
Nov 22 09:12:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.396 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.432 253665 DEBUG nova.compute.manager [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG nova.compute.manager [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.434 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.439 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802745.4388597, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.439 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.441 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.445 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.445 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.464 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.478 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.479 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.479 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.506 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.507 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.507 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.509 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.568 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.697 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:25 compute-0 nova_compute[253661]: 2025-11-22 09:12:25.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:26 compute-0 ceph-mon[75021]: pgmap v1396: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 312 op/s
Nov 22 09:12:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Nov 22 09:12:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.958 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.330 253665 DEBUG nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.331 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:28 compute-0 nova_compute[253661]: 2025-11-22 09:12:28.333 253665 WARNING nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.
Nov 22 09:12:28 compute-0 ceph-mon[75021]: pgmap v1397: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Nov 22 09:12:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.8 MiB/s wr, 364 op/s
Nov 22 09:12:29 compute-0 nova_compute[253661]: 2025-11-22 09:12:29.316 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:12:29 compute-0 nova_compute[253661]: 2025-11-22 09:12:29.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:30 compute-0 nova_compute[253661]: 2025-11-22 09:12:30.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:31 compute-0 ceph-mon[75021]: pgmap v1398: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.8 MiB/s wr, 364 op/s
Nov 22 09:12:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 1.8 MiB/s wr, 294 op/s
Nov 22 09:12:31 compute-0 ovn_controller[152872]: 2025-11-22T09:12:31Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 09:12:31 compute-0 ovn_controller[152872]: 2025-11-22T09:12:31Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.903 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.903 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.906 253665 INFO nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Terminating instance
Nov 22 09:12:31 compute-0 nova_compute[253661]: 2025-11-22 09:12:31.909 253665 DEBUG nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:12:32 compute-0 nova_compute[253661]: 2025-11-22 09:12:32.426 253665 DEBUG nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:32 compute-0 nova_compute[253661]: 2025-11-22 09:12:32.462 253665 INFO nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] instance snapshotting
Nov 22 09:12:32 compute-0 nova_compute[253661]: 2025-11-22 09:12:32.647 253665 INFO nova.virt.libvirt.driver [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Beginning live snapshot process
Nov 22 09:12:32 compute-0 nova_compute[253661]: 2025-11-22 09:12:32.792 253665 DEBUG nova.virt.libvirt.imagebackend [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:12:32 compute-0 ceph-mon[75021]: pgmap v1399: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 1.8 MiB/s wr, 294 op/s
Nov 22 09:12:32 compute-0 nova_compute[253661]: 2025-11-22 09:12:32.978 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(12639cd7c1ad4538a8186cb2d407fe9f) on rbd image(5babe591-239b-4ef7-b193-6960c7313292_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:12:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 476 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 307 op/s
Nov 22 09:12:33 compute-0 kernel: tapc048a826-73 (unregistering): left promiscuous mode
Nov 22 09:12:33 compute-0 NetworkManager[48920]: <info>  [1763802753.4167] device (tapc048a826-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 ovn_controller[152872]: 2025-11-22T09:12:33Z|00173|binding|INFO|Releasing lport c048a826-73ad-49d3-a29f-5d790d359e51 from this chassis (sb_readonly=0)
Nov 22 09:12:33 compute-0 ovn_controller[152872]: 2025-11-22T09:12:33Z|00174|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 down in Southbound
Nov 22 09:12:33 compute-0 ovn_controller[152872]: 2025-11-22T09:12:33Z|00175|binding|INFO|Removing iface tapc048a826-73 ovn-installed in OVS
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.452 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b7:42 10.100.0.7'], port_security=['fa:16:3e:8c:b7:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de145d76-062b-4362-bc82-09e09d2f9154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c048a826-73ad-49d3-a29f-5d790d359e51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.454 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c048a826-73ad-49d3-a29f-5d790d359e51 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.459 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:12:33 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 22 09:12:33 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Consumed 17.890s CPU time.
Nov 22 09:12:33 compute-0 systemd-machined[215941]: Machine qemu-26-instance-00000017 terminated.
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6acbe84a-897e-4355-8070-8bce3e925b77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1dee8ee1-6059-4d4c-8a69-cd7d36430cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.528 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f77a57c-c213-47e3-9133-df2e9a67258b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.564 253665 INFO nova.virt.libvirt.driver [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance destroyed successfully.
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.564 253665 DEBUG nova.objects.instance [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.567 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0728b835-7120-45ef-bf59-d8e5cd9a43b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.581 253665 DEBUG nova.virt.libvirt.vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:50Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.581 253665 DEBUG nova.network.os_vif_util [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.582 253665 DEBUG nova.network.os_vif_util [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.583 253665 DEBUG os_vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.585 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.586 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc048a826-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.589 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.592 253665 INFO os_vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73')
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.604 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93f475ed-6a0d-453c-8b97-3421c5e2bfbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290185, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.628 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bd3f9e-ca59-451b-8b18-ac0e4494149f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290201, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290201, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.635 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.639 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.640 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.640 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:33 compute-0 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:12:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 22 09:12:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 22 09:12:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 22 09:12:34 compute-0 nova_compute[253661]: 2025-11-22 09:12:34.264 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] cloning vms/5babe591-239b-4ef7-b193-6960c7313292_disk@12639cd7c1ad4538a8186cb2d407fe9f to images/ffc4be20-c068-44ca-a572-d433657a200f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:12:34 compute-0 nova_compute[253661]: 2025-11-22 09:12:34.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:34 compute-0 ceph-mon[75021]: pgmap v1400: 305 pgs: 305 active+clean; 476 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 307 op/s
Nov 22 09:12:34 compute-0 ceph-mon[75021]: osdmap e190: 3 total, 3 up, 3 in
Nov 22 09:12:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 498 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 187 op/s
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.348 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] flattening images/ffc4be20-c068-44ca-a572-d433657a200f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:12:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 DEBUG nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:35 compute-0 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 WARNING nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received unexpected event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with vm_state active and task_state deleting.
Nov 22 09:12:36 compute-0 ovn_controller[152872]: 2025-11-22T09:12:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 09:12:36 compute-0 ovn_controller[152872]: 2025-11-22T09:12:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 09:12:37 compute-0 ceph-mon[75021]: pgmap v1402: 305 pgs: 305 active+clean; 498 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 187 op/s
Nov 22 09:12:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 532 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.3 MiB/s wr, 230 op/s
Nov 22 09:12:37 compute-0 nova_compute[253661]: 2025-11-22 09:12:37.235 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(12639cd7c1ad4538a8186cb2d407fe9f) on rbd image(5babe591-239b-4ef7-b193-6960c7313292_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:12:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 22 09:12:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 22 09:12:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 22 09:12:38 compute-0 ceph-mon[75021]: pgmap v1403: 305 pgs: 305 active+clean; 532 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.3 MiB/s wr, 230 op/s
Nov 22 09:12:38 compute-0 ovn_controller[152872]: 2025-11-22T09:12:38Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 09:12:38 compute-0 ovn_controller[152872]: 2025-11-22T09:12:38Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.479 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(snap) on rbd image(ffc4be20-c068-44ca-a572-d433657a200f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.909 253665 INFO nova.virt.libvirt.driver [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deleting instance files /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154_del
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.910 253665 INFO nova.virt.libvirt.driver [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deletion of /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154_del complete
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.993 253665 INFO nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 7.08 seconds to destroy the instance on the hypervisor.
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.995 253665 DEBUG oslo.service.loopingcall [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.995 253665 DEBUG nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:12:38 compute-0 nova_compute[253661]: 2025-11-22 09:12:38.996 253665 DEBUG nova.network.neutron [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:12:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 366 op/s
Nov 22 09:12:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 22 09:12:39 compute-0 ceph-mon[75021]: osdmap e191: 3 total, 3 up, 3 in
Nov 22 09:12:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 22 09:12:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 22 09:12:39 compute-0 nova_compute[253661]: 2025-11-22 09:12:39.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID ffc4be20-c068-44ca-a572-d433657a200f
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:12:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:40 compute-0 ceph-mon[75021]: pgmap v1405: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 366 op/s
Nov 22 09:12:40 compute-0 ceph-mon[75021]: osdmap e192: 3 total, 3 up, 3 in
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.546 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:12:40 compute-0 nova_compute[253661]: 2025-11-22 09:12:40.682 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(snap) on rbd image(ffc4be20-c068-44ca-a572-d433657a200f) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:12:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.065 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.066 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.067 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.101 253665 DEBUG nova.network.neutron [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.118 253665 INFO nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 2.12 seconds to deallocate network for instance.
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.164 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.165 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.174 253665 DEBUG nova.compute.manager [req-2da275cd-d067-4538-b23f-0efb7801a67f req-eeb760cd-86f3-4044-bccc-730c586055cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-deleted-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.4 MiB/s wr, 293 op/s
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.310 253665 DEBUG oslo_concurrency.processutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 22 09:12:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 22 09:12:41 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 22 09:12:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797358642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.788 253665 DEBUG oslo_concurrency.processutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.794 253665 DEBUG nova.compute.provider_tree [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.811 253665 DEBUG nova.scheduler.client.report [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.836 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.861 253665 INFO nova.scheduler.client.report [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance de145d76-062b-4362-bc82-09e09d2f9154
Nov 22 09:12:41 compute-0 nova_compute[253661]: 2025-11-22 09:12:41.959 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.553 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.554 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.554 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.555 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.555 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.556 253665 INFO nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Terminating instance
Nov 22 09:12:42 compute-0 nova_compute[253661]: 2025-11-22 09:12:42.557 253665 DEBUG nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:12:42 compute-0 ceph-mon[75021]: pgmap v1407: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.4 MiB/s wr, 293 op/s
Nov 22 09:12:42 compute-0 ceph-mon[75021]: osdmap e193: 3 total, 3 up, 3 in
Nov 22 09:12:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2797358642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 538 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Nov 22 09:12:43 compute-0 kernel: tap411035c7-ec (unregistering): left promiscuous mode
Nov 22 09:12:43 compute-0 NetworkManager[48920]: <info>  [1763802763.4992] device (tap411035c7-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:43 compute-0 ovn_controller[152872]: 2025-11-22T09:12:43Z|00176|binding|INFO|Releasing lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa from this chassis (sb_readonly=0)
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 ovn_controller[152872]: 2025-11-22T09:12:43Z|00177|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa down in Southbound
Nov 22 09:12:43 compute-0 ovn_controller[152872]: 2025-11-22T09:12:43Z|00178|binding|INFO|Removing iface tap411035c7-ec ovn-installed in OVS
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.521 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:6f:23 10.100.0.10'], port_security=['fa:16:3e:cb:6f:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96000606-0bc4-4cf1-9e33-360a640c2cb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.523 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.525 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.546 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[123dc51e-53dc-4e8c-ae0f-3399740f1e6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 22 09:12:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Consumed 22.579s CPU time.
Nov 22 09:12:43 compute-0 systemd-machined[215941]: Machine qemu-25-instance-00000016 terminated.
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.587 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[59fc7443-58ba-4979-8cf8-abb87054c8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[08198933-436e-4bdb-9e25-91a00aef8f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.621 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[67ff4496-f59f-44d7-b4d2-fe43f6128607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[61017171-e9e5-4df7-8037-5836ac251794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290366, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.667 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e0c66-42ef-4705-a164-d624f74c035d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290367, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290367, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.669 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.677 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.798 253665 INFO nova.virt.libvirt.driver [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance destroyed successfully.
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.799 253665 DEBUG nova.objects.instance [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.817 253665 DEBUG nova.virt.libvirt.vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:28Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.817 253665 DEBUG nova.network.os_vif_util [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.818 253665 DEBUG nova.network.os_vif_util [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.818 253665 DEBUG os_vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.821 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap411035c7-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:43 compute-0 nova_compute[253661]: 2025-11-22 09:12:43.828 253665 INFO os_vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec')
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.270 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.271 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.274 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.275 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.275 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.276 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:12:44 compute-0 nova_compute[253661]: 2025-11-22 09:12:44.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:44 compute-0 ceph-mon[75021]: pgmap v1409: 305 pgs: 305 active+clean; 538 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Nov 22 09:12:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 547 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Nov 22 09:12:45 compute-0 kernel: tap5898357d-71 (unregistering): left promiscuous mode
Nov 22 09:12:45 compute-0 ovn_controller[152872]: 2025-11-22T09:12:45Z|00179|binding|INFO|Releasing lport 5898357d-7112-429d-86c6-24932a2fc274 from this chassis (sb_readonly=0)
Nov 22 09:12:45 compute-0 ovn_controller[152872]: 2025-11-22T09:12:45Z|00180|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 down in Southbound
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:45 compute-0 ovn_controller[152872]: 2025-11-22T09:12:45Z|00181|binding|INFO|Removing iface tap5898357d-71 ovn-installed in OVS
Nov 22 09:12:45 compute-0 NetworkManager[48920]: <info>  [1763802765.2301] device (tap5898357d-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.237 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:97:c6 10.100.0.3'], port_security=['fa:16:3e:b0:97:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5898357d-7112-429d-86c6-24932a2fc274) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.239 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5898357d-7112-429d-86c6-24932a2fc274 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:12:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.243 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:12:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.245 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1c9859-a78b-4edc-8e31-0fc3378475cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.246 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.261 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:45 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 22 09:12:45 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 15.652s CPU time.
Nov 22 09:12:45 compute-0 systemd-machined[215941]: Machine qemu-31-instance-0000001b terminated.
Nov 22 09:12:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.581 253665 INFO nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance shutdown successfully after 26 seconds.
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.590 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance destroyed successfully.
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.591 253665 DEBUG nova.objects.instance [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.605 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:45 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : haproxy version is 2.8.14-c23fe91
Nov 22 09:12:45 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : path to executable is /usr/sbin/haproxy
Nov 22 09:12:45 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [WARNING]  (289521) : Exiting Master process...
Nov 22 09:12:45 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [ALERT]    (289521) : Current worker (289538) exited with code 143 (Terminated)
Nov 22 09:12:45 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [WARNING]  (289521) : All workers exited. Exiting... (0)
Nov 22 09:12:45 compute-0 systemd[1]: libpod-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope: Deactivated successfully.
Nov 22 09:12:45 compute-0 podman[290420]: 2025-11-22 09:12:45.621246579 +0000 UTC m=+0.237832065 container died c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:12:45 compute-0 nova_compute[253661]: 2025-11-22 09:12:45.655 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:45 compute-0 sudo[290456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:45 compute-0 sudo[290456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:45 compute-0 sudo[290456]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:46 compute-0 sudo[290481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:12:46 compute-0 sudo[290481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:46 compute-0 sudo[290481]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:46 compute-0 sudo[290518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:46 compute-0 sudo[290518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:46 compute-0 sudo[290518]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:46 compute-0 sudo[290552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:12:46 compute-0 sudo[290552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.347 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received unexpected event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with vm_state active and task_state deleting.
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state stopped and task_state None.
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.352 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:46 compute-0 nova_compute[253661]: 2025-11-22 09:12:46.352 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state stopped and task_state None.
Nov 22 09:12:46 compute-0 ovn_controller[152872]: 2025-11-22T09:12:46Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:12:46 compute-0 ovn_controller[152872]: 2025-11-22T09:12:46Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 09:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928-userdata-shm.mount: Deactivated successfully.
Nov 22 09:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2cc462d771deebe4c19614685b97fc397e44751755ad5b1c8389e8f680dea6-merged.mount: Deactivated successfully.
Nov 22 09:12:46 compute-0 ceph-mon[75021]: pgmap v1410: 305 pgs: 305 active+clean; 547 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Nov 22 09:12:47 compute-0 sudo[290552]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 552 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:12:47 compute-0 podman[290420]: 2025-11-22 09:12:47.566564075 +0000 UTC m=+2.183149541 container cleanup c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:12:47 compute-0 systemd[1]: libpod-conmon-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope: Deactivated successfully.
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:12:47 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bcfce270-6d82-4a67-8111-907db03cc725 does not exist
Nov 22 09:12:47 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 212886a2-e170-4162-b3e5-4ceaf877cfa7 does not exist
Nov 22 09:12:47 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 03d3ebc4-e1be-4662-840b-459c6febf050 does not exist
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:12:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:12:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:12:47 compute-0 sudo[290635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:47 compute-0 sudo[290635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:47 compute-0 sudo[290635]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:47 compute-0 sudo[290665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:12:47 compute-0 sudo[290665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:47 compute-0 sudo[290665]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:47 compute-0 nova_compute[253661]: 2025-11-22 09:12:47.790 253665 DEBUG nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:47 compute-0 podman[290505]: 2025-11-22 09:12:47.824091043 +0000 UTC m=+1.748479952 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:12:47 compute-0 podman[290506]: 2025-11-22 09:12:47.826155283 +0000 UTC m=+1.752359746 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 09:12:47 compute-0 nova_compute[253661]: 2025-11-22 09:12:47.841 253665 INFO nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] instance snapshotting
Nov 22 09:12:47 compute-0 nova_compute[253661]: 2025-11-22 09:12:47.842 253665 WARNING nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 22 09:12:47 compute-0 sudo[290690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:47 compute-0 sudo[290690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:47 compute-0 sudo[290690]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:47 compute-0 sudo[290715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:12:47 compute-0 sudo[290715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.068 253665 INFO nova.virt.libvirt.driver [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Beginning cold snapshot process
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:12:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:12:48 compute-0 podman[290614]: 2025-11-22 09:12:48.33626259 +0000 UTC m=+0.738989296 container remove c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f19fdb3d-448e-477c-a927-cadd8632255a]: (4, ('Sat Nov 22 09:12:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928)\nc001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928\nSat Nov 22 09:12:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928)\nc001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[893d8b17-f9d6-4dc7-8b2e-eb184682a7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.346 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:48 compute-0 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4667d0-e4ba-4990-babb-b33d63288dc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3d497d-37fa-4e7b-b4e3-ef16e482036a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.391 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1c535971-75e0-4e90-995f-73f12af37527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.411 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9158061c-ce20-4311-9e31-68f7f9f47f50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558825, 'reachable_time': 21815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290809, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.418 253665 DEBUG nova.virt.libvirt.imagebackend [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.416 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:12:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.425 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1e83a419-1d7a-489e-bf17-1a203840d5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.560 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802753.5599756, de145d76-062b-4362-bc82-09e09d2f9154 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.561 253665 INFO nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Stopped (Lifecycle Event)
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.579 253665 DEBUG nova.compute.manager [None req-489440ab-055c-4a24-8ab9-378e56502abe - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:48 compute-0 podman[290823]: 2025-11-22 09:12:48.581786599 +0000 UTC m=+0.117544205 container create 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:12:48 compute-0 podman[290823]: 2025-11-22 09:12:48.488412439 +0000 UTC m=+0.024170075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.605 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(af237d8df3944bf985d1958a12fa2e46) on rbd image(6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:12:48 compute-0 systemd[1]: Started libpod-conmon-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope.
Nov 22 09:12:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:48 compute-0 nova_compute[253661]: 2025-11-22 09:12:48.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:48 compute-0 podman[290823]: 2025-11-22 09:12:48.979786924 +0000 UTC m=+0.515544550 container init 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:12:48 compute-0 podman[290823]: 2025-11-22 09:12:48.989234071 +0000 UTC m=+0.524991677 container start 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:12:48 compute-0 zen_yonath[290857]: 167 167
Nov 22 09:12:48 compute-0 systemd[1]: libpod-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope: Deactivated successfully.
Nov 22 09:12:48 compute-0 conmon[290857]: conmon 339e7accbe7da6923b44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope/container/memory.events
Nov 22 09:12:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 22 09:12:49 compute-0 podman[290823]: 2025-11-22 09:12:49.278970956 +0000 UTC m=+0.814728562 container attach 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:12:49 compute-0 podman[290823]: 2025-11-22 09:12:49.281733622 +0000 UTC m=+0.817491268 container died 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:12:49 compute-0 ceph-mon[75021]: pgmap v1411: 305 pgs: 305 active+clean; 552 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Nov 22 09:12:49 compute-0 nova_compute[253661]: 2025-11-22 09:12:49.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-20198849129e2d44e7b3d580572dbc12383e0394c90d75e795ec2f2dfea1b10a-merged.mount: Deactivated successfully.
Nov 22 09:12:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 22 09:12:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 22 09:12:50 compute-0 podman[290823]: 2025-11-22 09:12:50.685583625 +0000 UTC m=+2.221341261 container remove 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:12:50 compute-0 systemd[1]: libpod-conmon-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope: Deactivated successfully.
Nov 22 09:12:50 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 22 09:12:51 compute-0 podman[290882]: 2025-11-22 09:12:50.963876824 +0000 UTC m=+0.041312167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:12:51 compute-0 podman[290882]: 2025-11-22 09:12:51.158625519 +0000 UTC m=+0.236060812 container create 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:12:51 compute-0 ceph-mon[75021]: pgmap v1412: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 22 09:12:51 compute-0 ceph-mon[75021]: osdmap e194: 3 total, 3 up, 3 in
Nov 22 09:12:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Nov 22 09:12:51 compute-0 systemd[1]: Started libpod-conmon-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope.
Nov 22 09:12:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:51 compute-0 nova_compute[253661]: 2025-11-22 09:12:51.410 253665 WARNING nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Image not found during snapshot: nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.
Nov 22 09:12:51 compute-0 podman[290896]: 2025-11-22 09:12:51.584123617 +0000 UTC m=+0.383601649 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:12:51 compute-0 podman[290882]: 2025-11-22 09:12:51.584456795 +0000 UTC m=+0.661892118 container init 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:12:51 compute-0 podman[290882]: 2025-11-22 09:12:51.594473686 +0000 UTC m=+0.671908989 container start 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:12:51 compute-0 podman[290882]: 2025-11-22 09:12:51.839255267 +0000 UTC m=+0.916690580 container attach 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:12:52
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:12:52 compute-0 ceph-mon[75021]: pgmap v1414: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Nov 22 09:12:52 compute-0 nova_compute[253661]: 2025-11-22 09:12:52.654 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk@af237d8df3944bf985d1958a12fa2e46 to images/c0a1f7fa-e570-4e82-9df8-99d640ef5df3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:12:52 compute-0 vigorous_benz[290916]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:12:52 compute-0 vigorous_benz[290916]: --> relative data size: 1.0
Nov 22 09:12:52 compute-0 vigorous_benz[290916]: --> All data devices are unavailable
Nov 22 09:12:52 compute-0 systemd[1]: libpod-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Deactivated successfully.
Nov 22 09:12:52 compute-0 systemd[1]: libpod-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Consumed 1.032s CPU time.
Nov 22 09:12:52 compute-0 podman[290882]: 2025-11-22 09:12:52.763560469 +0000 UTC m=+1.840995762 container died 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:12:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 489 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.248 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.249 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.249 253665 DEBUG nova.objects.instance [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.275 253665 DEBUG nova.objects.instance [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.276 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.276 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.278 253665 INFO nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Terminating instance
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.279 253665 DEBUG nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.285 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486-merged.mount: Deactivated successfully.
Nov 22 09:12:53 compute-0 nova_compute[253661]: 2025-11-22 09:12:53.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.220 253665 DEBUG nova.policy [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:12:54 compute-0 podman[290882]: 2025-11-22 09:12:54.264960044 +0000 UTC m=+3.342395337 container remove 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:12:54 compute-0 sudo[290715]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:54 compute-0 systemd[1]: libpod-conmon-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Deactivated successfully.
Nov 22 09:12:54 compute-0 sudo[291001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:54 compute-0 sudo[291001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:54 compute-0 sudo[291001]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:54 compute-0 sudo[291026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:12:54 compute-0 sudo[291026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:54 compute-0 sudo[291026]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:54 compute-0 sudo[291051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:54 compute-0 sudo[291051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:54 compute-0 sudo[291051]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:54 compute-0 kernel: tapd3202009-ab (unregistering): left promiscuous mode
Nov 22 09:12:54 compute-0 NetworkManager[48920]: <info>  [1763802774.5413] device (tapd3202009-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 ovn_controller[152872]: 2025-11-22T09:12:54Z|00182|binding|INFO|Releasing lport d3202009-ab9d-4ee2-a94d-0d05cc739658 from this chassis (sb_readonly=0)
Nov 22 09:12:54 compute-0 ovn_controller[152872]: 2025-11-22T09:12:54Z|00183|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 down in Southbound
Nov 22 09:12:54 compute-0 ovn_controller[152872]: 2025-11-22T09:12:54Z|00184|binding|INFO|Removing iface tapd3202009-ab ovn-installed in OVS
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.565 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:33:a1 10.100.0.3'], port_security=['fa:16:3e:b2:33:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5babe591-239b-4ef7-b193-6960c7313292', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d3202009-ab9d-4ee2-a94d-0d05cc739658) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.568 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d3202009-ab9d-4ee2-a94d-0d05cc739658 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis
Nov 22 09:12:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.571 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:12:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.572 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9d7f7-9f5d-4361-868a-ed9679fc4fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace which is not needed anymore
Nov 22 09:12:54 compute-0 sudo[291076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:12:54 compute-0 sudo[291076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:54 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 22 09:12:54 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 14.946s CPU time.
Nov 22 09:12:54 compute-0 systemd-machined[215941]: Machine qemu-32-instance-0000001c terminated.
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.722 253665 INFO nova.virt.libvirt.driver [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance destroyed successfully.
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.722 253665 DEBUG nova.objects.instance [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'resources' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.736 253665 DEBUG nova.virt.libvirt.vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:51Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.736 253665 DEBUG nova.network.os_vif_util [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.737 253665 DEBUG nova.network.os_vif_util [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.737 253665 DEBUG os_vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.739 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3202009-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.743 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.750 253665 INFO os_vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab')
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : haproxy version is 2.8.14-c23fe91
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : path to executable is /usr/sbin/haproxy
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : Exiting Master process...
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : Exiting Master process...
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [ALERT]    (289993) : Current worker (289995) exited with code 143 (Terminated)
Nov 22 09:12:54 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : All workers exited. Exiting... (0)
Nov 22 09:12:54 compute-0 systemd[1]: libpod-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope: Deactivated successfully.
Nov 22 09:12:54 compute-0 podman[291124]: 2025-11-22 09:12:54.837744053 +0000 UTC m=+0.157706753 container died 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:12:54 compute-0 nova_compute[253661]: 2025-11-22 09:12:54.872 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/c0a1f7fa-e570-4e82-9df8-99d640ef5df3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:12:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:12:55 compute-0 ceph-mon[75021]: pgmap v1415: 305 pgs: 305 active+clean; 489 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Nov 22 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff7de6cecd8d8be804ea66b96176eefccb4d3774ace81e67748b939ad84ffe4-merged.mount: Deactivated successfully.
Nov 22 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da-userdata-shm.mount: Deactivated successfully.
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.196 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: f1f391af-c757-4aab-b0ce-ddad3dab55e7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:12:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 436 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 540 KiB/s wr, 93 op/s
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.429 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:55 compute-0 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:12:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:12:55 compute-0 podman[291124]: 2025-11-22 09:12:55.647877614 +0000 UTC m=+0.967840294 container cleanup 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:12:55 compute-0 systemd[1]: libpod-conmon-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope: Deactivated successfully.
Nov 22 09:12:56 compute-0 podman[291226]: 2025-11-22 09:12:56.338107373 +0000 UTC m=+0.662960382 container remove 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[336541ad-0058-44d2-ac46-a2763afd7f34]: (4, ('Sat Nov 22 09:12:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da)\n328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da\nSat Nov 22 09:12:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da)\n328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.351 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e093c3f-3e38-4f77-a420-26c2954ec8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.351 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:56 compute-0 kernel: tap691e79ad-d0: left promiscuous mode
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.379 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85dc2ac6-e7be-4f56-a799-d28dd820fce5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.385 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: f1f391af-c757-4aab-b0ce-ddad3dab55e7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8709bf4b-d174-4d95-b2f4-0c98a26389f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d52922a5-9776-40b7-b961-834ce1b484df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.400 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.401 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.402 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cbec4509-794d-4cbc-baa5-f7a7415fffb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559073, 'reachable_time': 16379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291257, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d691e79ad\x2dda5d\x2d4276\x2daa7d\x2d732c2aaedbff.mount: Deactivated successfully.
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.423 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:12:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.424 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfbaeae-8875-459b-ac38-49a13729dd30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:56 compute-0 podman[291258]: 2025-11-22 09:12:56.462289307 +0000 UTC m=+0.027906173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:12:56 compute-0 ceph-mon[75021]: pgmap v1416: 305 pgs: 305 active+clean; 436 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 540 KiB/s wr, 93 op/s
Nov 22 09:12:56 compute-0 podman[291258]: 2025-11-22 09:12:56.822573913 +0000 UTC m=+0.388190779 container create cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:12:56 compute-0 nova_compute[253661]: 2025-11-22 09:12:56.943 253665 WARNING nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:12:57 compute-0 systemd[1]: Started libpod-conmon-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope.
Nov 22 09:12:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 442 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 544 KiB/s wr, 86 op/s
Nov 22 09:12:57 compute-0 nova_compute[253661]: 2025-11-22 09:12:57.270 253665 DEBUG nova.compute.manager [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:57 compute-0 nova_compute[253661]: 2025-11-22 09:12:57.271 253665 DEBUG nova.compute.manager [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-f1f391af-c757-4aab-b0ce-ddad3dab55e7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:12:57 compute-0 nova_compute[253661]: 2025-11-22 09:12:57.271 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:12:57 compute-0 podman[291258]: 2025-11-22 09:12:57.309066862 +0000 UTC m=+0.874683748 container init cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:12:57 compute-0 podman[291258]: 2025-11-22 09:12:57.317122625 +0000 UTC m=+0.882739491 container start cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:12:57 compute-0 xenodochial_cannon[291274]: 167 167
Nov 22 09:12:57 compute-0 systemd[1]: libpod-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope: Deactivated successfully.
Nov 22 09:12:57 compute-0 podman[291258]: 2025-11-22 09:12:57.539559427 +0000 UTC m=+1.105176323 container attach cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 09:12:57 compute-0 podman[291258]: 2025-11-22 09:12:57.540150952 +0000 UTC m=+1.105767818 container died cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-700bf1d6dd92ec38ba68438a77216b66b278f877e758717ab16d00885b7dcf71-merged.mount: Deactivated successfully.
Nov 22 09:12:58 compute-0 podman[291258]: 2025-11-22 09:12:58.219800926 +0000 UTC m=+1.785417792 container remove cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 DEBUG nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 WARNING nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received unexpected event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with vm_state active and task_state deleting.
Nov 22 09:12:58 compute-0 systemd[1]: libpod-conmon-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope: Deactivated successfully.
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.326 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(af237d8df3944bf985d1958a12fa2e46) on rbd image(6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.376 253665 INFO nova.virt.libvirt.driver [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deleting instance files /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7_del
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.377 253665 INFO nova.virt.libvirt.driver [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deletion of /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7_del complete
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.438 253665 INFO nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 15.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.439 253665 DEBUG oslo.service.loopingcall [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.440 253665 DEBUG nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.440 253665 DEBUG nova.network.neutron [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:12:58 compute-0 podman[291316]: 2025-11-22 09:12:58.442895603 +0000 UTC m=+0.060471388 container create 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:12:58 compute-0 systemd[1]: Started libpod-conmon-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope.
Nov 22 09:12:58 compute-0 podman[291316]: 2025-11-22 09:12:58.416149079 +0000 UTC m=+0.033724894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:12:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:12:58 compute-0 podman[291316]: 2025-11-22 09:12:58.544047593 +0000 UTC m=+0.161623408 container init 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:12:58 compute-0 podman[291316]: 2025-11-22 09:12:58.552349412 +0000 UTC m=+0.169925237 container start 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:12:58 compute-0 podman[291316]: 2025-11-22 09:12:58.556526043 +0000 UTC m=+0.174101858 container attach 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:12:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.731 253665 INFO nova.virt.libvirt.driver [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deleting instance files /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292_del
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.732 253665 INFO nova.virt.libvirt.driver [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deletion of /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292_del complete
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.739 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 22 09:12:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 22 09:12:58 compute-0 ceph-mon[75021]: pgmap v1417: 305 pgs: 305 active+clean; 442 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 544 KiB/s wr, 86 op/s
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.774 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.775 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.775 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port f1f391af-c757-4aab-b0ce-ddad3dab55e7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.779 253665 DEBUG nova.virt.libvirt.vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.779 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.780 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.780 253665 DEBUG os_vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1f391af-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.786 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1f391af-c7, col_values=(('external_ids', {'iface-id': 'f1f391af-c757-4aab-b0ce-ddad3dab55e7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:0f:1c', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:58 compute-0 NetworkManager[48920]: <info>  [1763802778.7942] manager: (tapf1f391af-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.974 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802763.7952714, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.974 253665 INFO nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Stopped (Lifecycle Event)
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.978 253665 INFO os_vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7')
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.979 253665 DEBUG nova.virt.libvirt.vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.979 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.980 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:12:58 compute-0 nova_compute[253661]: 2025-11-22 09:12:58.985 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(c0a1f7fa-e570-4e82-9df8-99d640ef5df3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.050 253665 DEBUG nova.compute.manager [None req-c769c8a3-72ad-4b1b-ab91-51610934b095 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.053 253665 DEBUG nova.virt.libvirt.guest [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <target dev="tapf1f391af-c7"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]: </interface>
Nov 22 09:12:59 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.056 253665 INFO nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 5.78 seconds to destroy the instance on the hypervisor.
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.056 253665 DEBUG oslo.service.loopingcall [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.057 253665 DEBUG nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.057 253665 DEBUG nova.network.neutron [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.067 253665 DEBUG nova.network.neutron [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:59 compute-0 kernel: tapf1f391af-c7: entered promiscuous mode
Nov 22 09:12:59 compute-0 NetworkManager[48920]: <info>  [1763802779.0714] manager: (tapf1f391af-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 22 09:12:59 compute-0 ovn_controller[152872]: 2025-11-22T09:12:59Z|00185|binding|INFO|Claiming lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 for this chassis.
Nov 22 09:12:59 compute-0 ovn_controller[152872]: 2025-11-22T09:12:59Z|00186|binding|INFO|f1f391af-c757-4aab-b0ce-ddad3dab55e7: Claiming fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.089 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:0f:1c 10.100.0.13'], port_security=['fa:16:3e:97:0f:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f1f391af-c757-4aab-b0ce-ddad3dab55e7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.091 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f1f391af-c757-4aab-b0ce-ddad3dab55e7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.096 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.100 253665 INFO nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 0.66 seconds to deallocate network for instance.
Nov 22 09:12:59 compute-0 ovn_controller[152872]: 2025-11-22T09:12:59Z|00187|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 ovn-installed in OVS
Nov 22 09:12:59 compute-0 ovn_controller[152872]: 2025-11-22T09:12:59Z|00188|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 up in Southbound
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 systemd-udevd[291368]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.122 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[981ba118-3733-40c3-94e2-2187b59008da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 NetworkManager[48920]: <info>  [1763802779.1342] device (tapf1f391af-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:12:59 compute-0 NetworkManager[48920]: <info>  [1763802779.1351] device (tapf1f391af-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.160 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.161 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.164 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18b88600-348e-416f-a067-1e770a9ec4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.170 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d389fe6a-071a-4452-9a1e-19736f3c924c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.205 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fc1282-b5a5-415c-97bd-f8595dfb5cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.209 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.209 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.210 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.210 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:12:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.227 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[729c6531-da76-4f51-be6e-1e06197c6703]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291377, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.241 253665 DEBUG nova.virt.libvirt.guest [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:12:59</nova:creationTime>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:12:59 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 09:12:59 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:12:59 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:12:59 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:12:59 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:12:59 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.253 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60fe7a64-15f3-496b-9450-b7aa17c1d163]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291378, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291378, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.256 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.264 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.264 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.265 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.265 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.267 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.356 253665 DEBUG oslo_concurrency.processutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:12:59 compute-0 sleepy_buck[291334]: {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     "0": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "devices": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "/dev/loop3"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             ],
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_name": "ceph_lv0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_size": "21470642176",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "name": "ceph_lv0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "tags": {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_name": "ceph",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.crush_device_class": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.encrypted": "0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_id": "0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.vdo": "0"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             },
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "vg_name": "ceph_vg0"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         }
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     ],
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     "1": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "devices": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "/dev/loop4"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             ],
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_name": "ceph_lv1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_size": "21470642176",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "name": "ceph_lv1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "tags": {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_name": "ceph",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.crush_device_class": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.encrypted": "0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_id": "1",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.vdo": "0"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             },
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "vg_name": "ceph_vg1"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         }
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     ],
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     "2": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "devices": [
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "/dev/loop5"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             ],
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_name": "ceph_lv2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_size": "21470642176",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "name": "ceph_lv2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "tags": {
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.cluster_name": "ceph",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.crush_device_class": "",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.encrypted": "0",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osd_id": "2",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:                 "ceph.vdo": "0"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             },
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "type": "block",
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:             "vg_name": "ceph_vg2"
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:         }
Nov 22 09:12:59 compute-0 sleepy_buck[291334]:     ]
Nov 22 09:12:59 compute-0 sleepy_buck[291334]: }
Nov 22 09:12:59 compute-0 systemd[1]: libpod-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope: Deactivated successfully.
Nov 22 09:12:59 compute-0 conmon[291334]: conmon 308d8108899240841c04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope/container/memory.events
Nov 22 09:12:59 compute-0 podman[291316]: 2025-11-22 09:12:59.514794735 +0000 UTC m=+1.132370520 container died 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e-merged.mount: Deactivated successfully.
Nov 22 09:12:59 compute-0 podman[291316]: 2025-11-22 09:12:59.598731488 +0000 UTC m=+1.216307273 container remove 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:12:59 compute-0 systemd[1]: libpod-conmon-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope: Deactivated successfully.
Nov 22 09:12:59 compute-0 sudo[291076]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:59 compute-0 sudo[291415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:59 compute-0 sudo[291415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:59 compute-0 sudo[291415]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 22 09:12:59 compute-0 ceph-mon[75021]: osdmap e195: 3 total, 3 up, 3 in
Nov 22 09:12:59 compute-0 sudo[291440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:12:59 compute-0 sudo[291440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:59 compute-0 sudo[291440]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:12:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 22 09:12:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:12:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62017300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:12:59 compute-0 sudo[291465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:12:59 compute-0 sudo[291465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:59 compute-0 sudo[291465]: pam_unix(sudo:session): session closed for user root
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.864 253665 DEBUG oslo_concurrency.processutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.872 253665 DEBUG nova.compute.provider_tree [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.883 253665 DEBUG nova.network.neutron [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.898 253665 DEBUG nova.scheduler.client.report [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.905 253665 INFO nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 0.85 seconds to deallocate network for instance.
Nov 22 09:12:59 compute-0 sudo[291492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:12:59 compute-0 sudo[291492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.930 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.958 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.959 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:12:59 compute-0 nova_compute[253661]: 2025-11-22 09:12:59.968 253665 INFO nova.scheduler.client.report [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance 96000606-0bc4-4cf1-9e33-360a640c2cb7
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.005 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.006 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.006 253665 DEBUG nova.objects.instance [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.047 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.102 253665 DEBUG oslo_concurrency.processutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.245413558 +0000 UTC m=+0.042615629 container create 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:13:00 compute-0 systemd[1]: Started libpod-conmon-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope.
Nov 22 09:13:00 compute-0 ovn_controller[152872]: 2025-11-22T09:13:00Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 09:13:00 compute-0 ovn_controller[152872]: 2025-11-22T09:13:00Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.224853872 +0000 UTC m=+0.022055973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:13:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.354496178 +0000 UTC m=+0.151698269 container init 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.362835029 +0000 UTC m=+0.160037110 container start 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:13:00 compute-0 affectionate_hawking[291592]: 167 167
Nov 22 09:13:00 compute-0 systemd[1]: libpod-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope: Deactivated successfully.
Nov 22 09:13:00 compute-0 conmon[291592]: conmon 90d7ee2b41fe8c93e212 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope/container/memory.events
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.369 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-deleted-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 WARNING nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.372356258 +0000 UTC m=+0.169558339 container attach 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 WARNING nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-deleted-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.375468053 +0000 UTC m=+0.172670124 container died 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-66baa077c0e7c25481ce55fa9e7b1c8ab42faa0933dce8ad2607d0b0437e8066-merged.mount: Deactivated successfully.
Nov 22 09:13:00 compute-0 podman[291557]: 2025-11-22 09:13:00.44337858 +0000 UTC m=+0.240580651 container remove 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:13:00 compute-0 systemd[1]: libpod-conmon-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope: Deactivated successfully.
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.478 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802765.4771852, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.479 253665 INFO nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Stopped (Lifecycle Event)
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.513 253665 DEBUG nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.520 253665 DEBUG nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: image_uploading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.539 253665 INFO nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (image_uploading). Skip.
Nov 22 09:13:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201247690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.614 253665 DEBUG oslo_concurrency.processutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.621 253665 DEBUG nova.compute.provider_tree [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.631 253665 DEBUG nova.scheduler.client.report [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:00 compute-0 podman[291616]: 2025-11-22 09:13:00.635470251 +0000 UTC m=+0.051962584 container create 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.642 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port f1f391af-c757-4aab-b0ce-ddad3dab55e7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.643 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.657 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.660 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.687 253665 INFO nova.scheduler.client.report [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Deleted allocations for instance 5babe591-239b-4ef7-b193-6960c7313292
Nov 22 09:13:00 compute-0 systemd[1]: Started libpod-conmon-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope.
Nov 22 09:13:00 compute-0 podman[291616]: 2025-11-22 09:13:00.611215206 +0000 UTC m=+0.027707559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:13:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.758 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:00 compute-0 podman[291616]: 2025-11-22 09:13:00.773172131 +0000 UTC m=+0.189664464 container init 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:13:00 compute-0 podman[291616]: 2025-11-22 09:13:00.781950272 +0000 UTC m=+0.198442605 container start 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:13:00 compute-0 podman[291616]: 2025-11-22 09:13:00.788538961 +0000 UTC m=+0.205031344 container attach 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 09:13:00 compute-0 ceph-mon[75021]: pgmap v1419: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 22 09:13:00 compute-0 ceph-mon[75021]: osdmap e196: 3 total, 3 up, 3 in
Nov 22 09:13:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/62017300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3201247690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.827 253665 DEBUG nova.objects.instance [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:00 compute-0 nova_compute[253661]: 2025-11-22 09:13:00.839 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.005 253665 DEBUG nova.policy [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 153 op/s
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.260 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.261 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.262 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.263 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.263 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.266 253665 INFO nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Terminating instance
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.268 253665 DEBUG nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:01 compute-0 kernel: tapa36e1a52-1f (unregistering): left promiscuous mode
Nov 22 09:13:01 compute-0 NetworkManager[48920]: <info>  [1763802781.3394] device (tapa36e1a52-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.352 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 ovn_controller[152872]: 2025-11-22T09:13:01Z|00189|binding|INFO|Releasing lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 from this chassis (sb_readonly=0)
Nov 22 09:13:01 compute-0 ovn_controller[152872]: 2025-11-22T09:13:01Z|00190|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 down in Southbound
Nov 22 09:13:01 compute-0 ovn_controller[152872]: 2025-11-22T09:13:01Z|00191|binding|INFO|Removing iface tapa36e1a52-1f ovn-installed in OVS
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 22 09:13:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 20.596s CPU time.
Nov 22 09:13:01 compute-0 systemd-machined[215941]: Machine qemu-22-instance-00000013 terminated.
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.436 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5a:f3 10.100.0.11'], port_security=['fa:16:3e:0c:5a:f3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.442 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.467 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9084f498-35c8-4994-bde0-e6c0187bd6cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.510 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9a2351-0170-411a-9b81-6ff8ecc60b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.515 253665 INFO nova.virt.libvirt.driver [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance destroyed successfully.
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.515 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23d353f2-ae68-4505-8a12-a670f1551d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.517 253665 DEBUG nova.objects.instance [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.536 253665 DEBUG nova.virt.libvirt.vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:12Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.537 253665 DEBUG nova.network.os_vif_util [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.538 253665 DEBUG nova.network.os_vif_util [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.538 253665 DEBUG os_vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.542 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa36e1a52-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.552 253665 INFO os_vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f')
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.556 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e123e8a0-3d9d-465a-b72c-4bc26688a1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.578 253665 INFO nova.virt.libvirt.driver [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Snapshot image upload complete
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.578 253665 INFO nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 13.73 seconds to snapshot the instance on the hypervisor.
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50bcfd2c-594f-4743-bf46-9213d24fca58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 23, 'rx_bytes': 1036, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 23, 'rx_bytes': 1036, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f434ac-cf39-468b-81ae-307e71801987]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291695, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291695, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 nova_compute[253661]: 2025-11-22 09:13:01.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.615 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.615 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:01 compute-0 quirky_jackson[291635]: {
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_id": 1,
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "type": "bluestore"
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     },
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_id": 0,
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "type": "bluestore"
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     },
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_id": 2,
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:         "type": "bluestore"
Nov 22 09:13:01 compute-0 quirky_jackson[291635]:     }
Nov 22 09:13:01 compute-0 quirky_jackson[291635]: }
Nov 22 09:13:01 compute-0 systemd[1]: libpod-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Deactivated successfully.
Nov 22 09:13:01 compute-0 systemd[1]: libpod-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Consumed 1.025s CPU time.
Nov 22 09:13:01 compute-0 podman[291616]: 2025-11-22 09:13:01.822600619 +0000 UTC m=+1.239092952 container died 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.100 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: 995224e6-d1ff-4d74-bca5-3996eb4d404d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:02 compute-0 ceph-mon[75021]: pgmap v1421: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 153 op/s
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003791913907338673 of space, bias 1.0, pg target 1.137574172201602 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013400469859021944 of space, bias 1.0, pg target 0.40067404878475615 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:13:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 09:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7-merged.mount: Deactivated successfully.
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:02 compute-0 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 WARNING nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received unexpected event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with vm_state active and task_state deleting.
Nov 22 09:13:03 compute-0 podman[291616]: 2025-11-22 09:13:03.206790919 +0000 UTC m=+2.623283252 container remove 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:13:03 compute-0 systemd[1]: libpod-conmon-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Deactivated successfully.
Nov 22 09:13:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 480 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Nov 22 09:13:03 compute-0 sudo[291492]: pam_unix(sudo:session): session closed for user root
Nov 22 09:13:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:13:03 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:13:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.417 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: 995224e6-d1ff-4d74-bca5-3996eb4d404d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:03 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:13:03 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ba50615c-2ba6-498d-8fd2-54cff71cd9ca does not exist
Nov 22 09:13:03 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d0c69ac1-e3f2-4c07-ac95-6eb96affee3b does not exist
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.588 253665 WARNING nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:13:03 compute-0 nova_compute[253661]: 2025-11-22 09:13:03.588 253665 WARNING nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:13:03 compute-0 sudo[291728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:13:03 compute-0 sudo[291728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:13:03 compute-0 sudo[291728]: pam_unix(sudo:session): session closed for user root
Nov 22 09:13:03 compute-0 sudo[291753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:13:03 compute-0 sudo[291753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:13:03 compute-0 sudo[291753]: pam_unix(sudo:session): session closed for user root
Nov 22 09:13:04 compute-0 ceph-mon[75021]: pgmap v1422: 305 pgs: 305 active+clean; 480 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Nov 22 09:13:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:13:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:13:04 compute-0 nova_compute[253661]: 2025-11-22 09:13:04.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 438 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 169 op/s
Nov 22 09:13:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.772 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.790 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.793 253665 DEBUG nova.virt.libvirt.vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.793 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.794 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.795 253665 DEBUG os_vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap995224e6-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap995224e6-d1, col_values=(('external_ids', {'iface-id': '995224e6-d1ff-4d74-bca5-3996eb4d404d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:fb:57', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 NetworkManager[48920]: <info>  [1763802785.8031] manager: (tap995224e6-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.814 253665 INFO os_vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1')
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.815 253665 DEBUG nova.virt.libvirt.vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.816 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.817 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.825 253665 DEBUG nova.virt.libvirt.guest [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:13:05 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a6:fb:57"/>
Nov 22 09:13:05 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:13:05 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:05 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:13:05 compute-0 nova_compute[253661]:   <target dev="tap995224e6-d1"/>
Nov 22 09:13:05 compute-0 nova_compute[253661]: </interface>
Nov 22 09:13:05 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:13:05 compute-0 kernel: tap995224e6-d1: entered promiscuous mode
Nov 22 09:13:05 compute-0 NetworkManager[48920]: <info>  [1763802785.8431] manager: (tap995224e6-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 22 09:13:05 compute-0 ovn_controller[152872]: 2025-11-22T09:13:05Z|00192|binding|INFO|Claiming lport 995224e6-d1ff-4d74-bca5-3996eb4d404d for this chassis.
Nov 22 09:13:05 compute-0 ovn_controller[152872]: 2025-11-22T09:13:05Z|00193|binding|INFO|995224e6-d1ff-4d74-bca5-3996eb4d404d: Claiming fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:fb:57 10.100.0.11'], port_security=['fa:16:3e:a6:fb:57 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=995224e6-d1ff-4d74-bca5-3996eb4d404d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.858 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 995224e6-d1ff-4d74-bca5-3996eb4d404d in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.859 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:05 compute-0 ovn_controller[152872]: 2025-11-22T09:13:05Z|00194|binding|INFO|Setting lport 995224e6-d1ff-4d74-bca5-3996eb4d404d ovn-installed in OVS
Nov 22 09:13:05 compute-0 ovn_controller[152872]: 2025-11-22T09:13:05Z|00195|binding|INFO|Setting lport 995224e6-d1ff-4d74-bca5-3996eb4d404d up in Southbound
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 nova_compute[253661]: 2025-11-22 09:13:05.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3305be6a-2317-446e-8653-e67b59171bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:05 compute-0 systemd-udevd[291789]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:05 compute-0 NetworkManager[48920]: <info>  [1763802785.9030] device (tap995224e6-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:05 compute-0 NetworkManager[48920]: <info>  [1763802785.9037] device (tap995224e6-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.925 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbc5d91-b5d0-49df-96cc-63a471e06f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.930 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a290f258-a2f4-4e67-8eeb-991c1406e296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.970 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce07fed7-20a6-41e1-ae76-b4dd38b56b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.002 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48d848f2-f712-4e48-a8ce-09bae10d8b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291798, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[426cf276-1489-40ef-96ba-53e5d8e91e0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291799, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291799, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.025 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.178 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.180 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.180 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a6:fb:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.204 253665 DEBUG nova.virt.libvirt.guest [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:06</nova:creationTime>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:06 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 09:13:06 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:06 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:06 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:06 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:06 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:06 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:13:06 compute-0 nova_compute[253661]: 2025-11-22 09:13:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 22 09:13:06 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 22 09:13:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 432 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 22 09:13:07 compute-0 ceph-mon[75021]: pgmap v1423: 305 pgs: 305 active+clean; 438 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 169 op/s
Nov 22 09:13:07 compute-0 ceph-mon[75021]: osdmap e197: 3 total, 3 up, 3 in
Nov 22 09:13:07 compute-0 ovn_controller[152872]: 2025-11-22T09:13:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 09:13:07 compute-0 ovn_controller[152872]: 2025-11-22T09:13:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 09:13:08 compute-0 ceph-mon[75021]: pgmap v1425: 305 pgs: 305 active+clean; 432 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 22 09:13:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 919 KiB/s wr, 81 op/s
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.344 253665 INFO nova.virt.libvirt.driver [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deleting instance files /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_del
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.345 253665 INFO nova.virt.libvirt.driver [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deletion of /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_del complete
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.399 253665 INFO nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 8.13 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG oslo.service.loopingcall [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG nova.network.neutron [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.515 253665 DEBUG nova.compute.manager [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.515 253665 DEBUG nova.compute.manager [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-995224e6-d1ff-4d74-bca5-3996eb4d404d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port 995224e6-d1ff-4d74-bca5-3996eb4d404d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.721 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802774.7174582, 5babe591-239b-4ef7-b193-6960c7313292 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.721 253665 INFO nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Stopped (Lifecycle Event)
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.745 253665 DEBUG nova.compute.manager [None req-db15371d-8bb0-4731-8de1-d36742f17e95 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.980 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.980 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.982 253665 INFO nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Terminating instance
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.983 253665 DEBUG nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.989 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance destroyed successfully.
Nov 22 09:13:09 compute-0 nova_compute[253661]: 2025-11-22 09:13:09.990 253665 DEBUG nova.objects.instance [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.007 253665 DEBUG nova.virt.libvirt.vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:01Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.007 253665 DEBUG nova.network.os_vif_util [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.008 253665 DEBUG nova.network.os_vif_util [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.008 253665 DEBUG os_vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5898357d-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.013 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.015 253665 INFO os_vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71')
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.247 253665 DEBUG nova.network.neutron [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.271 253665 INFO nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 0.87 seconds to deallocate network for instance.
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.319 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.320 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 22 09:13:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 22 09:13:10 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.712 253665 DEBUG oslo_concurrency.processutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.893 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port 995224e6-d1ff-4d74-bca5-3996eb4d404d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.894 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.917 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.979 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.980 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.981 253665 DEBUG nova.objects.instance [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.995 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:10 compute-0 nova_compute[253661]: 2025-11-22 09:13:10.996 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.022 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.112 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:11 compute-0 ceph-mon[75021]: pgmap v1426: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 919 KiB/s wr, 81 op/s
Nov 22 09:13:11 compute-0 ceph-mon[75021]: osdmap e198: 3 total, 3 up, 3 in
Nov 22 09:13:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 20 KiB/s wr, 53 op/s
Nov 22 09:13:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449920996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.447 253665 DEBUG oslo_concurrency.processutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.455 253665 DEBUG nova.compute.provider_tree [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.469 253665 DEBUG nova.scheduler.client.report [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.680 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.682 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.690 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.691 253665 INFO nova.compute.claims [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.726 253665 INFO nova.scheduler.client.report [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.802 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:11 compute-0 nova_compute[253661]: 2025-11-22 09:13:11.861 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.161 253665 DEBUG nova.objects.instance [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.177 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:12 compute-0 ceph-mon[75021]: pgmap v1428: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 20 KiB/s wr, 53 op/s
Nov 22 09:13:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3449920996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077584066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:13:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:13:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:13:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.447 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.447 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 WARNING nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with vm_state active and task_state None.
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 WARNING nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with vm_state active and task_state None.
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-deleted-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.455 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.462 253665 DEBUG nova.compute.provider_tree [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.475 253665 DEBUG nova.scheduler.client.report [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.544 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.545 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.644 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.645 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.719 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.768 253665 INFO nova.virt.libvirt.driver [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deleting instance files /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_del
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.769 253665 INFO nova.virt.libvirt.driver [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deletion of /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_del complete
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.793 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.941 253665 INFO nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 2.96 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.941 253665 DEBUG oslo.service.loopingcall [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.942 253665 DEBUG nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.942 253665 DEBUG nova.network.neutron [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:12 compute-0 nova_compute[253661]: 2025-11-22 09:13:12.985 253665 DEBUG nova.policy [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cac95dc532449d964ffb3705dae943', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.028 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.029 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.030 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating image(s)
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.052 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.078 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.116 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.127 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.190 253665 DEBUG nova.policy [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.208 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.209 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.210 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.210 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.229 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 304 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 8.0 KiB/s wr, 43 op/s
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.233 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4077584066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:13:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.610 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.613 253665 INFO nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Terminating instance
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.614 253665 DEBUG nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.659 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.659 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.677 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:13 compute-0 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 09:13:13 compute-0 NetworkManager[48920]: <info>  [1763802793.7681] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:13 compute-0 ovn_controller[152872]: 2025-11-22T09:13:13Z|00196|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 09:13:13 compute-0 ovn_controller[152872]: 2025-11-22T09:13:13Z|00197|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 09:13:13 compute-0 ovn_controller[152872]: 2025-11-22T09:13:13Z|00198|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.776 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.777 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.792 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.793 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 09:13:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.795 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.788 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.789 253665 INFO nova.compute.claims [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.792 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d703b1-40cc-4fba-9e37-24a5a7081da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.797 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a namespace which is not needed anymore
Nov 22 09:13:13 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 09:13:13 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000012.scope: Consumed 18.423s CPU time.
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:13 compute-0 systemd-machined[215941]: Machine qemu-33-instance-00000012 terminated.
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.906 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] resizing rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : haproxy version is 2.8.14-c23fe91
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : path to executable is /usr/sbin/haproxy
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : Exiting Master process...
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : Exiting Master process...
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [ALERT]    (282428) : Current worker (282430) exited with code 143 (Terminated)
Nov 22 09:13:13 compute-0 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : All workers exited. Exiting... (0)
Nov 22 09:13:13 compute-0 systemd[1]: libpod-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope: Deactivated successfully.
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.956 253665 DEBUG nova.network.neutron [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:13 compute-0 conmon[282424]: conmon af2907c7193e7dfa3192 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope/container/memory.events
Nov 22 09:13:13 compute-0 podman[292017]: 2025-11-22 09:13:13.962886503 +0000 UTC m=+0.057051243 container died af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.973 253665 INFO nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 1.03 seconds to deallocate network for instance.
Nov 22 09:13:13 compute-0 nova_compute[253661]: 2025-11-22 09:13:13.981 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Successfully created port: 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.026 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6-userdata-shm.mount: Deactivated successfully.
Nov 22 09:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31e66055707f791e950c824f07df600e3199b957edd8fd29251cf5299b718d3-merged.mount: Deactivated successfully.
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.062 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.062 253665 DEBUG nova.objects.instance [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.071 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:14 compute-0 podman[292017]: 2025-11-22 09:13:14.080717799 +0000 UTC m=+0.174882539 container cleanup af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:13:14 compute-0 systemd[1]: libpod-conmon-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope: Deactivated successfully.
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.109 253665 DEBUG nova.virt.libvirt.vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:30Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.110 253665 DEBUG nova.network.os_vif_util [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.111 253665 DEBUG nova.network.os_vif_util [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.111 253665 DEBUG os_vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.146 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.158 253665 DEBUG nova.objects.instance [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 14600eae-75dc-4ffc-a15a-bdb234f164d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.162 253665 INFO os_vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.182 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.182 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Ensure instance console log exists: /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:14 compute-0 podman[292075]: 2025-11-22 09:13:14.18453767 +0000 UTC m=+0.079972126 container remove af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc51f79-3b3b-4b36-a5f3-02b6aaa869e2]: (4, ('Sat Nov 22 09:13:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a (af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6)\naf2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6\nSat Nov 22 09:13:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a (af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6)\naf2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.194 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632e28b9-181f-48e0-b39e-a6d9d378976b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.195 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 kernel: tap514ab32c-30: left promiscuous mode
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81601035-2b21-49bd-a9f0-3869c22eed12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.234 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[80d75b39-5fe9-4eff-bb44-1e73a7a95804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.236 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bd2dc0-ddfe-450e-b3a4-443e08080125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.253 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be7ce0d0-018d-4653-a3f3-dfbc9d238604]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545917, 'reachable_time': 32355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292145, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d514ab32c\x2d3e9b\x2d4d95\x2d81f8\x2d6acc06be6d1a.mount: Deactivated successfully.
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.255 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:13:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.256 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ae790934-45e6-4df6-b7fa-31c203227787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.286 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: 07d520ca-fd4a-49e6-b52e-ee9e8208b902 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.299 253665 DEBUG nova.compute.manager [req-f9d5b89f-bba0-45a4-8f1c-f21128f451ea req-4bdacb9c-1e3b-43d1-9081-aa31e9388d4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-deleted-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.300 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.301 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.301 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:14 compute-0 ceph-mon[75021]: pgmap v1429: 305 pgs: 305 active+clean; 304 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 8.0 KiB/s wr, 43 op/s
Nov 22 09:13:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563768531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.543 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.549 253665 DEBUG nova.compute.provider_tree [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.562 253665 DEBUG nova.scheduler.client.report [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.581 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.581 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.584 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.623 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.623 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.638 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.654 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.694 253665 DEBUG oslo_concurrency.processutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.769 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.771 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.772 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating image(s)
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.794 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.823 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.844 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.848 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.884 253665 DEBUG nova.policy [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.897 253665 INFO nova.virt.libvirt.driver [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.898 253665 INFO nova.virt.libvirt.driver [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.923 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.923 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.924 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.924 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.944 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.947 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.981 253665 INFO nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 1.37 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.982 253665 DEBUG oslo.service.loopingcall [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.983 253665 DEBUG nova.compute.manager [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:14 compute-0 nova_compute[253661]: 2025-11-22 09:13:14.983 253665 DEBUG nova.network.neutron [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.168 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Successfully updated port: 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609026723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquired lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.200 253665 DEBUG oslo_concurrency.processutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.205 253665 DEBUG nova.compute.provider_tree [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.221 253665 DEBUG nova.scheduler.client.report [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 200 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 78 op/s
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.243 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.269 253665 INFO nova.scheduler.client.report [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 6e825024-ffe6-4fdb-abaa-0c99c65ac38b
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.290 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-07d520ca-fd4a-49e6-b52e-ee9e8208b902. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.293 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.328 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.355 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.361 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1563768531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/609026723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.406 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.523 253665 DEBUG nova.objects.instance [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.539 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Ensure instance console log exists: /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.541 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 22 09:13:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 22 09:13:15 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.648 253665 DEBUG nova.network.neutron [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.663 253665 INFO nova.compute.manager [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 0.68 seconds to deallocate network for instance.
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.708 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.709 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.734 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Successfully created port: 2c059df4-a5a0-4c31-8485-01ccdea02b01 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:15 compute-0 nova_compute[253661]: 2025-11-22 09:13:15.821 253665 DEBUG oslo_concurrency.processutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987270026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.284 253665 DEBUG oslo_concurrency.processutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.291 253665 DEBUG nova.compute.provider_tree [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.315 253665 DEBUG nova.scheduler.client.report [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.339 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updating instance_info_cache with network_info: [{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.343 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.366 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Releasing lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.367 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance network_info: |[{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.369 253665 INFO nova.scheduler.client.report [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.373 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start _get_guest_xml network_info=[{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.378 253665 WARNING nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.385 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.385 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.392 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.393 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.397 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.401 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.439 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.511 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802781.509268, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.511 253665 INFO nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Stopped (Lifecycle Event)
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.529 253665 DEBUG nova.compute.manager [None req-2a8de323-d873-4f53-bd10-92291e2fb3b9 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:16 compute-0 ceph-mon[75021]: pgmap v1430: 305 pgs: 305 active+clean; 200 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 78 op/s
Nov 22 09:13:16 compute-0 ceph-mon[75021]: osdmap e199: 3 total, 3 up, 3 in
Nov 22 09:13:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/987270026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421444660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.869 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.893 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:16 compute-0 nova_compute[253661]: 2025-11-22 09:13:16.897 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.066 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Successfully updated port: 2c059df4-a5a0-4c31-8485-01ccdea02b01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.082 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.083 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.083 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.219 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 206 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.3 MiB/s wr, 106 op/s
Nov 22 09:13:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862241399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.364 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.366 253665 DEBUG nova.virt.libvirt.vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1301305920',id=29,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-febfj4xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:12Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=14600eae-75dc-4ffc-a15a-bdb234f164d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.366 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.367 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.369 253665 DEBUG nova.objects.instance [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14600eae-75dc-4ffc-a15a-bdb234f164d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.381 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <uuid>14600eae-75dc-4ffc-a15a-bdb234f164d0</uuid>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <name>instance-0000001d</name>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1301305920</nova:name>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:16</nova:creationTime>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:user uuid="96cac95dc532449d964ffb3705dae943">tempest-ImagesOneServerNegativeTestJSON-251054159-project-member</nova:user>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:project uuid="dcedb2f9ed6e43dfa8ecc3854373b0b5">tempest-ImagesOneServerNegativeTestJSON-251054159</nova:project>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <nova:port uuid="2f6ebd6c-b451-455e-b4aa-19a0ccf66a44">
Nov 22 09:13:17 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="serial">14600eae-75dc-4ffc-a15a-bdb234f164d0</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="uuid">14600eae-75dc-4ffc-a15a-bdb234f164d0</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/14600eae-75dc-4ffc-a15a-bdb234f164d0_disk">
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config">
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:13:97:6f"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <target dev="tap2f6ebd6c-b4"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/console.log" append="off"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:17 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:17 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:17 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:17 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:17 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.382 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Preparing to wait for external event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.384 253665 DEBUG nova.virt.libvirt.vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1301305920',id=29,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-febfj4xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:12Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=14600eae-75dc-4ffc-a15a-bdb234f164d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.384 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG os_vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6ebd6c-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6ebd6c-b4, col_values=(('external_ids', {'iface-id': '2f6ebd6c-b451-455e-b4aa-19a0ccf66a44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:97:6f', 'vm-uuid': '14600eae-75dc-4ffc-a15a-bdb234f164d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:17 compute-0 NetworkManager[48920]: <info>  [1763802797.3931] manager: (tap2f6ebd6c-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.398 253665 INFO os_vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4')
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.447 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No VIF found with MAC fa:16:3e:13:97:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.449 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Using config drive
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.471 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.498 253665 DEBUG nova.compute.manager [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-changed-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG nova.compute.manager [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Refreshing instance network info cache due to event network-changed-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.500 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Refreshing network info cache for port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:17 compute-0 nova_compute[253661]: 2025-11-22 09:13:17.632 253665 DEBUG nova.compute.manager [req-2b04d1dc-d025-40dd-81c0-d00340a14b68 req-473de715-0cba-470d-9217-82bd80f303c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-deleted-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2421444660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3862241399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.117 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating config drive at /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.124 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oz3vvrk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.270 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oz3vvrk" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.296 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.302 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:18 compute-0 podman[292465]: 2025-11-22 09:13:18.375658991 +0000 UTC m=+0.063269026 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:13:18 compute-0 podman[292453]: 2025-11-22 09:13:18.391834478 +0000 UTC m=+0.079703740 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.414 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updating instance_info_cache with network_info: [{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.445 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.445 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance network_info: |[{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.448 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start _get_guest_xml network_info=[{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.454 253665 WARNING nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.458 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.459 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.464 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.468 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.470 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.495 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.496 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Deleting local config drive /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config because it was imported into RBD.
Nov 22 09:13:18 compute-0 kernel: tap2f6ebd6c-b4: entered promiscuous mode
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.5517] manager: (tap2f6ebd6c-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 22 09:13:18 compute-0 ovn_controller[152872]: 2025-11-22T09:13:18Z|00199|binding|INFO|Claiming lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for this chassis.
Nov 22 09:13:18 compute-0 ovn_controller[152872]: 2025-11-22T09:13:18Z|00200|binding|INFO|2f6ebd6c-b451-455e-b4aa-19a0ccf66a44: Claiming fa:16:3e:13:97:6f 10.100.0.11
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 ovn_controller[152872]: 2025-11-22T09:13:18Z|00201|binding|INFO|Setting lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 ovn-installed in OVS
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 systemd-machined[215941]: New machine qemu-34-instance-0000001d.
Nov 22 09:13:18 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 22 09:13:18 compute-0 systemd-udevd[292552]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.618 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:97:6f 10.100.0.11'], port_security=['fa:16:3e:13:97:6f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14600eae-75dc-4ffc-a15a-bdb234f164d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.619 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff bound to our chassis
Nov 22 09:13:18 compute-0 ovn_controller[152872]: 2025-11-22T09:13:18Z|00202|binding|INFO|Setting lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 up in Southbound
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.621 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.6302] device (tap2f6ebd6c-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.6311] device (tap2f6ebd6c-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[213ea8e8-12eb-4713-a196-7014fdbd14bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.635 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691e79ad-d1 in ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.636 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691e79ad-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.636 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[545cc5f4-b741-4a08-b7e5-0454c27d3d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.637 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a184a5b9-18eb-46c8-8a6a-67b6e48af03a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.649 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c82a2fee-05c8-44a7-94ec-c10bf597f2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ceph-mon[75021]: pgmap v1432: 305 pgs: 305 active+clean; 206 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.3 MiB/s wr, 106 op/s
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03befbdf-b58d-4a87-b845-da125bb92d24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.725 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe334a07-8aa6-4354-972a-c8d9a0b5dbc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.7361] manager: (tap691e79ad-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3af42d95-b4a4-4575-8cae-0f859ef4d9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.778 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[da52d9a4-dbca-4b5d-8794-21cb2cfe250d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.782 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78f75cda-2b97-425d-b097-66066cf46284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.8127] device (tap691e79ad-d0): carrier: link connected
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.819 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99b21347-6225-4793-b04d-c743e3ff69c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be183257-1eef-461e-9047-a2aa148ee3be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564955, 'reachable_time': 24605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292587, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a45bcca-8db3-461c-ac47-79f5ef772a53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f9e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564955, 'tstamp': 564955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292588, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.878 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d131c23-ac92-49d1-a710-427a07af4291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564955, 'reachable_time': 24605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292589, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.920 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a667a99-5909-405e-919a-b3fb828caa08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378602376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c32e0b8a-b2d1-4663-b8b7-f90b8fa90ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.988 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691e79ad-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 NetworkManager[48920]: <info>  [1763802798.9905] manager: (tap691e79ad-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 22 09:13:18 compute-0 kernel: tap691e79ad-d0: entered promiscuous mode
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.994 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691e79ad-d0, col_values=(('external_ids', {'iface-id': '6b990e4f-df30-4562-9550-e3e0ea811f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:18 compute-0 nova_compute[253661]: 2025-11-22 09:13:18.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:18 compute-0 ovn_controller[152872]: 2025-11-22T09:13:18Z|00203|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:13:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.998 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e650f153-bf89-4aaa-bce8-1b7c6d594a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.001 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:13:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.002 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'env', 'PROCESS_TAG=haproxy-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691e79ad-da5d-4276-aa7d-732c2aaedbff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.035 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.064 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.070 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 5.0 MiB/s wr, 196 op/s
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:19 compute-0 podman[292660]: 2025-11-22 09:13:19.414545765 +0000 UTC m=+0.060382896 container create c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:13:19 compute-0 systemd[1]: Started libpod-conmon-c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438.scope.
Nov 22 09:13:19 compute-0 podman[292660]: 2025-11-22 09:13:19.378783126 +0000 UTC m=+0.024620277 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:13:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a101ddd0cbf37b355cc7ffa96e374d86e06f82052cc49d6d94999ac85daf5cc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:19 compute-0 podman[292660]: 2025-11-22 09:13:19.5376685 +0000 UTC m=+0.183505661 container init c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:13:19 compute-0 podman[292660]: 2025-11-22 09:13:19.546009876 +0000 UTC m=+0.191847007 container start c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:13:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709526242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:19 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : New worker (292724) forked
Nov 22 09:13:19 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : Loading success.
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.580 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.582 253665 DEBUG nova.virt.libvirt.vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2140089311',display_name='tempest-ImagesTestJSON-server-2140089311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2140089311',id=30,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-63rtx74t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:14Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=fd1f1ba2-6963-47bb-8d59-86e2ed015ad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.582 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.583 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.584 253665 DEBUG nova.objects.instance [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.598 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <uuid>fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</uuid>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <name>instance-0000001e</name>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-2140089311</nova:name>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:18</nova:creationTime>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <nova:port uuid="2c059df4-a5a0-4c31-8485-01ccdea02b01">
Nov 22 09:13:19 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="serial">fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="uuid">fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk">
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config">
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:43:82:bb"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <target dev="tap2c059df4-a5"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/console.log" append="off"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:19 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:19 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:19 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:19 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:19 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.599 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Preparing to wait for external event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.601 253665 DEBUG nova.virt.libvirt.vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2140089311',display_name='tempest-ImagesTestJSON-server-2140089311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2140089311',id=30,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-63rtx74t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:14Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=fd1f1ba2-6963-47bb-8d59-86e2ed015ad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.601 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.602 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.602 253665 DEBUG os_vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.606 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c059df4-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.607 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c059df4-a5, col_values=(('external_ids', {'iface-id': '2c059df4-a5a0-4c31-8485-01ccdea02b01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:82:bb', 'vm-uuid': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.609 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:19 compute-0 NetworkManager[48920]: <info>  [1763802799.6113] manager: (tap2c059df4-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.621 253665 INFO os_vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5')
Nov 22 09:13:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3378602376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2709526242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.681 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.682 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.683 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:43:82:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.684 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Using config drive
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.707 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802799.6844804, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.715 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Started (Lifecycle Event)
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.735 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.740 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802799.6848211, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.740 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Paused (Lifecycle Event)
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.758 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.769 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.789 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:19 compute-0 nova_compute[253661]: 2025-11-22 09:13:19.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.066 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updated VIF entry in instance network info cache for port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.066 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updating instance_info_cache with network_info: [{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.082 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.110 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating config drive at /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.116 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd8diaau execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.156 253665 DEBUG nova.compute.manager [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-changed-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.157 253665 DEBUG nova.compute.manager [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Refreshing instance network info cache due to event network-changed-2c059df4-a5a0-4c31-8485-01ccdea02b01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.158 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.158 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.159 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Refreshing network info cache for port 2c059df4-a5a0-4c31-8485-01ccdea02b01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.208 253665 DEBUG nova.compute.manager [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.210 253665 DEBUG nova.compute.manager [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Processing event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.211 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.215 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802800.2150548, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.216 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Resumed (Lifecycle Event)
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.219 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.229 253665 INFO nova.virt.libvirt.driver [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance spawned successfully.
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.230 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.245 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.250 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.259 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd8diaau" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.290 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.295 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.332 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.339 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.340 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.341 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.341 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.342 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.342 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.347 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.348 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.349 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.389 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.389 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.390 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.439 253665 INFO nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 7.41 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.440 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.478 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.479 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Deleting local config drive /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config because it was imported into RBD.
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.498 253665 INFO nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 9.41 seconds to build instance.
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.515 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:20 compute-0 kernel: tap2c059df4-a5: entered promiscuous mode
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:20 compute-0 ovn_controller[152872]: 2025-11-22T09:13:20Z|00204|binding|INFO|Claiming lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 for this chassis.
Nov 22 09:13:20 compute-0 ovn_controller[152872]: 2025-11-22T09:13:20Z|00205|binding|INFO|2c059df4-a5a0-4c31-8485-01ccdea02b01: Claiming fa:16:3e:43:82:bb 10.100.0.14
Nov 22 09:13:20 compute-0 NetworkManager[48920]: <info>  [1763802800.5423] manager: (tap2c059df4-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Nov 22 09:13:20 compute-0 systemd-udevd[292583]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.548 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:82:bb 10.100.0.14'], port_security=['fa:16:3e:43:82:bb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2c059df4-a5a0-4c31-8485-01ccdea02b01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.550 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2c059df4-a5a0-4c31-8485-01ccdea02b01 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.552 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:20 compute-0 ovn_controller[152872]: 2025-11-22T09:13:20Z|00206|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 ovn-installed in OVS
Nov 22 09:13:20 compute-0 ovn_controller[152872]: 2025-11-22T09:13:20Z|00207|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 up in Southbound
Nov 22 09:13:20 compute-0 NetworkManager[48920]: <info>  [1763802800.5604] device (tap2c059df4-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:20 compute-0 NetworkManager[48920]: <info>  [1763802800.5613] device (tap2c059df4-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.576 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d549b8-a028-4d0a-9201-984fe137e92d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.580 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.584 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.585 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bca0f9-93a4-4a06-9fdf-70a57e650cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.587 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5fb0a5-c681-4de7-bb2a-26c7b56dee01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.601 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8424f25c-17fa-4394-a2cb-14b3a8c1fe90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 systemd-machined[215941]: New machine qemu-35-instance-0000001e.
Nov 22 09:13:20 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 22 09:13:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.629 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf32e0d-51be-44d9-96c7-df99eec6a23d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ceph-mon[75021]: pgmap v1433: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 5.0 MiB/s wr, 196 op/s
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.679 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac724161-4583-4899-b935-7d03635952af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 NetworkManager[48920]: <info>  [1763802800.6900] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.689 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87e6c37c-2e3d-4626-ba18-647864fe5407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.745 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d31055-ea22-4f4e-80b0-6adad5c1cd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.749 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c7cb32-7d4b-4250-a963-2eb9934b1b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 NetworkManager[48920]: <info>  [1763802800.7773] device (tap2abeeeb2-20): carrier: link connected
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.782 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8439b8dd-1868-434b-b4ad-1390867c385f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b630501f-a917-472b-955d-4a6aec5f70d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565151, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292843, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.825 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[643a6f30-e795-4a2f-9750-4afbdcf2a19a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565151, 'tstamp': 565151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292844, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.856 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[911ab794-5242-4727-8a53-2c1997e5585b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565151, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292845, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940237445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.896 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74c7380c-ecf3-45a2-9e45-8a2fa1710ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 nova_compute[253661]: 2025-11-22 09:13:20.921 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69e90d3f-1c5d-43b1-9d13-3bc8eaff96e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.996 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.000 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.000 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:21 compute-0 NetworkManager[48920]: <info>  [1763802801.0034] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 22 09:13:21 compute-0 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.008 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.010 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.012 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 ovn_controller[152872]: 2025-11-22T09:13:21Z|00208|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.012 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.014 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4850eb3a-5c5f-463b-a3af-97a2bab9be69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.017 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:13:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.018 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.019 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.020 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.165 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802801.164695, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.165 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Started (Lifecycle Event)
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.188 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.191 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802801.165603, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Paused (Lifecycle Event)
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.205 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.208 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.225 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 4.3 MiB/s wr, 168 op/s
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.324 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=59.90109634399414GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 14600eae-75dc-4ffc-a15a-bdb234f164d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.409 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:13:21 compute-0 podman[292921]: 2025-11-22 09:13:21.484712485 +0000 UTC m=+0.087699496 container create 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.492 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:21 compute-0 podman[292921]: 2025-11-22 09:13:21.421336088 +0000 UTC m=+0.024323099 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:13:21 compute-0 systemd[1]: Started libpod-conmon-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope.
Nov 22 09:13:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05db8838f8babda2674b21fdd9cba56d8e24a25007d525322ca9fed2068a5d9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:21 compute-0 podman[292921]: 2025-11-22 09:13:21.622217195 +0000 UTC m=+0.225204256 container init 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:13:21 compute-0 podman[292921]: 2025-11-22 09:13:21.628549071 +0000 UTC m=+0.231536082 container start 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:13:21 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : New worker (292960) forked
Nov 22 09:13:21 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : Loading success.
Nov 22 09:13:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3940237445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3587739031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.960 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.965 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:21 compute-0 nova_compute[253661]: 2025-11-22 09:13:21.982 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.005 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.006 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.044 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.064 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.066 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.066 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port 07d520ca-fd4a-49e6-b52e-ee9e8208b902 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.070 253665 DEBUG nova.virt.libvirt.vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.070 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.071 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.071 253665 DEBUG os_vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07d520ca-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07d520ca-fd, col_values=(('external_ids', {'iface-id': '07d520ca-fd4a-49e6-b52e-ee9e8208b902', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:2d:f5', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 NetworkManager[48920]: <info>  [1763802802.0822] manager: (tap07d520ca-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.093 253665 INFO os_vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd')
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.094 253665 DEBUG nova.virt.libvirt.vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.095 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.095 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.100 253665 DEBUG nova.virt.libvirt.guest [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:cd:2d:f5"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <target dev="tap07d520ca-fd"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]: </interface>
Nov 22 09:13:22 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:13:22 compute-0 kernel: tap07d520ca-fd: entered promiscuous mode
Nov 22 09:13:22 compute-0 NetworkManager[48920]: <info>  [1763802802.1144] manager: (tap07d520ca-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.119 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 ovn_controller[152872]: 2025-11-22T09:13:22Z|00209|binding|INFO|Claiming lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 for this chassis.
Nov 22 09:13:22 compute-0 ovn_controller[152872]: 2025-11-22T09:13:22Z|00210|binding|INFO|07d520ca-fd4a-49e6-b52e-ee9e8208b902: Claiming fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.130 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2d:f5 10.100.0.8'], port_security=['fa:16:3e:cd:2d:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=07d520ca-fd4a-49e6-b52e-ee9e8208b902) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.131 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 07d520ca-fd4a-49e6-b52e-ee9e8208b902 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.133 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:22 compute-0 ovn_controller[152872]: 2025-11-22T09:13:22Z|00211|binding|INFO|Setting lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 ovn-installed in OVS
Nov 22 09:13:22 compute-0 ovn_controller[152872]: 2025-11-22T09:13:22Z|00212|binding|INFO|Setting lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 up in Southbound
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44b03c80-e42c-4699-8be7-449ed9b68c44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 systemd-udevd[292989]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.171 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updated VIF entry in instance network info cache for port 2c059df4-a5a0-4c31-8485-01ccdea02b01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.172 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updating instance_info_cache with network_info: [{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:22 compute-0 NetworkManager[48920]: <info>  [1763802802.1897] device (tap07d520ca-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:22 compute-0 NetworkManager[48920]: <info>  [1763802802.1906] device (tap07d520ca-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.196 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.202 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97aeb580-e00c-4bce-a5aa-ac1d8c147f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.206 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe76e5de-06cc-4bd1-aba8-5c755178d5af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.215 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a6:fb:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:cd:2d:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.240 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e18a8b1-302e-4a0a-b4e2-22568d815f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.246 253665 DEBUG nova.virt.libvirt.guest [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:22 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 09:13:22 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:22 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:22 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:22 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:22 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:22 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:22 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:13:22 compute-0 podman[292977]: 2025-11-22 09:13:22.261973269 +0000 UTC m=+0.114543537 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06701e47-eb0a-4643-8383-0690095ab53f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293009, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.272 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.285 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e63e2ce0-5bf1-43dd-a42c-166679ae12c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293011, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293011, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.287 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.290 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:22 compute-0 ceph-mon[75021]: pgmap v1434: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 4.3 MiB/s wr, 168 op/s
Nov 22 09:13:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3587739031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.886 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:22 compute-0 nova_compute[253661]: 2025-11-22 09:13:22.886 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.042 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] No waiting events found dispatching network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 WARNING nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received unexpected event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for instance with vm_state active and task_state None.
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Processing event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.045 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.051 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802803.0509188, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.051 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Resumed (Lifecycle Event)
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.053 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.058 253665 INFO nova.virt.libvirt.driver [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance spawned successfully.
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.058 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.073 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.085 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.085 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.119 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.155 253665 INFO nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 8.38 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.156 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.224 253665 INFO nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 9.48 seconds to build instance.
Nov 22 09:13:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 4.3 MiB/s wr, 169 op/s
Nov 22 09:13:23 compute-0 nova_compute[253661]: 2025-11-22 09:13:23.239 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:13:24 compute-0 ovn_controller[152872]: 2025-11-22T09:13:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 09:13:24 compute-0 ovn_controller[152872]: 2025-11-22T09:13:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.855 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port 07d520ca-fd4a-49e6-b52e-ee9e8208b902. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.857 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.878 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.879 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.879 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.880 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.881 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.881 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:24 compute-0 nova_compute[253661]: 2025-11-22 09:13:24.882 253665 WARNING nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state deleting.
Nov 22 09:13:24 compute-0 ceph-mon[75021]: pgmap v1435: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 4.3 MiB/s wr, 169 op/s
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.142 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-f1f391af-c757-4aab-b0ce-ddad3dab55e7" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.143 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-f1f391af-c757-4aab-b0ce-ddad3dab55e7" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.159 253665 DEBUG nova.objects.instance [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.181 253665 DEBUG nova.virt.libvirt.vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.183 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.184 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.190 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.193 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.197 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.197 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <target dev="tapf1f391af-c7"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:13:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 214 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.282 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.289 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tapb82d7759-7f'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:97:0f:1c'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tapf1f391af-c7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tap995224e6-d1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tap07d520ca-fd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </target>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </console>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:25 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.300 253665 INFO nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.300 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tapf1f391af-c7 with device alias net1 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.301 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <target dev="tapf1f391af-c7"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.318 253665 DEBUG nova.objects.instance [None req-4a4278ff-8271-4e79-a6af-308ddef6f082 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00213|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00214|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00215|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.363 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802805.3435202, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.365 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Paused (Lifecycle Event)
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.371 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] No waiting events found dispatching network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received unexpected event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 for instance with vm_state active and task_state suspending.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 for instance with vm_state active and task_state None.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 for instance with vm_state active and task_state None.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.388 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.394 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.419 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:13:25 compute-0 kernel: tapf1f391af-c7 (unregistering): left promiscuous mode
Nov 22 09:13:25 compute-0 NetworkManager[48920]: <info>  [1763802805.4357] device (tapf1f391af-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00216|binding|INFO|Releasing lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 from this chassis (sb_readonly=0)
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00217|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 down in Southbound
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00218|binding|INFO|Removing iface tapf1f391af-c7 ovn-installed in OVS
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.453 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:0f:1c 10.100.0.13'], port_security=['fa:16:3e:97:0f:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f1f391af-c757-4aab-b0ce-ddad3dab55e7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.455 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f1f391af-c757-4aab-b0ce-ddad3dab55e7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.454 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802805.4527683, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.457 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.460 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tapf1f391af-c7 with device alias net1 for instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.461 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.466 253665 DEBUG nova.compute.manager [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.474 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tapb82d7759-7f'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tap995224e6-d1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target dev='tap07d520ca-fd'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='net3'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       </target>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </console>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:25 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.474 253665 INFO nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the live domain config.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.475 253665 DEBUG nova.virt.libvirt.vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.476 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.478 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab32280-efed-4f8d-90a2-2fd15578a4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.480 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.480 253665 DEBUG os_vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1f391af-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.494 253665 INFO os_vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7')
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.495 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:25</nova:creationTime>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:25 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:25 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:25 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:25 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:25 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b027005c-ecbe-4481-afc9-eca3b317e7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.528 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[740d982c-7a2f-40f1-b8c5-e32c73b71cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.537 253665 INFO nova.compute.manager [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] instance snapshotting
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.560 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[56591169-967e-4d37-a443-a22f35410a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca2bd83-8ee8-4c73-b8f4-815b8a9dacdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293026, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.607 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11f4cafb-9c5e-43b9-9ef4-b364a9999713]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.766 253665 INFO nova.virt.libvirt.driver [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Beginning live snapshot process
Nov 22 09:13:25 compute-0 kernel: tap2c059df4-a5 (unregistering): left promiscuous mode
Nov 22 09:13:25 compute-0 NetworkManager[48920]: <info>  [1763802805.8914] device (tap2c059df4-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00219|binding|INFO|Releasing lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 from this chassis (sb_readonly=0)
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00220|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 down in Southbound
Nov 22 09:13:25 compute-0 ovn_controller[152872]: 2025-11-22T09:13:25Z|00221|binding|INFO|Removing iface tap2c059df4-a5 ovn-installed in OVS
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.907 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:82:bb 10.100.0.14'], port_security=['fa:16:3e:43:82:bb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2c059df4-a5a0-4c31-8485-01ccdea02b01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.908 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2c059df4-a5a0-4c31-8485-01ccdea02b01 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.909 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.910 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25e85e-806b-4f13-814e-01da99b43392]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.913 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.944 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:25 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 22 09:13:25 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 2.862s CPU time.
Nov 22 09:13:25 compute-0 systemd-machined[215941]: Machine qemu-35-instance-0000001e terminated.
Nov 22 09:13:25 compute-0 nova_compute[253661]: 2025-11-22 09:13:25.959 253665 DEBUG nova.virt.libvirt.imagebackend [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : haproxy version is 2.8.14-c23fe91
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : path to executable is /usr/sbin/haproxy
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : Exiting Master process...
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : Exiting Master process...
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [ALERT]    (292942) : Current worker (292960) exited with code 143 (Terminated)
Nov 22 09:13:26 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : All workers exited. Exiting... (0)
Nov 22 09:13:26 compute-0 systemd[1]: libpod-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope: Deactivated successfully.
Nov 22 09:13:26 compute-0 podman[293083]: 2025-11-22 09:13:26.07960876 +0000 UTC m=+0.052218815 container died 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.088 253665 DEBUG nova.compute.manager [None req-4a4278ff-8271-4e79-a6af-308ddef6f082 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad-userdata-shm.mount: Deactivated successfully.
Nov 22 09:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-05db8838f8babda2674b21fdd9cba56d8e24a25007d525322ca9fed2068a5d9c-merged.mount: Deactivated successfully.
Nov 22 09:13:26 compute-0 podman[293083]: 2025-11-22 09:13:26.147342104 +0000 UTC m=+0.119952129 container cleanup 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:13:26 compute-0 systemd[1]: libpod-conmon-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope: Deactivated successfully.
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.159 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.160 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.160 253665 DEBUG nova.network.neutron [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.217 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(a63ea19ccc764cfb863961ba6d076325) on rbd image(14600eae-75dc-4ffc-a15a-bdb234f164d0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:26 compute-0 podman[293124]: 2025-11-22 09:13:26.231123053 +0000 UTC m=+0.054092601 container remove 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.237 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1271770b-b3d5-42e8-a0da-926f9a326756]: (4, ('Sat Nov 22 09:13:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad)\n75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad\nSat Nov 22 09:13:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad)\n75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12c48433-d620-4e74-b301-a4abc8c901ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.240 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:26 compute-0 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.258 253665 DEBUG nova.compute.manager [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-deleted-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.259 253665 INFO nova.compute.manager [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Neutron deleted interface f1f391af-c757-4aab-b0ce-ddad3dab55e7; detaching it from the instance and deleting it from the info cache
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.259 253665 DEBUG nova.network.neutron [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3298ccb-7320-4445-ad40-52b2b342106f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.275 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.286 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1bec1e8-730e-47a6-9eae-f7f5725045e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49044ccf-0528-4191-b329-80396473393f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.291 253665 DEBUG nova.objects.instance [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.310 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b9bbce-0b0d-4777-9836-2c55316e0f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565140, 'reachable_time': 35029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293160, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.314 253665 DEBUG nova.objects.instance [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.313 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:13:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[803d3c8e-b30f-45e8-8aee-80f01b956cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.338 253665 DEBUG nova.virt.libvirt.vif [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.338 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.339 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.345 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.349 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:25</nova:creationTime>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tapb82d7759-7f'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tap995224e6-d1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tap07d520ca-fd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </target>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </console>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:26 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.349 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.365 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:25</nova:creationTime>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tapb82d7759-7f'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tap995224e6-d1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target dev='tap07d520ca-fd'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='net3'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       </target>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </console>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:13:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:26 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.366 253665 WARNING nova.virt.libvirt.driver [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Detaching interface fa:16:3e:97:0f:1c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapf1f391af-c7' not found.
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.366 253665 DEBUG nova.virt.libvirt.vif [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.367 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.367 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.368 253665 DEBUG os_vif [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.370 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1f391af-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.370 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.372 253665 INFO os_vif [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7')
Nov 22 09:13:26 compute-0 nova_compute[253661]: 2025-11-22 09:13:26.372 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:26</nova:creationTime>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 09:13:26 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:13:26 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:26 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:26 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:26 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:13:26 compute-0 rsyslogd[1005]: imjournal from <np0005532048:nova_compute>: begin to drop messages due to rate-limiting
Nov 22 09:13:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 22 09:13:27 compute-0 ceph-mon[75021]: pgmap v1436: 305 pgs: 305 active+clean; 214 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 22 09:13:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 22 09:13:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 22 09:13:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 214 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 213 op/s
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.354 253665 INFO nova.network.neutron [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Port f1f391af-c757-4aab-b0ce-ddad3dab55e7 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.611 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-unplugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.612 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.612 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.612 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.613 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-unplugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.613 253665 WARNING nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-unplugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.613 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.613 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.614 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.614 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.615 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.615 253665 WARNING nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.615 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-unplugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.615 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.616 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.616 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.616 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] No waiting events found dispatching network-vif-unplugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.616 253665 WARNING nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received unexpected event network-vif-unplugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 for instance with vm_state suspended and task_state None.
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.617 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.617 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.617 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.617 253665 DEBUG oslo_concurrency.lockutils [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.618 253665 DEBUG nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] No waiting events found dispatching network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.618 253665 WARNING nova.compute.manager [req-b2886fca-dc78-4cbc-9d7a-c953229cb8b7 req-b97d3ed4-8c4d-4908-bef3-81438e3fc4ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received unexpected event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 for instance with vm_state suspended and task_state None.
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.634 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] cloning vms/14600eae-75dc-4ffc-a15a-bdb234f164d0_disk@a63ea19ccc764cfb863961ba6d076325 to images/f0529002-2053-42db-9df6-b6de9ee91ca3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.724 162862 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c3554706-afe4-4537-a514-e3de0a6c3bd7 with type ""
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.725 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2d:f5 10.100.0.8'], port_security=['fa:16:3e:cd:2d:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=07d520ca-fd4a-49e6-b52e-ee9e8208b902) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.726 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 07d520ca-fd4a-49e6-b52e-ee9e8208b902 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.728 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:27 compute-0 ovn_controller[152872]: 2025-11-22T09:13:27Z|00222|binding|INFO|Removing iface tap07d520ca-fd ovn-installed in OVS
Nov 22 09:13:27 compute-0 ovn_controller[152872]: 2025-11-22T09:13:27Z|00223|binding|INFO|Removing lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 ovn-installed in OVS
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.743 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.761 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.762 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfc97cd-6fa3-4e10-bea6-21d5704bd037]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.797 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4db70c-f3b5-467d-b0cc-673ba74417ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.800 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1285a9fc-c9f4-4d9a-812d-53d0ca7fbcb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.830 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3639f3-0e54-48bb-9a07-6fb4642cdddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.852 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9596bff-86f4-4abe-88ed-cc8a0068f897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293202, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54254c73-898d-4e42-9288-10bec52864b9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293203, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293203, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.877 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.888 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.888 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.889 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.889 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:27 compute-0 nova_compute[253661]: 2025-11-22 09:13:27.926 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] flattening images/f0529002-2053-42db-9df6-b6de9ee91ca3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:27.958 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.020 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.020 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.021 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.021 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.021 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.023 253665 INFO nova.compute.manager [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Terminating instance
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.024 253665 DEBUG nova.compute.manager [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:28 compute-0 ceph-mon[75021]: osdmap e200: 3 total, 3 up, 3 in
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.441 253665 DEBUG nova.compute.manager [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:28 compute-0 kernel: tapb82d7759-7f (unregistering): left promiscuous mode
Nov 22 09:13:28 compute-0 NetworkManager[48920]: <info>  [1763802808.4617] device (tapb82d7759-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00224|binding|INFO|Releasing lport b82d7759-7fa9-4919-9812-a4f5df6893a7 from this chassis (sb_readonly=0)
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00225|binding|INFO|Setting lport b82d7759-7fa9-4919-9812-a4f5df6893a7 down in Southbound
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00226|binding|INFO|Removing iface tapb82d7759-7f ovn-installed in OVS
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 kernel: tap995224e6-d1 (unregistering): left promiscuous mode
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.484 253665 INFO nova.compute.manager [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] instance snapshotting
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.484 253665 WARNING nova.compute.manager [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.487 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:3a:a5 10.100.0.12'], port_security=['fa:16:3e:78:3a:a5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0e8c403-f9ed-4054-8f14-f56c4d8c06c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b82d7759-7fa9-4919-9812-a4f5df6893a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:28 compute-0 NetworkManager[48920]: <info>  [1763802808.4888] device (tap995224e6-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.489 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b82d7759-7fa9-4919-9812-a4f5df6893a7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.491 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00227|binding|INFO|Releasing lport 995224e6-d1ff-4d74-bca5-3996eb4d404d from this chassis (sb_readonly=0)
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00228|binding|INFO|Setting lport 995224e6-d1ff-4d74-bca5-3996eb4d404d down in Southbound
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 ovn_controller[152872]: 2025-11-22T09:13:28Z|00229|binding|INFO|Removing iface tap995224e6-d1 ovn-installed in OVS
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 kernel: tap07d520ca-fd (unregistering): left promiscuous mode
Nov 22 09:13:28 compute-0 NetworkManager[48920]: <info>  [1763802808.5208] device (tap07d520ca-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.521 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:fb:57 10.100.0.11'], port_security=['fa:16:3e:a6:fb:57 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=995224e6-d1ff-4d74-bca5-3996eb4d404d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.515 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[974eb32d-e9d6-46f5-9a40-2dcc959b1598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.565 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b4663938-635d-48d1-9cde-548426a6f97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.569 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7d4aa6-73e9-4f96-9a21-520f22ca2483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 22 09:13:28 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 18.336s CPU time.
Nov 22 09:13:28 compute-0 systemd-machined[215941]: Machine qemu-30-instance-0000001a terminated.
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.611 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[856caa11-d1f6-4d14-85db-8025d6737a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.630 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abe2b6da-1e3b-4135-a14a-a9ebf9c17732]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293248, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.654 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[45d573ed-715d-438b-a9c5-4b9aaf581629]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293250, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293250, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.659 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 NetworkManager[48920]: <info>  [1763802808.6645] manager: (tap995224e6-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Nov 22 09:13:28 compute-0 NetworkManager[48920]: <info>  [1763802808.6770] manager: (tap07d520ca-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.684 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.684 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.685 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.685 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.686 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 995224e6-d1ff-4d74-bca5-3996eb4d404d in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.688 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.689 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5efc4f8c-81fb-4987-9328-7a2473302a41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:28.690 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace which is not needed anymore
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.791 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.802 253665 INFO nova.virt.libvirt.driver [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance destroyed successfully.
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.803 253665 DEBUG nova.objects.instance [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.820 253665 DEBUG nova.virt.libvirt.vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.821 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.821 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.821 253665 DEBUG os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb82d7759-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.832 253665 DEBUG nova.compute.manager [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-unplugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.832 253665 DEBUG oslo_concurrency.lockutils [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.833 253665 DEBUG oslo_concurrency.lockutils [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.833 253665 DEBUG oslo_concurrency.lockutils [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.833 253665 DEBUG nova.compute.manager [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-unplugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.833 253665 DEBUG nova.compute.manager [req-e9cd2626-1678-4273-bb2c-0673ad08b3f6 req-3425b1e9-9d07-4928-b5cc-4ce78ea0030a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-unplugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.843 253665 INFO os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f')
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.844 253665 DEBUG nova.virt.libvirt.vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.844 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.845 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.845 253665 DEBUG os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.846 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap995224e6-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.859 253665 INFO os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1')
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.860 253665 DEBUG nova.virt.libvirt.vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.860 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.861 253665 DEBUG nova.network.os_vif_util [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.861 253665 DEBUG os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.863 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07d520ca-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.876 253665 INFO os_vif [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd')
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : haproxy version is 2.8.14-c23fe91
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : path to executable is /usr/sbin/haproxy
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [WARNING]  (289187) : Exiting Master process...
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [WARNING]  (289187) : Exiting Master process...
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [ALERT]    (289187) : Current worker (289189) exited with code 143 (Terminated)
Nov 22 09:13:28 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [WARNING]  (289187) : All workers exited. Exiting... (0)
Nov 22 09:13:28 compute-0 systemd[1]: libpod-e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d.scope: Deactivated successfully.
Nov 22 09:13:28 compute-0 podman[293316]: 2025-11-22 09:13:28.891415178 +0000 UTC m=+0.083241756 container died e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.906 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(a63ea19ccc764cfb863961ba6d076325) on rbd image(14600eae-75dc-4ffc-a15a-bdb234f164d0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:13:28 compute-0 nova_compute[253661]: 2025-11-22 09:13:28.914 253665 INFO nova.virt.libvirt.driver [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Beginning cold snapshot process
Nov 22 09:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d-userdata-shm.mount: Deactivated successfully.
Nov 22 09:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cb179eb9809a39cf76cdbb9739dddbf19fc65549fbc0ae24ede098ba40d347-merged.mount: Deactivated successfully.
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.056 253665 DEBUG nova.virt.libvirt.imagebackend [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.060 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802794.058327, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.061 253665 INFO nova.compute.manager [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Stopped (Lifecycle Event)
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.081 253665 DEBUG nova.compute.manager [None req-be21b49a-b895-496e-a6fc-a03b26e3a0cd - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:29 compute-0 podman[293316]: 2025-11-22 09:13:29.092599223 +0000 UTC m=+0.284425781 container cleanup e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:13:29 compute-0 systemd[1]: libpod-conmon-e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d.scope: Deactivated successfully.
Nov 22 09:13:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 239 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.260 253665 DEBUG nova.storage.rbd_utils [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(c876d1f3af7349849fe1b0843a119603) on rbd image(fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:29 compute-0 ceph-mon[75021]: pgmap v1438: 305 pgs: 305 active+clean; 214 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 213 op/s
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.469 253665 DEBUG nova.network.neutron [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.493 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.518 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-f1f391af-c757-4aab-b0ce-ddad3dab55e7" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 22 09:13:29 compute-0 ovn_controller[152872]: 2025-11-22T09:13:29Z|00230|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:13:29 compute-0 ovn_controller[152872]: 2025-11-22T09:13:29Z|00231|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 podman[293401]: 2025-11-22 09:13:29.625985623 +0000 UTC m=+0.511080292 container remove e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c535ac9-2a71-4fb7-a8a6-923ef86e22a6]: (4, ('Sat Nov 22 09:13:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d)\ne95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d\nSat Nov 22 09:13:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d)\ne95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2b6b89-1024-44c3-b549-c25e87f0b663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.644 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.689 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-deleted-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.690 253665 INFO nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Neutron deleted interface 07d520ca-fd4a-49e6-b52e-ee9e8208b902; detaching it from the instance and deleting it from the info cache
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.690 253665 DEBUG nova.network.neutron [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.718 253665 DEBUG nova.objects.instance [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.740 253665 DEBUG nova.objects.instance [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.760 253665 DEBUG nova.virt.libvirt.vif [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.761 253665 DEBUG nova.network.os_vif_util [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.762 253665 DEBUG nova.network.os_vif_util [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.765 253665 DEBUG nova.virt.libvirt.guest [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cd:2d:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07d520ca-fd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.768 253665 DEBUG nova.virt.libvirt.driver [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Attempting to detach device tap07d520ca-fd from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.768 253665 DEBUG nova.virt.libvirt.guest [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:cd:2d:f5"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <target dev="tap07d520ca-fd"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]: </interface>
Nov 22 09:13:29 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.822 253665 DEBUG nova.virt.libvirt.guest [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cd:2d:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07d520ca-fd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 kernel: tap5e2cd359-c0: left promiscuous mode
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.829 253665 DEBUG nova.virt.libvirt.guest [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:cd:2d:f5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07d520ca-fd"/></interface>not found in domain: <domain type='kvm'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <name>instance-0000001a</name>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:12:11</nova:creationTime>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:29 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <cpu mode='host-model' check='partial'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target dev='tapb82d7759-7f'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target dev='tap995224e6-d1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       </target>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <console type='pty'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </console>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </input>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:29 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:29 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.838 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b342a599-184d-4b59-8ad4-4b96e0ba6b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.830 253665 INFO nova.virt.libvirt.driver [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully detached device tap07d520ca-fd from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config.
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.835 253665 DEBUG nova.virt.libvirt.vif [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.836 253665 DEBUG nova.network.os_vif_util [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.837 253665 DEBUG nova.network.os_vif_util [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.837 253665 DEBUG os_vif [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07d520ca-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.847 253665 INFO os_vif [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd')
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.848 253665 DEBUG nova.virt.libvirt.guest [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:13:29</nova:creationTime>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 09:13:29 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:13:29 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:13:29 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:13:29 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:13:29 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78a197f5-e497-4ab7-a9aa-136597e946ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.852 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59804a35-e136-4b03-913b-7a87dd633cf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 ovn_controller[152872]: 2025-11-22T09:13:29Z|00232|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0eece024-f516-423a-bd6c-e4a2e0602207]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558603, 'reachable_time': 32619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293438, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e2cd359\x2dc68f\x2d4256\x2d90e8\x2d0ad40aff8a00.mount: Deactivated successfully.
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.884 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:13:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:29.884 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d79c6fd1-9e79-4ae5-a029-ab01693580da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.903 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-unplugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.903 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.904 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.904 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.904 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-unplugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.904 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-unplugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.904 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.905 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.905 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.905 253665 DEBUG oslo_concurrency.lockutils [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.905 253665 DEBUG nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:29 compute-0 nova_compute[253661]: 2025-11-22 09:13:29.905 253665 WARNING nova.compute.manager [req-bf3794f1-6838-41ae-9b75-55da92c3474e req-8b0382ad-9bb4-41ee-a7a7-434a19481d22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with vm_state active and task_state deleting.
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.240 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(snap) on rbd image(f0529002-2053-42db-9df6-b6de9ee91ca3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 22 09:13:30 compute-0 ceph-mon[75021]: pgmap v1439: 305 pgs: 305 active+clean; 239 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Nov 22 09:13:30 compute-0 ceph-mon[75021]: osdmap e201: 3 total, 3 up, 3 in
Nov 22 09:13:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 22 09:13:30 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.934 253665 DEBUG nova.compute.manager [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.935 253665 DEBUG oslo_concurrency.lockutils [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.935 253665 DEBUG oslo_concurrency.lockutils [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.935 253665 DEBUG oslo_concurrency.lockutils [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.936 253665 DEBUG nova.compute.manager [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:30 compute-0 nova_compute[253661]: 2025-11-22 09:13:30.936 253665 WARNING nova.compute.manager [req-b618f2c8-359e-495e-98cf-c2b281164499 req-95fa90b6-91c7-40b5-8729-cb55ae2ce577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 for instance with vm_state active and task_state deleting.
Nov 22 09:13:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 239 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.3 MiB/s wr, 168 op/s
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.518 253665 DEBUG nova.storage.rbd_utils [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk@c876d1f3af7349849fe1b0843a119603 to images/966bc045-2577-4f43-8aa6-5e9a6606256f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image f0529002-2053-42db-9df6-b6de9ee91ca3 could not be found.
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID f0529002-2053-42db-9df6-b6de9ee91ca3
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image f0529002-2053-42db-9df6-b6de9ee91ca3 could not be found.
Nov 22 09:13:31 compute-0 nova_compute[253661]: 2025-11-22 09:13:31.839 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:13:31 compute-0 ceph-mon[75021]: osdmap e202: 3 total, 3 up, 3 in
Nov 22 09:13:32 compute-0 nova_compute[253661]: 2025-11-22 09:13:32.506 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(snap) on rbd image(f0529002-2053-42db-9df6-b6de9ee91ca3) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:13:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 216 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 176 op/s
Nov 22 09:13:33 compute-0 ceph-mon[75021]: pgmap v1442: 305 pgs: 305 active+clean; 239 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.3 MiB/s wr, 168 op/s
Nov 22 09:13:33 compute-0 nova_compute[253661]: 2025-11-22 09:13:33.278 253665 DEBUG nova.storage.rbd_utils [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/966bc045-2577-4f43-8aa6-5e9a6606256f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:13:33 compute-0 nova_compute[253661]: 2025-11-22 09:13:33.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 22 09:13:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 22 09:13:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 22 09:13:34 compute-0 ceph-mon[75021]: pgmap v1443: 305 pgs: 305 active+clean; 216 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 176 op/s
Nov 22 09:13:34 compute-0 ceph-mon[75021]: osdmap e203: 3 total, 3 up, 3 in
Nov 22 09:13:34 compute-0 nova_compute[253661]: 2025-11-22 09:13:34.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 530 KiB/s rd, 1.3 MiB/s wr, 158 op/s
Nov 22 09:13:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.715 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.716 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.728 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.802 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.803 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.812 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.813 253665 INFO nova.compute.claims [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:36 compute-0 nova_compute[253661]: 2025-11-22 09:13:36.956 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:37 compute-0 ceph-mon[75021]: pgmap v1445: 305 pgs: 305 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 530 KiB/s rd, 1.3 MiB/s wr, 158 op/s
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.152 253665 INFO nova.virt.libvirt.driver [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Deleting instance files /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43_del
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.154 253665 INFO nova.virt.libvirt.driver [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Deletion of /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43_del complete
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.163 253665 DEBUG nova.storage.rbd_utils [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(c876d1f3af7349849fe1b0843a119603) on rbd image(fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.210 253665 INFO nova.compute.manager [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 9.19 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.211 253665 DEBUG oslo.service.loopingcall [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.212 253665 DEBUG nova.compute.manager [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.212 253665 DEBUG nova.network.neutron [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 203 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Nov 22 09:13:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047610751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.453 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.460 253665 DEBUG nova.compute.provider_tree [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.474 253665 DEBUG nova.scheduler.client.report [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.516 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.518 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.522 253665 WARNING nova.compute.manager [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Image not found during snapshot: nova.exception.ImageNotFound: Image f0529002-2053-42db-9df6-b6de9ee91ca3 could not be found.
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.571 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.572 253665 DEBUG nova.network.neutron [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.596 253665 INFO nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.612 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.690 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.693 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.694 253665 INFO nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Creating image(s)
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.724 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.752 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.779 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.784 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.865 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.866 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.867 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.868 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.895 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:37 compute-0 nova_compute[253661]: 2025-11-22 09:13:37.900 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 22 09:13:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1047610751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 22 09:13:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.098 253665 DEBUG nova.network.neutron [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.100 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.192 253665 DEBUG nova.storage.rbd_utils [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(966bc045-2577-4f43-8aa6-5e9a6606256f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.282 253665 DEBUG nova.compute.manager [req-e23be5cc-ee6f-4abb-be5f-16181b4038f0 req-bb04cdfd-4d54-4c7d-b0fd-d50c31ee0e74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-deleted-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.283 253665 INFO nova.compute.manager [req-e23be5cc-ee6f-4abb-be5f-16181b4038f0 req-bb04cdfd-4d54-4c7d-b0fd-d50c31ee0e74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Neutron deleted interface 995224e6-d1ff-4d74-bca5-3996eb4d404d; detaching it from the instance and deleting it from the info cache
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.284 253665 DEBUG nova.network.neutron [req-e23be5cc-ee6f-4abb-be5f-16181b4038f0 req-bb04cdfd-4d54-4c7d-b0fd-d50c31ee0e74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.304 253665 DEBUG nova.compute.manager [req-e23be5cc-ee6f-4abb-be5f-16181b4038f0 req-bb04cdfd-4d54-4c7d-b0fd-d50c31ee0e74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Detach interface failed, port_id=995224e6-d1ff-4d74-bca5-3996eb4d404d, reason: Instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.313 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.378 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] resizing rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.483 253665 DEBUG nova.objects.instance [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'migration_context' on Instance uuid 60845b2b-40a4-4c4b-8a88-333a8f5e233c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.497 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.497 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Ensure instance console log exists: /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.497 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.498 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.498 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.499 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.503 253665 WARNING nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.507 253665 DEBUG nova.virt.libvirt.host [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.508 253665 DEBUG nova.virt.libvirt.host [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.511 253665 DEBUG nova.virt.libvirt.host [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.511 253665 DEBUG nova.virt.libvirt.host [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.511 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.512 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.512 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.512 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.512 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.512 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.513 253665 DEBUG nova.virt.hardware [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.516 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.792 253665 DEBUG nova.network.neutron [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.814 253665 INFO nova.compute.manager [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 1.60 seconds to deallocate network for instance.
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.863 253665 DEBUG nova.compute.manager [req-00178909-4057-43ff-b454-54500a238803 req-16f37f5f-f092-463a-8b52-85174d799ed9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-deleted-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.866 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.866 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855102527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:38 compute-0 ovn_controller[152872]: 2025-11-22T09:13:38Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:97:6f 10.100.0.11
Nov 22 09:13:38 compute-0 ovn_controller[152872]: 2025-11-22T09:13:38Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:97:6f 10.100.0.11
Nov 22 09:13:38 compute-0 nova_compute[253661]: 2025-11-22 09:13:38.990 253665 DEBUG oslo_concurrency.processutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.034 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:39 compute-0 ceph-mon[75021]: pgmap v1446: 305 pgs: 305 active+clean; 203 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Nov 22 09:13:39 compute-0 ceph-mon[75021]: osdmap e204: 3 total, 3 up, 3 in
Nov 22 09:13:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1855102527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 22 09:13:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.072 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.076 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 244 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 10 MiB/s wr, 309 op/s
Nov 22 09:13:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3021188534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.474 253665 DEBUG oslo_concurrency.processutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.480 253665 DEBUG nova.compute.provider_tree [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.519 253665 DEBUG nova.scheduler.client.report [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445990758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.543 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.554 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.556 253665 DEBUG nova.objects.instance [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'pci_devices' on Instance uuid 60845b2b-40a4-4c4b-8a88-333a8f5e233c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.568 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <uuid>60845b2b-40a4-4c4b-8a88-333a8f5e233c</uuid>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <name>instance-0000001f</name>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1153503922</nova:name>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:38</nova:creationTime>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:user uuid="0a81519d045643428005367ca3294f1e">tempest-ServersAdminNegativeTestJSON-1642538521-project-member</nova:user>
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <nova:project uuid="fa41c7d4e15a42658da755de028ccabb">tempest-ServersAdminNegativeTestJSON-1642538521</nova:project>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="serial">60845b2b-40a4-4c4b-8a88-333a8f5e233c</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="uuid">60845b2b-40a4-4c4b-8a88-333a8f5e233c</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk">
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config">
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:39 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/console.log" append="off"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:39 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:39 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:39 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:39 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:39 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.571 253665 INFO nova.scheduler.client.report [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance 3c70b093-a92a-4781-8e32-2a7eefde4a43
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.632 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.633 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.633 253665 INFO nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Using config drive
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.652 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.658 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.659 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.659 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.659 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.659 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.660 253665 INFO nova.compute.manager [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Terminating instance
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.661 253665 DEBUG nova.compute.manager [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.663 253665 DEBUG oslo_concurrency.lockutils [None req-bcd0d6f7-534d-4d60-a05e-4ab284e3a338 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:39 compute-0 kernel: tap2f6ebd6c-b4 (unregistering): left promiscuous mode
Nov 22 09:13:39 compute-0 NetworkManager[48920]: <info>  [1763802819.7381] device (tap2f6ebd6c-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 ovn_controller[152872]: 2025-11-22T09:13:39Z|00233|binding|INFO|Releasing lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 from this chassis (sb_readonly=0)
Nov 22 09:13:39 compute-0 ovn_controller[152872]: 2025-11-22T09:13:39Z|00234|binding|INFO|Setting lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 down in Southbound
Nov 22 09:13:39 compute-0 ovn_controller[152872]: 2025-11-22T09:13:39Z|00235|binding|INFO|Removing iface tap2f6ebd6c-b4 ovn-installed in OVS
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:39.758 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:97:6f 10.100.0.11'], port_security=['fa:16:3e:13:97:6f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14600eae-75dc-4ffc-a15a-bdb234f164d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:39.759 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis
Nov 22 09:13:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:39.761 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:39.762 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bbf00b-94b8-4469-a128-381629e6495b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:39.763 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace which is not needed anymore
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 22 09:13:39 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 15.159s CPU time.
Nov 22 09:13:39 compute-0 systemd-machined[215941]: Machine qemu-34-instance-0000001d terminated.
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.839 253665 INFO nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Creating config drive at /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.845 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pxvdcbb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : haproxy version is 2.8.14-c23fe91
Nov 22 09:13:39 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : path to executable is /usr/sbin/haproxy
Nov 22 09:13:39 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [ALERT]    (292716) : Current worker (292724) exited with code 143 (Terminated)
Nov 22 09:13:39 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [WARNING]  (292716) : All workers exited. Exiting... (0)
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 systemd[1]: libpod-c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438.scope: Deactivated successfully.
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 podman[293905]: 2025-11-22 09:13:39.907027674 +0000 UTC m=+0.051346013 container died c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.914 253665 INFO nova.virt.libvirt.driver [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance destroyed successfully.
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.914 253665 DEBUG nova.objects.instance [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'resources' on Instance uuid 14600eae-75dc-4ffc-a15a-bdb234f164d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.928 253665 DEBUG nova.virt.libvirt.vif [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1301305920',id=29,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-febfj4xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:37Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=14600eae-75dc-4ffc-a15a-bdb234f164d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.929 253665 DEBUG nova.network.os_vif_util [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.930 253665 DEBUG nova.network.os_vif_util [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.931 253665 DEBUG os_vif [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.934 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f6ebd6c-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.937 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438-userdata-shm.mount: Deactivated successfully.
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.945 253665 INFO os_vif [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4')
Nov 22 09:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a101ddd0cbf37b355cc7ffa96e374d86e06f82052cc49d6d94999ac85daf5cc3-merged.mount: Deactivated successfully.
Nov 22 09:13:39 compute-0 podman[293905]: 2025-11-22 09:13:39.967530621 +0000 UTC m=+0.111848960 container cleanup c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:13:39 compute-0 systemd[1]: libpod-conmon-c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438.scope: Deactivated successfully.
Nov 22 09:13:39 compute-0 nova_compute[253661]: 2025-11-22 09:13:39.992 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7pxvdcbb" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.023 253665 DEBUG nova.storage.rbd_utils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.028 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:40 compute-0 podman[293969]: 2025-11-22 09:13:40.042831472 +0000 UTC m=+0.049224462 container remove c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.048 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1bb8f3-a43c-4c9d-8b75-387422fd9925]: (4, ('Sat Nov 22 09:13:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438)\nc850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438\nSat Nov 22 09:13:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438)\nc850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.050 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac3b20c-c95f-4175-a371-e316ef42f31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.050 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:40 compute-0 kernel: tap691e79ad-d0: left promiscuous mode
Nov 22 09:13:40 compute-0 ceph-mon[75021]: osdmap e205: 3 total, 3 up, 3 in
Nov 22 09:13:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3021188534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3445990758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.065 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.077 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c158b5ad-003b-4e34-9aa1-9453504b665a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.090 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4995cdd6-f3d0-43ef-8354-53a49cce1601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c1ac03-c21e-4842-af44-59168e02c985]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.115 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c184408e-9dbd-4fde-9ed9-bebd6de525a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564945, 'reachable_time': 29405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294009, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d691e79ad\x2dda5d\x2d4276\x2daa7d\x2d732c2aaedbff.mount: Deactivated successfully.
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.118 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:13:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:40.118 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[add445dd-5eb4-4e90-8386-39835f8344d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.257 253665 DEBUG oslo_concurrency.processutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config 60845b2b-40a4-4c4b-8a88-333a8f5e233c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.258 253665 INFO nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Deleting local config drive /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c/disk.config because it was imported into RBD.
Nov 22 09:13:40 compute-0 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 09:13:40 compute-0 systemd-machined[215941]: New machine qemu-36-instance-0000001f.
Nov 22 09:13:40 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.493 253665 INFO nova.virt.libvirt.driver [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Snapshot image upload complete
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.494 253665 INFO nova.compute.manager [None req-30382c38-9a67-42b8-b8f9-653c75ac3487 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 12.01 seconds to snapshot the instance on the hypervisor.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.501 253665 INFO nova.virt.libvirt.driver [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Deleting instance files /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0_del
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.502 253665 INFO nova.virt.libvirt.driver [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Deletion of /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0_del complete
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.582 253665 INFO nova.compute.manager [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.582 253665 DEBUG oslo.service.loopingcall [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.582 253665 DEBUG nova.compute.manager [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.583 253665 DEBUG nova.network.neutron [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.637 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802820.637387, 60845b2b-40a4-4c4b-8a88-333a8f5e233c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.638 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] VM Resumed (Lifecycle Event)
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.641 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.642 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.645 253665 INFO nova.virt.libvirt.driver [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance spawned successfully.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.645 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.660 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.668 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.672 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.672 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.673 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.673 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.673 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.674 253665 DEBUG nova.virt.libvirt.driver [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:40 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.698 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.698 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802820.640747, 60845b2b-40a4-4c4b-8a88-333a8f5e233c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.698 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] VM Started (Lifecycle Event)
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.724 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.727 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.745 253665 INFO nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Took 3.05 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.745 253665 DEBUG nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.746 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.803 253665 INFO nova.compute.manager [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Took 4.04 seconds to build instance.
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.816 253665 DEBUG oslo_concurrency.lockutils [None req-7975b93c-c20d-485f-965c-b3d2d09555e4 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.940 253665 DEBUG nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-unplugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.941 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.941 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.941 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.941 253665 DEBUG nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] No waiting events found dispatching network-vif-unplugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.942 253665 DEBUG nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-unplugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.942 253665 DEBUG nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.942 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.943 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.943 253665 DEBUG oslo_concurrency.lockutils [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.943 253665 DEBUG nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] No waiting events found dispatching network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:40 compute-0 nova_compute[253661]: 2025-11-22 09:13:40.943 253665 WARNING nova.compute.manager [req-ac4d0834-776b-4abd-b5d6-200815d2ff3d req-b48ef3ad-33d5-4fa9-929c-3d807d3fef36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received unexpected event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for instance with vm_state active and task_state deleting.
Nov 22 09:13:41 compute-0 ceph-mon[75021]: pgmap v1449: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 244 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 10 MiB/s wr, 309 op/s
Nov 22 09:13:41 compute-0 ceph-mon[75021]: osdmap e206: 3 total, 3 up, 3 in
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.093 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802806.0878515, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.093 253665 INFO nova.compute.manager [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Stopped (Lifecycle Event)
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.111 253665 DEBUG nova.compute.manager [None req-f35f165b-f11e-469d-9288-b67135501d79 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.115 253665 DEBUG nova.compute.manager [None req-f35f165b-f11e-469d-9288-b67135501d79 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:41.155 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:41.156 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:13:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 244 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.7 MiB/s wr, 212 op/s
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.278 253665 DEBUG nova.network.neutron [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.298 253665 INFO nova.compute.manager [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 0.72 seconds to deallocate network for instance.
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.347 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.347 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:41 compute-0 nova_compute[253661]: 2025-11-22 09:13:41.444 253665 DEBUG oslo_concurrency.processutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085783104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.025 253665 DEBUG oslo_concurrency.processutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.033 253665 DEBUG nova.compute.provider_tree [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.049 253665 DEBUG nova.scheduler.client.report [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.073 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 22 09:13:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1085783104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 22 09:13:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.108 253665 INFO nova.scheduler.client.report [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Deleted allocations for instance 14600eae-75dc-4ffc-a15a-bdb234f164d0
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.179 253665 DEBUG oslo_concurrency.lockutils [None req-ad888ed8-a504-43d4-a5a7-13501e33fa8b 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.785 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.786 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.786 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.787 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.787 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.789 253665 INFO nova.compute.manager [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Terminating instance
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.790 253665 DEBUG nova.compute.manager [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.798 253665 INFO nova.virt.libvirt.driver [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance destroyed successfully.
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.799 253665 DEBUG nova.objects.instance [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.814 253665 DEBUG nova.virt.libvirt.vif [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2140089311',display_name='tempest-ImagesTestJSON-server-2140089311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2140089311',id=30,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-63rtx74t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:40Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=fd1f1ba2-6963-47bb-8d59-86e2ed015ad1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.815 253665 DEBUG nova.network.os_vif_util [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.816 253665 DEBUG nova.network.os_vif_util [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.816 253665 DEBUG os_vif [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.819 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c059df4-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:42 compute-0 nova_compute[253661]: 2025-11-22 09:13:42.868 253665 INFO os_vif [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5')
Nov 22 09:13:43 compute-0 ceph-mon[75021]: pgmap v1451: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 244 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.7 MiB/s wr, 212 op/s
Nov 22 09:13:43 compute-0 ceph-mon[75021]: osdmap e207: 3 total, 3 up, 3 in
Nov 22 09:13:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 233 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.6 MiB/s wr, 301 op/s
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.426 253665 INFO nova.virt.libvirt.driver [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Deleting instance files /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_del
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.427 253665 INFO nova.virt.libvirt.driver [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Deletion of /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_del complete
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.486 253665 DEBUG nova.compute.manager [req-14374865-a354-4c3d-b268-5b3f0329bad1 req-92a96f2e-ac99-4927-b3ec-3ead74693ad6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-deleted-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.492 253665 INFO nova.compute.manager [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.492 253665 DEBUG oslo.service.loopingcall [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.493 253665 DEBUG nova.compute.manager [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.493 253665 DEBUG nova.network.neutron [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.742 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.743 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.759 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.794 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802808.6898367, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.794 253665 INFO nova.compute.manager [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Stopped (Lifecycle Event)
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.815 253665 DEBUG nova.compute.manager [None req-defdaa08-0fe8-4b7f-91fd-359381990325 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.843 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.844 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.852 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.853 253665 INFO nova.compute.claims [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:43 compute-0 nova_compute[253661]: 2025-11-22 09:13:43.990 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.304 253665 DEBUG nova.network.neutron [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.324 253665 INFO nova.compute.manager [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 0.83 seconds to deallocate network for instance.
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.365 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369063750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.441 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.448 253665 DEBUG nova.compute.provider_tree [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.461 253665 DEBUG nova.scheduler.client.report [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.483 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.484 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.487 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.531 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.531 253665 DEBUG nova.network.neutron [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.548 253665 INFO nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.571 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.589 253665 DEBUG oslo_concurrency.processutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.632 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.633 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.672 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.696 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.698 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.698 253665 INFO nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Creating image(s)
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.720 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.744 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.768 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.771 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.819 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.842 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.843 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.844 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.844 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.871 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.876 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.951 253665 DEBUG nova.network.neutron [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:13:44 compute-0 nova_compute[253661]: 2025-11-22 09:13:44.952 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859760930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.044 253665 DEBUG oslo_concurrency.processutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.050 253665 DEBUG nova.compute.provider_tree [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.069 253665 DEBUG nova.scheduler.client.report [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.099 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.101 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.109 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.110 253665 INFO nova.compute.claims [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.122 253665 INFO nova.scheduler.client.report [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance fd1f1ba2-6963-47bb-8d59-86e2ed015ad1
Nov 22 09:13:45 compute-0 ceph-mon[75021]: pgmap v1453: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 233 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.6 MiB/s wr, 301 op/s
Nov 22 09:13:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3369063750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1859760930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.199 253665 DEBUG oslo_concurrency.lockutils [None req-4aa79fde-2d3e-4fcd-bad5-5e6c017f4517 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.239 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.239 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 114 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 345 op/s
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.253 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.310 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.342 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.368 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.401 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] resizing rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.525 253665 DEBUG nova.objects.instance [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'migration_context' on Instance uuid e33e156c-7752-4e0c-82db-bb1bdde4c8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.543 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.544 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Ensure instance console log exists: /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.545 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.545 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.546 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.548 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.554 253665 WARNING nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.560 253665 DEBUG nova.virt.libvirt.host [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.560 253665 DEBUG nova.virt.libvirt.host [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.564 253665 DEBUG nova.virt.libvirt.host [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.566 253665 DEBUG nova.virt.libvirt.host [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.566 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.567 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.567 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.568 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.568 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.568 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.568 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.569 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.569 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.569 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.569 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.570 253665 DEBUG nova.virt.hardware [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.574 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.617 253665 DEBUG nova.compute.manager [req-9f2a2370-5769-4c8a-bd54-c45b9063c5f7 req-5c894d51-c22c-4441-a8e5-9290033c9fc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-deleted-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573594001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.776 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.782 253665 DEBUG nova.compute.provider_tree [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.796 253665 DEBUG nova.scheduler.client.report [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.821 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.822 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.825 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.833 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.834 253665 INFO nova.compute.claims [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.885 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.885 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.910 253665 INFO nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.925 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:45 compute-0 nova_compute[253661]: 2025-11-22 09:13:45.998 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230890620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.035 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.037 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.037 253665 INFO nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Creating image(s)
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.060 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.084 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.111 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.116 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 ceph-mon[75021]: pgmap v1454: 305 pgs: 305 active+clean; 114 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 345 op/s
Nov 22 09:13:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3573594001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3230890620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.150 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.177 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.186 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.221 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.222 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.223 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.223 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.247 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.250 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427518684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.510 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.517 253665 DEBUG nova.compute.provider_tree [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.535 253665 DEBUG nova.scheduler.client.report [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.568 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.571 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.612 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.612 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.629 253665 INFO nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.645 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294943416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.682 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.684 253665 DEBUG nova.objects.instance [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'pci_devices' on Instance uuid e33e156c-7752-4e0c-82db-bb1bdde4c8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.703 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <uuid>e33e156c-7752-4e0c-82db-bb1bdde4c8e5</uuid>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <name>instance-00000020</name>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-357386522</nova:name>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:45</nova:creationTime>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:user uuid="0a81519d045643428005367ca3294f1e">tempest-ServersAdminNegativeTestJSON-1642538521-project-member</nova:user>
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <nova:project uuid="fa41c7d4e15a42658da755de028ccabb">tempest-ServersAdminNegativeTestJSON-1642538521</nova:project>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="serial">e33e156c-7752-4e0c-82db-bb1bdde4c8e5</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="uuid">e33e156c-7752-4e0c-82db-bb1bdde4c8e5</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk">
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config">
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/console.log" append="off"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:46 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:46 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.718 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.720 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.721 253665 INFO nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Creating image(s)
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.745 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.785 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.803 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.807 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.837 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.903 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.904 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.905 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.905 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.931 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.935 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:46 compute-0 nova_compute[253661]: 2025-11-22 09:13:46.972 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] resizing rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.016 253665 DEBUG nova.policy [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cac95dc532449d964ffb3705dae943', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.020 253665 DEBUG nova.policy [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.025 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.025 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.026 253665 INFO nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Using config drive
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.059 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:47 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.146 253665 DEBUG nova.objects.instance [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.169 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.170 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Ensure instance console log exists: /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.171 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.171 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.172 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.242 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.242 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 94 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 316 op/s
Nov 22 09:13:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2427518684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2294943416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.259 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.310 253665 INFO nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Creating config drive at /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.317 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbd2jzuac execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.351 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.352 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.360 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.361 253665 INFO nova.compute.claims [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.438 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.471 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbd2jzuac" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.496 253665 DEBUG nova.storage.rbd_utils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] rbd image e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.501 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.586 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.689 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.759 253665 DEBUG oslo_concurrency.processutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config e33e156c-7752-4e0c-82db-bb1bdde4c8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.760 253665 INFO nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Deleting local config drive /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5/disk.config because it was imported into RBD.
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.772 253665 DEBUG nova.objects.instance [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c9b56d3-9edf-4e5a-88e4-c0470a193778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.786 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.787 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Ensure instance console log exists: /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.787 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.788 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.788 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:47 compute-0 systemd-machined[215941]: New machine qemu-37-instance-00000020.
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.833 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Successfully created port: c1d9117b-dc46-4b02-a00a-2d78a8027873 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:47 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.851 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Successfully created port: b52cd552-3da3-4a4f-b85c-55c37eff6bf1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:47 compute-0 nova_compute[253661]: 2025-11-22 09:13:47.862 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527976727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.223 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.230 253665 DEBUG nova.compute.provider_tree [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.246 253665 DEBUG nova.scheduler.client.report [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.267 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.268 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.312 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.312 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.330 253665 INFO nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:13:48 compute-0 ceph-mon[75021]: pgmap v1455: 305 pgs: 305 active+clean; 94 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 316 op/s
Nov 22 09:13:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/527976727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.350 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.451 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.453 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.453 253665 INFO nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Creating image(s)
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.474 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.503 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.531 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.537 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.578 253665 DEBUG nova.policy [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.623 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.624 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.625 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.626 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.649 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.653 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:13:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 15K writes, 62K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 15K writes, 4281 syncs, 3.51 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8898 writes, 37K keys, 8898 commit groups, 1.0 writes per commit group, ingest: 40.91 MB, 0.07 MB/s
                                           Interval WAL: 8898 writes, 3159 syncs, 2.82 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.953 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Successfully updated port: c1d9117b-dc46-4b02-a00a-2d78a8027873 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.973 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.974 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.974 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:48 compute-0 podman[294998]: 2025-11-22 09:13:48.979742104 +0000 UTC m=+0.065098251 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:13:48 compute-0 podman[295001]: 2025-11-22 09:13:48.992179499 +0000 UTC m=+0.078210933 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.993 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802828.9928734, e33e156c-7752-4e0c-82db-bb1bdde4c8e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.993 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] VM Resumed (Lifecycle Event)
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.998 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:48 compute-0 nova_compute[253661]: 2025-11-22 09:13:48.998 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.000 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Successfully updated port: b52cd552-3da3-4a4f-b85c-55c37eff6bf1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.006 253665 INFO nova.virt.libvirt.driver [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance spawned successfully.
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.007 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.021 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.022 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.022 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquired lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.023 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.025 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.057 253665 DEBUG nova.compute.manager [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received event network-changed-c1d9117b-dc46-4b02-a00a-2d78a8027873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.058 253665 DEBUG nova.compute.manager [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Refreshing instance network info cache due to event network-changed-c1d9117b-dc46-4b02-a00a-2d78a8027873. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.058 253665 DEBUG oslo_concurrency.lockutils [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.093 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.097 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.098 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.098 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.099 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.099 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.099 253665 DEBUG nova.virt.libvirt.driver [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.108 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.153 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.157 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.157 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802828.9971886, e33e156c-7752-4e0c-82db-bb1bdde4c8e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.158 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] VM Started (Lifecycle Event)
Nov 22 09:13:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:49.158 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.179 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.182 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.186 253665 INFO nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Took 4.49 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.187 253665 DEBUG nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.213 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 197 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.1 MiB/s wr, 391 op/s
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.255 253665 INFO nova.compute.manager [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Took 5.43 seconds to build instance.
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.270 253665 DEBUG oslo_concurrency.lockutils [None req-b427c9a8-f47b-468e-a0f6-010082e5fc70 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.310 253665 DEBUG nova.objects.instance [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.322 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.324 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Ensure instance console log exists: /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.324 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.325 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.325 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.331 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.501 253665 WARNING nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] While synchronizing instance power states, found 5 instances in the database and 2 instances on the hypervisor.
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.502 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 60845b2b-40a4-4c4b-8a88-333a8f5e233c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.502 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid e33e156c-7752-4e0c-82db-bb1bdde4c8e5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.502 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.503 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 6c9b56d3-9edf-4e5a-88e4-c0470a193778 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.503 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.503 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.504 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.504 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.505 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.505 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.505 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.506 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.536 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Successfully created port: 250740a7-7283-491e-b03e-1e30171a9f3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.549 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.551 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:49 compute-0 nova_compute[253661]: 2025-11-22 09:13:49.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:50 compute-0 ceph-mon[75021]: pgmap v1456: 305 pgs: 305 active+clean; 197 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.1 MiB/s wr, 391 op/s
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.513 253665 DEBUG nova.network.neutron [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Updating instance_info_cache with network_info: [{"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.521 253665 DEBUG nova.network.neutron [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updating instance_info_cache with network_info: [{"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.542 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Releasing lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.542 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Instance network_info: |[{"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.546 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Start _get_guest_xml network_info=[{"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.551 253665 WARNING nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.554 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.555 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Instance network_info: |[{"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.557 253665 DEBUG oslo_concurrency.lockutils [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.557 253665 DEBUG nova.network.neutron [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Refreshing network info cache for port c1d9117b-dc46-4b02-a00a-2d78a8027873 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.560 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Start _get_guest_xml network_info=[{"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.563 253665 DEBUG nova.objects.instance [None req-e59ac7ed-243b-4bbf-95be-bd4804b16789 9fa91a7b757d41259b450def96ce164d c6b50197d49045ccb7ff02520c8f09d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid e33e156c-7752-4e0c-82db-bb1bdde4c8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.569 253665 DEBUG nova.virt.libvirt.host [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.570 253665 DEBUG nova.virt.libvirt.host [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.571 253665 WARNING nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.579 253665 DEBUG nova.virt.libvirt.host [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.580 253665 DEBUG nova.virt.libvirt.host [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.580 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.580 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.581 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.581 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.582 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.582 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.582 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.583 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.583 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.583 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.584 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.584 253665 DEBUG nova.virt.hardware [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.587 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.625 253665 DEBUG nova.virt.libvirt.host [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.626 253665 DEBUG nova.virt.libvirt.host [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.630 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802830.6299574, e33e156c-7752-4e0c-82db-bb1bdde4c8e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] VM Paused (Lifecycle Event)
Nov 22 09:13:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.633 253665 DEBUG nova.virt.libvirt.host [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.634 253665 DEBUG nova.virt.libvirt.host [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.634 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.635 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.635 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.636 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.636 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.636 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.637 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.637 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.638 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.638 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.638 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.639 253665 DEBUG nova.virt.hardware [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:50 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.644 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.687 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.693 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.720 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.805 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Successfully updated port: 250740a7-7283-491e-b03e-1e30171a9f3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.822 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.822 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.823 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:50 compute-0 nova_compute[253661]: 2025-11-22 09:13:50.966 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195667623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 22 09:13:51 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 2.854s CPU time.
Nov 22 09:13:51 compute-0 systemd-machined[215941]: Machine qemu-37-instance-00000020 terminated.
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.074 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.122 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.129 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026725573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.183 253665 DEBUG nova.compute.manager [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-changed-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.186 253665 DEBUG nova.compute.manager [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Refreshing instance network info cache due to event network-changed-b52cd552-3da3-4a4f-b85c-55c37eff6bf1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.187 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.187 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.187 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Refreshing network info cache for port b52cd552-3da3-4a4f-b85c-55c37eff6bf1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.189 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.217 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.225 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 197 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 306 op/s
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.266 253665 DEBUG nova.compute.manager [None req-e59ac7ed-243b-4bbf-95be-bd4804b16789 9fa91a7b757d41259b450def96ce164d c6b50197d49045ccb7ff02520c8f09d0 - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301431491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 ceph-mon[75021]: osdmap e208: 3 total, 3 up, 3 in
Nov 22 09:13:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4195667623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2026725573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/301431491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.657 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.659 253665 DEBUG nova.virt.libvirt.vif [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-206874402',display_name='tempest-ImagesOneServerNegativeTestJSON-server-206874402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-206874402',id=33,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-311ounv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:45Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5e1195d2-5c9f-45c5-9646-f82fdf069bd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.660 253665 DEBUG nova.network.os_vif_util [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.661 253665 DEBUG nova.network.os_vif_util [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.662 253665 DEBUG nova.objects.instance [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.674 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <uuid>5e1195d2-5c9f-45c5-9646-f82fdf069bd6</uuid>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <name>instance-00000021</name>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-206874402</nova:name>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:50</nova:creationTime>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:user uuid="96cac95dc532449d964ffb3705dae943">tempest-ImagesOneServerNegativeTestJSON-251054159-project-member</nova:user>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:project uuid="dcedb2f9ed6e43dfa8ecc3854373b0b5">tempest-ImagesOneServerNegativeTestJSON-251054159</nova:project>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:port uuid="b52cd552-3da3-4a4f-b85c-55c37eff6bf1">
Nov 22 09:13:51 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="serial">5e1195d2-5c9f-45c5-9646-f82fdf069bd6</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="uuid">5e1195d2-5c9f-45c5-9646-f82fdf069bd6</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:d6:8b:8c"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="tapb52cd552-3d"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/console.log" append="off"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:51 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:51 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.680 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Preparing to wait for external event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.682 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.682 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.683 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.684 253665 DEBUG nova.virt.libvirt.vif [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-206874402',display_name='tempest-ImagesOneServerNegativeTestJSON-server-206874402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-206874402',id=33,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-311ounv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:45Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5e1195d2-5c9f-45c5-9646-f82fdf069bd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.684 253665 DEBUG nova.network.os_vif_util [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.685 253665 DEBUG nova.network.os_vif_util [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.686 253665 DEBUG os_vif [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.687 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.688 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.693 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb52cd552-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.693 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb52cd552-3d, col_values=(('external_ids', {'iface-id': 'b52cd552-3da3-4a4f-b85c-55c37eff6bf1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:8b:8c', 'vm-uuid': '5e1195d2-5c9f-45c5-9646-f82fdf069bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 NetworkManager[48920]: <info>  [1763802831.6972] manager: (tapb52cd552-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.704 253665 INFO os_vif [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d')
Nov 22 09:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185420149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.754 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.755 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.755 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No VIF found with MAC fa:16:3e:d6:8b:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.756 253665 INFO nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Using config drive
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.782 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.791 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.794 253665 DEBUG nova.virt.libvirt.vif [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1910653746',display_name='tempest-ImagesTestJSON-server-1910653746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1910653746',id=34,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-qyv3s6hf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:46Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6c9b56d3-9edf-4e5a-88e4-c0470a193778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.794 253665 DEBUG nova.network.os_vif_util [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.795 253665 DEBUG nova.network.os_vif_util [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.797 253665 DEBUG nova.objects.instance [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c9b56d3-9edf-4e5a-88e4-c0470a193778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.813 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <uuid>6c9b56d3-9edf-4e5a-88e4-c0470a193778</uuid>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <name>instance-00000022</name>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-1910653746</nova:name>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:50</nova:creationTime>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <nova:port uuid="c1d9117b-dc46-4b02-a00a-2d78a8027873">
Nov 22 09:13:51 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="serial">6c9b56d3-9edf-4e5a-88e4-c0470a193778</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="uuid">6c9b56d3-9edf-4e5a-88e4-c0470a193778</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b1:f0:3f"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <target dev="tapc1d9117b-dc"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/console.log" append="off"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:51 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:51 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:51 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:51 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:51 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.815 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Preparing to wait for external event network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.815 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.815 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.815 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.816 253665 DEBUG nova.virt.libvirt.vif [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1910653746',display_name='tempest-ImagesTestJSON-server-1910653746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1910653746',id=34,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-qyv3s6hf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:46Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6c9b56d3-9edf-4e5a-88e4-c0470a193778,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.816 253665 DEBUG nova.network.os_vif_util [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.817 253665 DEBUG nova.network.os_vif_util [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.818 253665 DEBUG os_vif [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.820 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.820 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1d9117b-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1d9117b-dc, col_values=(('external_ids', {'iface-id': 'c1d9117b-dc46-4b02-a00a-2d78a8027873', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:f0:3f', 'vm-uuid': '6c9b56d3-9edf-4e5a-88e4-c0470a193778'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 NetworkManager[48920]: <info>  [1763802831.8283] manager: (tapc1d9117b-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.837 253665 INFO os_vif [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc')
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.884 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.885 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.885 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:b1:f0:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.886 253665 INFO nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Using config drive
Nov 22 09:13:51 compute-0 nova_compute[253661]: 2025-11-22 09:13:51.927 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:13:52
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.254 253665 INFO nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Creating config drive at /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.260 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpekkfogxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:52 compute-0 podman[295279]: 2025-11-22 09:13:52.405619075 +0000 UTC m=+0.098014110 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.406 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpekkfogxs" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.433 253665 DEBUG nova.storage.rbd_utils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.437 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:52 compute-0 ceph-mon[75021]: pgmap v1458: 305 pgs: 305 active+clean; 197 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 306 op/s
Nov 22 09:13:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4185420149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.846 253665 DEBUG oslo_concurrency.processutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config 5e1195d2-5c9f-45c5-9646-f82fdf069bd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.847 253665 INFO nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Deleting local config drive /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6/disk.config because it was imported into RBD.
Nov 22 09:13:52 compute-0 kernel: tapb52cd552-3d: entered promiscuous mode
Nov 22 09:13:52 compute-0 NetworkManager[48920]: <info>  [1763802832.8996] manager: (tapb52cd552-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 22 09:13:52 compute-0 ovn_controller[152872]: 2025-11-22T09:13:52Z|00236|binding|INFO|Claiming lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for this chassis.
Nov 22 09:13:52 compute-0 ovn_controller[152872]: 2025-11-22T09:13:52Z|00237|binding|INFO|b52cd552-3da3-4a4f-b85c-55c37eff6bf1: Claiming fa:16:3e:d6:8b:8c 10.100.0.12
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.913 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:8b:8c 10.100.0.12'], port_security=['fa:16:3e:d6:8b:8c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e1195d2-5c9f-45c5-9646-f82fdf069bd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b52cd552-3da3-4a4f-b85c-55c37eff6bf1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.914 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b52cd552-3da3-4a4f-b85c-55c37eff6bf1 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff bound to our chassis
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.915 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:52 compute-0 ovn_controller[152872]: 2025-11-22T09:13:52Z|00238|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 ovn-installed in OVS
Nov 22 09:13:52 compute-0 ovn_controller[152872]: 2025-11-22T09:13:52Z|00239|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 up in Southbound
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.933 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8b72a7-d434-482e-ba82-5d6ec04a3c81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.934 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691e79ad-d1 in ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.935 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.937 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691e79ad-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4eff37c0-7db9-4ec4-a5b4-55ad04dfad66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:52 compute-0 systemd-udevd[295361]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.938 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6674f922-d514-40b4-8948-6f1feff7179b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.940 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:52 compute-0 systemd-machined[215941]: New machine qemu-38-instance-00000021.
Nov 22 09:13:52 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000021.
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.953 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1160345b-87ba-4228-82ec-3cc22ab1157c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:52 compute-0 NetworkManager[48920]: <info>  [1763802832.9577] device (tapb52cd552-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:52 compute-0 NetworkManager[48920]: <info>  [1763802832.9586] device (tapb52cd552-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:52.973 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[45681c7b-58a3-442f-a459-91aeb6224c2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.985 253665 INFO nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Creating config drive at /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config
Nov 22 09:13:52 compute-0 nova_compute[253661]: 2025-11-22 09:13:52.991 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4_ftuk7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.014 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[33833a41-3d2e-4843-b27f-c07099d8b7a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.0226] manager: (tap691e79ad-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.022 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5720eae-15e2-4afb-9eac-d31c892dc728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e813b8d-9579-4135-8a7c-5f0e106f7c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.065 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf55ede-5d36-47b2-8584-c0b8af40ac3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.1001] device (tap691e79ad-d0): carrier: link connected
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.108 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[868296f9-f0be-44b7-8ff9-64b00572bddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.128 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4_ftuk7" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.128 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a2eb0469-d91e-4647-b891-aa162be8a5ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568383, 'reachable_time': 16259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295400, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5a36d9-eb4b-400b-997f-45392bb73a1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f9e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568383, 'tstamp': 568383}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295401, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.156 253665 DEBUG nova.storage.rbd_utils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.163 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.180 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ba924d-5697-45eb-b974-1b5feb7d509e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568383, 'reachable_time': 16259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295420, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8394ffed-d633-42b1-997a-3258d2dfbab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 244 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.3 MiB/s wr, 301 op/s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.296 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d475f34-3dc3-4b8e-a369-71014e14651f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.298 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.298 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.299 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691e79ad-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.3015] manager: (tap691e79ad-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.306 253665 DEBUG nova.network.neutron [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updated VIF entry in instance network info cache for port c1d9117b-dc46-4b02-a00a-2d78a8027873. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.307 253665 DEBUG nova.network.neutron [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updating instance_info_cache with network_info: [{"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:53 compute-0 kernel: tap691e79ad-d0: entered promiscuous mode
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.311 253665 DEBUG nova.network.neutron [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.312 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691e79ad-d0, col_values=(('external_ids', {'iface-id': '6b990e4f-df30-4562-9550-e3e0ea811f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 ovn_controller[152872]: 2025-11-22T09:13:53Z|00240|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.325 253665 DEBUG oslo_concurrency.lockutils [req-45078f46-41a3-4ee3-b088-640970ed3075 req-44fc266e-da3e-4dae-bc23-c25694e7889b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.328 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.329 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance network_info: |[{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.333 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start _get_guest_xml network_info=[{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.339 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.340 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2e44c8-771a-4fb1-b5e8-f5e45769d0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.341 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.342 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'env', 'PROCESS_TAG=haproxy-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691e79ad-da5d-4276-aa7d-732c2aaedbff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.351 253665 WARNING nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.364 253665 DEBUG nova.virt.libvirt.host [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.365 253665 DEBUG nova.virt.libvirt.host [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.369 253665 DEBUG nova.virt.libvirt.host [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.370 253665 DEBUG nova.virt.libvirt.host [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.370 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.371 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.372 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.373 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.373 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.373 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.374 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.374 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.374 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.375 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.375 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.376 253665 DEBUG nova.virt.hardware [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.381 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.450 253665 DEBUG oslo_concurrency.processutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config 6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.451 253665 INFO nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Deleting local config drive /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778/disk.config because it was imported into RBD.
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.5202] manager: (tapc1d9117b-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 22 09:13:53 compute-0 kernel: tapc1d9117b-dc: entered promiscuous mode
Nov 22 09:13:53 compute-0 systemd-udevd[295391]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:13:53 compute-0 ovn_controller[152872]: 2025-11-22T09:13:53Z|00241|binding|INFO|Claiming lport c1d9117b-dc46-4b02-a00a-2d78a8027873 for this chassis.
Nov 22 09:13:53 compute-0 ovn_controller[152872]: 2025-11-22T09:13:53Z|00242|binding|INFO|c1d9117b-dc46-4b02-a00a-2d78a8027873: Claiming fa:16:3e:b1:f0:3f 10.100.0.10
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.5338] device (tapc1d9117b-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:53 compute-0 NetworkManager[48920]: <info>  [1763802833.5344] device (tapc1d9117b-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:53.538 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:f0:3f 10.100.0.10'], port_security=['fa:16:3e:b1:f0:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9b56d3-9edf-4e5a-88e4-c0470a193778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c1d9117b-dc46-4b02-a00a-2d78a8027873) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:53 compute-0 systemd-machined[215941]: New machine qemu-39-instance-00000022.
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000022.
Nov 22 09:13:53 compute-0 ovn_controller[152872]: 2025-11-22T09:13:53Z|00243|binding|INFO|Setting lport c1d9117b-dc46-4b02-a00a-2d78a8027873 ovn-installed in OVS
Nov 22 09:13:53 compute-0 ovn_controller[152872]: 2025-11-22T09:13:53Z|00244|binding|INFO|Setting lport c1d9117b-dc46-4b02-a00a-2d78a8027873 up in Southbound
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.740 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802833.7340567, 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.740 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] VM Started (Lifecycle Event)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.759 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.769 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802833.7343678, 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.769 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] VM Paused (Lifecycle Event)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.782 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.787 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.807 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:53 compute-0 podman[295553]: 2025-11-22 09:13:53.810034283 +0000 UTC m=+0.093463637 container create c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:13:53 compute-0 podman[295553]: 2025-11-22 09:13:53.742828871 +0000 UTC m=+0.026258385 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:13:53 compute-0 systemd[1]: Started libpod-conmon-c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68.scope.
Nov 22 09:13:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087524637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2f248946451736b315236ddb8cc113564cad8d8ffe880a8bb3a464d3c7c7d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.894 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:53 compute-0 podman[295553]: 2025-11-22 09:13:53.913595329 +0000 UTC m=+0.197024713 container init c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:13:53 compute-0 podman[295553]: 2025-11-22 09:13:53.920972181 +0000 UTC m=+0.204401535 container start c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.925 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:53 compute-0 nova_compute[253661]: 2025-11-22 09:13:53.933 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:53 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [NOTICE]   (295591) : New worker (295595) forked
Nov 22 09:13:53 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [NOTICE]   (295591) : Loading success.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.006 253665 DEBUG nova.compute.manager [req-b6ce21ba-8c71-48c0-b3d6-f447eec180b9 req-171d51c5-5075-45c8-b9f7-6de096e9e335 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.007 253665 DEBUG oslo_concurrency.lockutils [req-b6ce21ba-8c71-48c0-b3d6-f447eec180b9 req-171d51c5-5075-45c8-b9f7-6de096e9e335 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.008 253665 DEBUG oslo_concurrency.lockutils [req-b6ce21ba-8c71-48c0-b3d6-f447eec180b9 req-171d51c5-5075-45c8-b9f7-6de096e9e335 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.008 253665 DEBUG oslo_concurrency.lockutils [req-b6ce21ba-8c71-48c0-b3d6-f447eec180b9 req-171d51c5-5075-45c8-b9f7-6de096e9e335 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.009 253665 DEBUG nova.compute.manager [req-b6ce21ba-8c71-48c0-b3d6-f447eec180b9 req-171d51c5-5075-45c8-b9f7-6de096e9e335 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Processing event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.010 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.017 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802834.0171287, 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.019 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] VM Resumed (Lifecycle Event)
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.020 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c1d9117b-dc46-4b02-a00a-2d78a8027873 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.021 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.023 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Instance spawned successfully.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.028 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.035 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eca18e4a-3cf1-4270-9032-7da4e46005b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.036 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.037 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.037 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5184d62b-bbeb-4076-9a9c-020d0f1f5b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.038 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f20b39f5-872c-4c20-affd-15b2f6325fb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.040 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.044 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.052 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2d3365-32d0-4517-8a24-2a698c57074e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.055 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.055 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.056 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.056 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.062 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.079 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c1f84-ce9c-4bae-beed-aec8d538dbc7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.108 253665 INFO nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Took 8.07 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.108 253665 DEBUG nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.121 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[aac0d40d-1de1-4c8c-a29d-49b06794dcb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 NetworkManager[48920]: <info>  [1763802834.1368] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.134 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30c51317-eb9a-4750-b5ce-5e02c63e89e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.179 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802834.1739373, 6c9b56d3-9edf-4e5a-88e4-c0470a193778 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.180 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] VM Started (Lifecycle Event)
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.184 253665 INFO nova.compute.manager [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Took 9.38 seconds to build instance.
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.188 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3ed609-37bd-46a9-bb8e-181685368000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.191 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8bc5ed-d7a8-4f1b-9c34-eb3ea6ad5b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.209 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.213 253665 DEBUG oslo_concurrency.lockutils [None req-adb5e79d-cc8e-406a-813f-3bb3304e6561 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.214 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.214 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.214 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.217 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802834.1793747, 6c9b56d3-9edf-4e5a-88e4-c0470a193778 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.217 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] VM Paused (Lifecycle Event)
Nov 22 09:13:54 compute-0 NetworkManager[48920]: <info>  [1763802834.2204] device (tap2abeeeb2-20): carrier: link connected
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.229 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6458bea0-2980-4c61-882c-6b2d8991e363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.236 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Updated VIF entry in instance network info cache for port b52cd552-3da3-4a4f-b85c-55c37eff6bf1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.237 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Updating instance_info_cache with network_info: [{"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.250 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.252 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-5e1195d2-5c9f-45c5-9646-f82fdf069bd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.252 253665 DEBUG nova.compute.manager [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.253 253665 DEBUG nova.compute.manager [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.252 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[869b2cac-be19-49ec-8758-33fc882820d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568495, 'reachable_time': 17944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295676, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.253 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.254 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.261 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb04d6e3-04b8-4ef8-bd7f-8e9e5c8d1c5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568495, 'tstamp': 568495}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295677, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b411bf1d-e447-40f9-b570-3aa1bba0a137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568495, 'reachable_time': 17944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295678, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7adaa089-0da4-42d8-b18a-58f6979472c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.403 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eea602b8-0eb0-4d6b-959f-3d27ad88b9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.404 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.404 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 NetworkManager[48920]: <info>  [1763802834.4073] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 22 09:13:54 compute-0 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.414 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 ovn_controller[152872]: 2025-11-22T09:13:54Z|00245|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.419 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2a7b33-1e92-4cb4-8297-39eade75700d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.430 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:13:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:54.430 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.436 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.455 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:13:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003617927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.530 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.532 253665 DEBUG nova.virt.libvirt.vif [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.532 253665 DEBUG nova.network.os_vif_util [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.533 253665 DEBUG nova.network.os_vif_util [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.534 253665 DEBUG nova.objects.instance [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.547 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <uuid>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</uuid>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <name>instance-00000023</name>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:13:53</nova:creationTime>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 09:13:54 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <system>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="serial">8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="uuid">8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </system>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <os>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </os>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <features>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </features>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk">
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config">
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:13:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:0e:fa:90"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <target dev="tap250740a7-72"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log" append="off"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <video>
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </video>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:13:54 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:13:54 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:13:54 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:13:54 compute-0 nova_compute[253661]: </domain>
Nov 22 09:13:54 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.547 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Preparing to wait for external event network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.559 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.560 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.560 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.561 253665 DEBUG nova.virt.libvirt.vif [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.562 253665 DEBUG nova.network.os_vif_util [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.562 253665 DEBUG nova.network.os_vif_util [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.563 253665 DEBUG os_vif [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.564 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.564 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.567 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap250740a7-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.568 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap250740a7-72, col_values=(('external_ids', {'iface-id': '250740a7-7283-491e-b03e-1e30171a9f3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:fa:90', 'vm-uuid': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 NetworkManager[48920]: <info>  [1763802834.5707] manager: (tap250740a7-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.577 253665 INFO os_vif [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72')
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.630 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.631 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.631 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:0e:fa:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.632 253665 INFO nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Using config drive
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.661 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:54 compute-0 ceph-mon[75021]: pgmap v1459: 305 pgs: 305 active+clean; 244 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.3 MiB/s wr, 301 op/s
Nov 22 09:13:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2087524637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1003617927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:13:54 compute-0 podman[295729]: 2025-11-22 09:13:54.88239863 +0000 UTC m=+0.076674906 container create 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.891 253665 DEBUG nova.compute.manager [req-5ed04e4f-ec51-47b6-bbc0-3843dda4aa01 req-702bba76-723b-4fee-bf5c-21255726f14f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received event network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.892 253665 DEBUG oslo_concurrency.lockutils [req-5ed04e4f-ec51-47b6-bbc0-3843dda4aa01 req-702bba76-723b-4fee-bf5c-21255726f14f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.892 253665 DEBUG oslo_concurrency.lockutils [req-5ed04e4f-ec51-47b6-bbc0-3843dda4aa01 req-702bba76-723b-4fee-bf5c-21255726f14f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.892 253665 DEBUG oslo_concurrency.lockutils [req-5ed04e4f-ec51-47b6-bbc0-3843dda4aa01 req-702bba76-723b-4fee-bf5c-21255726f14f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.892 253665 DEBUG nova.compute.manager [req-5ed04e4f-ec51-47b6-bbc0-3843dda4aa01 req-702bba76-723b-4fee-bf5c-21255726f14f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Processing event network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.893 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.896 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802834.8967087, 6c9b56d3-9edf-4e5a-88e4-c0470a193778 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.897 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] VM Resumed (Lifecycle Event)
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.901 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.905 253665 INFO nova.virt.libvirt.driver [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Instance spawned successfully.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.906 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.910 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802819.9088094, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.911 253665 INFO nova.compute.manager [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Stopped (Lifecycle Event)
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:13:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.924 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 podman[295729]: 2025-11-22 09:13:54.837326903 +0000 UTC m=+0.031603199 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.929 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:54 compute-0 systemd[1]: Started libpod-conmon-4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99.scope.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.936 253665 DEBUG nova.compute.manager [None req-97e2618b-1462-4698-90b5-d8e9713b9511 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.938 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.938 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.939 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.939 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.939 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.940 253665 DEBUG nova.virt.libvirt.driver [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.962 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.965 253665 INFO nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Creating config drive at /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config
Nov 22 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe55fc797ec71e81b03d1ddd682b7475efbaba06e45857eb15b78877c4d07b1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:54 compute-0 nova_compute[253661]: 2025-11-22 09:13:54.971 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7sgp6vg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:54 compute-0 podman[295729]: 2025-11-22 09:13:54.991455121 +0000 UTC m=+0.185731417 container init 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:13:54 compute-0 podman[295729]: 2025-11-22 09:13:54.998824872 +0000 UTC m=+0.193101148 container start 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.016 253665 INFO nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Took 8.30 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.017 253665 DEBUG nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:55 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [NOTICE]   (295749) : New worker (295753) forked
Nov 22 09:13:55 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [NOTICE]   (295749) : Loading success.
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.107 253665 INFO nova.compute.manager [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Took 9.81 seconds to build instance.
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.115 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7sgp6vg" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:13:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.2 total, 600.0 interval
                                           Cumulative writes: 15K writes, 62K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 4679 syncs, 3.37 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8577 writes, 32K keys, 8577 commit groups, 1.0 writes per commit group, ingest: 31.68 MB, 0.05 MB/s
                                           Interval WAL: 8577 writes, 3271 syncs, 2.62 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.149 253665 DEBUG nova.storage.rbd_utils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.152 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.193 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.194 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.194 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.195 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.195 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.197 253665 INFO nova.compute.manager [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Terminating instance
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.199 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "refresh_cache-e33e156c-7752-4e0c-82db-bb1bdde4c8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.199 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquired lock "refresh_cache-e33e156c-7752-4e0c-82db-bb1bdde4c8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.199 253665 DEBUG nova.network.neutron [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.201 253665 DEBUG oslo_concurrency.lockutils [None req-7407408d-f398-4c80-a321-ededa766c5b2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.203 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.203 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.204 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 289 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 10 MiB/s wr, 291 op/s
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.359 253665 DEBUG oslo_concurrency.processutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config 8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.360 253665 INFO nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deleting local config drive /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/disk.config because it was imported into RBD.
Nov 22 09:13:55 compute-0 kernel: tap250740a7-72: entered promiscuous mode
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.4205] manager: (tap250740a7-72): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 ovn_controller[152872]: 2025-11-22T09:13:55Z|00246|binding|INFO|Claiming lport 250740a7-7283-491e-b03e-1e30171a9f3f for this chassis.
Nov 22 09:13:55 compute-0 ovn_controller[152872]: 2025-11-22T09:13:55Z|00247|binding|INFO|250740a7-7283-491e-b03e-1e30171a9f3f: Claiming fa:16:3e:0e:fa:90 10.100.0.13
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.438 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:fa:90 10.100.0.13'], port_security=['fa:16:3e:0e:fa:90 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=250740a7-7283-491e-b03e-1e30171a9f3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.440 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 250740a7-7283-491e-b03e-1e30171a9f3f in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.4435] device (tap250740a7-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.4449] device (tap250740a7-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.446 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:55 compute-0 systemd-machined[215941]: New machine qemu-40-instance-00000023.
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.469 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c625f26-f19d-431c-a566-2ba72c7ca7b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.471 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e2cd359-c1 in ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.474 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e2cd359-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.474 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[775dbffb-dd90-4a93-b2ad-1ab06a8c283f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000023.
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.476 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4931d580-1deb-4f81-8324-7d1b523082db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.495 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2f4281-5008-4a50-8512-631f1aae4364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_controller[152872]: 2025-11-22T09:13:55Z|00248|binding|INFO|Setting lport 250740a7-7283-491e-b03e-1e30171a9f3f ovn-installed in OVS
Nov 22 09:13:55 compute-0 ovn_controller[152872]: 2025-11-22T09:13:55Z|00249|binding|INFO|Setting lport 250740a7-7283-491e-b03e-1e30171a9f3f up in Southbound
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.533 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1b0799-66ad-4617-aff1-dcd577941f87]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.573 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d4d29b-493a-4971-a9d9-a47d19ea559d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.5812] manager: (tap5e2cd359-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43a5252a-0fc4-492c-97ac-6b3184a1ca91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.621 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7315691e-2a79-4716-9821-f3fc42bf8c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.628 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4942ecff-5925-4c8c-b7d9-66d22673aaf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.6682] device (tap5e2cd359-c0): carrier: link connected
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.678 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[85d3e6f8-2d44-4b1e-97d1-05af9eb95a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5183135a-fa0e-4062-b078-33cc6922c024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 15414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295828, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.729 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfba2385-b07e-4441-b763-f493ec178fb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:bd41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568640, 'tstamp': 568640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295829, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.753 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[742be8b1-4ce8-49f5-a3e9-cd05b8e0234e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 15414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295830, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b373e905-4682-4d58-b75a-bbd748913f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7092d8-7844-4536-959c-6fe60f1d0e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:55 compute-0 kernel: tap5e2cd359-c0: entered promiscuous mode
Nov 22 09:13:55 compute-0 NetworkManager[48920]: <info>  [1763802835.9146] manager: (tap5e2cd359-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.917 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:55 compute-0 ovn_controller[152872]: 2025-11-22T09:13:55Z|00250|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.940 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.941 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03a8ff4b-5a2b-4f93-b58b-9a2435c440e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.942 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:13:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:55.943 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'env', 'PROCESS_TAG=haproxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e2cd359-c68f-4256-90e8-0ad40aff8a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:13:55 compute-0 nova_compute[253661]: 2025-11-22 09:13:55.946 253665 DEBUG nova.network.neutron [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.072 253665 DEBUG nova.compute.manager [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.073 253665 DEBUG oslo_concurrency.lockutils [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.073 253665 DEBUG oslo_concurrency.lockutils [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.073 253665 DEBUG oslo_concurrency.lockutils [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.073 253665 DEBUG nova.compute.manager [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.073 253665 WARNING nova.compute.manager [req-58dc79a2-aba2-4168-84d2-db75c06c0d91 req-55a40b6f-4b16-4b4b-a9c4-d0cc6315b3f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state active and task_state None.
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.166 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802836.1660635, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.167 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] VM Started (Lifecycle Event)
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.187 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.191 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802836.166227, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.192 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] VM Paused (Lifecycle Event)
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.215 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.218 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.226 253665 DEBUG nova.network.neutron [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.246 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Releasing lock "refresh_cache-e33e156c-7752-4e0c-82db-bb1bdde4c8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.250 253665 DEBUG nova.compute.manager [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.256 253665 INFO nova.virt.libvirt.driver [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance destroyed successfully.
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.257 253665 DEBUG nova.objects.instance [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'resources' on Instance uuid e33e156c-7752-4e0c-82db-bb1bdde4c8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:56 compute-0 podman[295905]: 2025-11-22 09:13:56.366504517 +0000 UTC m=+0.078663765 container create e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:13:56 compute-0 systemd[1]: Started libpod-conmon-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88.scope.
Nov 22 09:13:56 compute-0 podman[295905]: 2025-11-22 09:13:56.324598647 +0000 UTC m=+0.036757905 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:13:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33cf72e08b1ddd2978dc2e03da03122704dc3ba1214ec5d21f8fe0f1527d854/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:13:56 compute-0 podman[295905]: 2025-11-22 09:13:56.473678721 +0000 UTC m=+0.185837999 container init e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:13:56 compute-0 podman[295905]: 2025-11-22 09:13:56.482132099 +0000 UTC m=+0.194291347 container start e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:13:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : New worker (295944) forked
Nov 22 09:13:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : Loading success.
Nov 22 09:13:56 compute-0 ceph-mon[75021]: pgmap v1460: 305 pgs: 305 active+clean; 289 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 10 MiB/s wr, 291 op/s
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.773 253665 INFO nova.virt.libvirt.driver [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Deleting instance files /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5_del
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.774 253665 INFO nova.virt.libvirt.driver [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Deletion of /var/lib/nova/instances/e33e156c-7752-4e0c-82db-bb1bdde4c8e5_del complete
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.835 253665 INFO nova.compute.manager [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Took 0.58 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.836 253665 DEBUG oslo.service.loopingcall [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.837 253665 DEBUG nova.compute.manager [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.837 253665 DEBUG nova.network.neutron [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.954 253665 DEBUG nova.network.neutron [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.967 253665 DEBUG nova.network.neutron [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:56 compute-0 nova_compute[253661]: 2025-11-22 09:13:56.979 253665 INFO nova.compute.manager [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Took 0.14 seconds to deallocate network for instance.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.013 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.014 253665 DEBUG nova.network.neutron [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.030 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.030 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.031 253665 DEBUG oslo_concurrency.lockutils [req-9b989fec-e622-47b2-89d8-be4ed0fcfdfa req-623ceab8-6875-45a5-b502-ec954c72a144 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.061 253665 DEBUG nova.compute.manager [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received event network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.062 253665 DEBUG oslo_concurrency.lockutils [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.062 253665 DEBUG oslo_concurrency.lockutils [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.063 253665 DEBUG oslo_concurrency.lockutils [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.063 253665 DEBUG nova.compute.manager [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] No waiting events found dispatching network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.064 253665 WARNING nova.compute.manager [req-a6279c7f-e479-4691-a459-26e6e2d8e729 req-1da401a9-3441-4065-9656-d799abd75c9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received unexpected event network-vif-plugged-c1d9117b-dc46-4b02-a00a-2d78a8027873 for instance with vm_state active and task_state None.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.162 253665 DEBUG oslo_concurrency.processutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 297 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 346 op/s
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.516 253665 DEBUG nova.compute.manager [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.565 253665 INFO nova.compute.manager [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] instance snapshotting
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.577 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.577 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.578 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.578 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.578 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.580 253665 INFO nova.compute.manager [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Terminating instance
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.581 253665 DEBUG nova.compute.manager [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:57 compute-0 kernel: tapb52cd552-3d (unregistering): left promiscuous mode
Nov 22 09:13:57 compute-0 NetworkManager[48920]: <info>  [1763802837.6272] device (tapb52cd552-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:13:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300296014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00251|binding|INFO|Releasing lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 from this chassis (sb_readonly=0)
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00252|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 down in Southbound
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00253|binding|INFO|Removing iface tapb52cd552-3d ovn-installed in OVS
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.649 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:8b:8c 10.100.0.12'], port_security=['fa:16:3e:d6:8b:8c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e1195d2-5c9f-45c5-9646-f82fdf069bd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b52cd552-3da3-4a4f-b85c-55c37eff6bf1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.650 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b52cd552-3da3-4a4f-b85c-55c37eff6bf1 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.652 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.653 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1aca58-2ed7-44b0-855f-b6184badf5f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.653 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace which is not needed anymore
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.676 253665 DEBUG oslo_concurrency.processutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.680 253665 DEBUG nova.compute.provider_tree [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.691 253665 DEBUG nova.scheduler.client.report [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3300296014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:57 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 22 09:13:57 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Consumed 4.169s CPU time.
Nov 22 09:13:57 compute-0 systemd-machined[215941]: Machine qemu-38-instance-00000021 terminated.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.710 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.750 253665 INFO nova.scheduler.client.report [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Deleted allocations for instance e33e156c-7752-4e0c-82db-bb1bdde4c8e5
Nov 22 09:13:57 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [NOTICE]   (295591) : haproxy version is 2.8.14-c23fe91
Nov 22 09:13:57 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [NOTICE]   (295591) : path to executable is /usr/sbin/haproxy
Nov 22 09:13:57 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [WARNING]  (295591) : Exiting Master process...
Nov 22 09:13:57 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [ALERT]    (295591) : Current worker (295595) exited with code 143 (Terminated)
Nov 22 09:13:57 compute-0 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[295569]: [WARNING]  (295591) : All workers exited. Exiting... (0)
Nov 22 09:13:57 compute-0 systemd[1]: libpod-c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68.scope: Deactivated successfully.
Nov 22 09:13:57 compute-0 podman[295998]: 2025-11-22 09:13:57.804293875 +0000 UTC m=+0.048755299 container died c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:13:57 compute-0 kernel: tapb52cd552-3d: entered promiscuous mode
Nov 22 09:13:57 compute-0 NetworkManager[48920]: <info>  [1763802837.8064] manager: (tapb52cd552-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00254|binding|INFO|Claiming lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for this chassis.
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00255|binding|INFO|b52cd552-3da3-4a4f-b85c-55c37eff6bf1: Claiming fa:16:3e:d6:8b:8c 10.100.0.12
Nov 22 09:13:57 compute-0 kernel: tapb52cd552-3d (unregistering): left promiscuous mode
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.810 253665 INFO nova.virt.libvirt.driver [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Beginning live snapshot process
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.816 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:8b:8c 10.100.0.12'], port_security=['fa:16:3e:d6:8b:8c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e1195d2-5c9f-45c5-9646-f82fdf069bd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b52cd552-3da3-4a4f-b85c-55c37eff6bf1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.833 253665 DEBUG oslo_concurrency.lockutils [None req-d8009a68-656f-4736-8d27-e906e5af3846 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "e33e156c-7752-4e0c-82db-bb1bdde4c8e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00256|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 ovn-installed in OVS
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00257|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 up in Southbound
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00258|binding|INFO|Releasing lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 from this chassis (sb_readonly=1)
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00259|if_status|INFO|Not setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 down as sb is readonly
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00260|binding|INFO|Removing iface tapb52cd552-3d ovn-installed in OVS
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.840 253665 INFO nova.virt.libvirt.driver [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Instance destroyed successfully.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.841 253665 DEBUG nova.objects.instance [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'resources' on Instance uuid 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00261|binding|INFO|Releasing lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 from this chassis (sb_readonly=0)
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 ovn_controller[152872]: 2025-11-22T09:13:57Z|00262|binding|INFO|Setting lport b52cd552-3da3-4a4f-b85c-55c37eff6bf1 down in Southbound
Nov 22 09:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68-userdata-shm.mount: Deactivated successfully.
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.849 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:8b:8c 10.100.0.12'], port_security=['fa:16:3e:d6:8b:8c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5e1195d2-5c9f-45c5-9646-f82fdf069bd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b52cd552-3da3-4a4f-b85c-55c37eff6bf1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2f248946451736b315236ddb8cc113564cad8d8ffe880a8bb3a464d3c7c7d3-merged.mount: Deactivated successfully.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.865 253665 DEBUG nova.virt.libvirt.vif [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-206874402',display_name='tempest-ImagesOneServerNegativeTestJSON-server-206874402',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-206874402',id=33,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-311ounv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:54Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5e1195d2-5c9f-45c5-9646-f82fdf069bd6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.866 253665 DEBUG nova.network.os_vif_util [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "address": "fa:16:3e:d6:8b:8c", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb52cd552-3d", "ovs_interfaceid": "b52cd552-3da3-4a4f-b85c-55c37eff6bf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.867 253665 DEBUG nova.network.os_vif_util [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.867 253665 DEBUG os_vif [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:13:57 compute-0 podman[295998]: 2025-11-22 09:13:57.868678287 +0000 UTC m=+0.113139711 container cleanup c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.873 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb52cd552-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 systemd[1]: libpod-conmon-c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68.scope: Deactivated successfully.
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.883 253665 INFO os_vif [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:8b:8c,bridge_name='br-int',has_traffic_filtering=True,id=b52cd552-3da3-4a4f-b85c-55c37eff6bf1,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb52cd552-3d')
Nov 22 09:13:57 compute-0 podman[296028]: 2025-11-22 09:13:57.941735753 +0000 UTC m=+0.049741623 container remove c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.948 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d66e12-c3d6-4773-be16-00f78872ad6a]: (4, ('Sat Nov 22 09:13:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68)\nc3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68\nSat Nov 22 09:13:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (c3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68)\nc3ceedd7f49594c3d59a40e091f75676cef6cc4eaad7f9ca757869f0cd76bb68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.950 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4162b590-377b-4d28-9d7e-91e1eb8081d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.951 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:13:57 compute-0 kernel: tap691e79ad-d0: left promiscuous mode
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.975 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a6f47d-9fcf-4588-a791-b9d2365bfb03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.988 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba89935-43c7-495f-bdc5-21aa459e5f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:57 compute-0 nova_compute[253661]: 2025-11-22 09:13:57.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:57.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bee92d57-2356-4439-ba5e-574519fc7f47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.008 253665 DEBUG nova.virt.libvirt.imagebackend [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.009 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2131dc-c0e1-4320-86e7-f5d4ba2ab067]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568374, 'reachable_time': 29104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296094, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d691e79ad\x2dda5d\x2d4276\x2daa7d\x2d732c2aaedbff.mount: Deactivated successfully.
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.014 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.015 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[86a94d1a-d8da-4587-be57-e7ccc91b040d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.015 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b52cd552-3da3-4a4f-b85c-55c37eff6bf1 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.017 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.017 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9d6c95-d42a-46ef-bfac-d74d080a6610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.018 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b52cd552-3da3-4a4f-b85c-55c37eff6bf1 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.019 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:13:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:13:58.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d06be3bb-5600-40b9-97e2-9262158836ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.169 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.169 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.170 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.170 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.170 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Processing event network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.171 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.172 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.172 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.172 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.172 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.172 253665 WARNING nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-250740a7-7283-491e-b03e-1e30171a9f3f for instance with vm_state building and task_state spawning.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.173 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.173 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.174 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.174 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.174 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.175 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.175 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.175 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.175 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.176 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.176 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.176 253665 WARNING nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state active and task_state deleting.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.176 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.176 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.177 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.177 253665 DEBUG oslo_concurrency.lockutils [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.177 253665 DEBUG nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.177 253665 WARNING nova.compute.manager [req-741acd16-a75f-4a80-9480-36d478bacf77 req-4eba6933-1e97-4ce7-89e0-f68dfe6f12bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state active and task_state deleting.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.178 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.182 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802838.1818209, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.182 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] VM Resumed (Lifecycle Event)
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.188 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.192 253665 INFO nova.virt.libvirt.driver [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance spawned successfully.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.193 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.206 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.212 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.215 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.216 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.216 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.216 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.217 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.217 253665 DEBUG nova.virt.libvirt.driver [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.230 253665 DEBUG nova.storage.rbd_utils [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(d9092da5a4b840d0a7054ab9ff85b6d3) on rbd image(6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.261 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.270 253665 INFO nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 9.82 seconds to spawn the instance on the hypervisor.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.271 253665 DEBUG nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.331 253665 INFO nova.compute.manager [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 11.03 seconds to build instance.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.350 253665 INFO nova.virt.libvirt.driver [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Deleting instance files /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6_del
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.351 253665 INFO nova.virt.libvirt.driver [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Deletion of /var/lib/nova/instances/5e1195d2-5c9f-45c5-9646-f82fdf069bd6_del complete
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.354 253665 DEBUG oslo_concurrency.lockutils [None req-e58139e0-516e-4b38-ab32-1e42870f0cbe 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.354 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.354 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.355 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.396 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.397 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.397 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.397 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.397 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.398 253665 INFO nova.compute.manager [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Terminating instance
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.400 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "refresh_cache-60845b2b-40a4-4c4b-8a88-333a8f5e233c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.400 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquired lock "refresh_cache-60845b2b-40a4-4c4b-8a88-333a8f5e233c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.400 253665 DEBUG nova.network.neutron [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.405 253665 INFO nova.compute.manager [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.406 253665 DEBUG oslo.service.loopingcall [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.406 253665 DEBUG nova.compute.manager [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.406 253665 DEBUG nova.network.neutron [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.551 253665 DEBUG nova.network.neutron [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 22 09:13:58 compute-0 ceph-mon[75021]: pgmap v1461: 305 pgs: 305 active+clean; 297 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 346 op/s
Nov 22 09:13:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 22 09:13:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.777 253665 DEBUG nova.network.neutron [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.789 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Releasing lock "refresh_cache-60845b2b-40a4-4c4b-8a88-333a8f5e233c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.790 253665 DEBUG nova.compute.manager [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.895 253665 DEBUG nova.storage.rbd_utils [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk@d9092da5a4b840d0a7054ab9ff85b6d3 to images/9aba632c-c21b-46fd-8be0-d394d1f32c7d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:13:58 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 22 09:13:58 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 13.398s CPU time.
Nov 22 09:13:58 compute-0 systemd-machined[215941]: Machine qemu-36-instance-0000001f terminated.
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.976 253665 DEBUG nova.network.neutron [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:58 compute-0 nova_compute[253661]: 2025-11-22 09:13:58.993 253665 INFO nova.compute.manager [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Took 0.59 seconds to deallocate network for instance.
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.015 253665 INFO nova.virt.libvirt.driver [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance destroyed successfully.
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.015 253665 DEBUG nova.objects.instance [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lazy-loading 'resources' on Instance uuid 60845b2b-40a4-4c4b-8a88-333a8f5e233c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.057 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.058 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.071 253665 DEBUG nova.storage.rbd_utils [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/9aba632c-c21b-46fd-8be0-d394d1f32c7d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:13:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 233 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.0 MiB/s wr, 516 op/s
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.257 253665 DEBUG oslo_concurrency.processutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.427 253665 DEBUG nova.storage.rbd_utils [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(d9092da5a4b840d0a7054ab9ff85b6d3) on rbd image(6c9b56d3-9edf-4e5a-88e4-c0470a193778_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.717 253665 INFO nova.virt.libvirt.driver [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Deleting instance files /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c_del
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.718 253665 INFO nova.virt.libvirt.driver [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Deletion of /var/lib/nova/instances/60845b2b-40a4-4c4b-8a88-333a8f5e233c_del complete
Nov 22 09:13:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 22 09:13:59 compute-0 ceph-mon[75021]: osdmap e209: 3 total, 3 up, 3 in
Nov 22 09:13:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.791 253665 INFO nova.compute.manager [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.792 253665 DEBUG oslo.service.loopingcall [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.793 253665 DEBUG nova.compute.manager [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.793 253665 DEBUG nova.network.neutron [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:13:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 22 09:13:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:13:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279472966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.827 253665 DEBUG oslo_concurrency.processutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.835 253665 DEBUG nova.compute.provider_tree [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.849 253665 DEBUG nova.scheduler.client.report [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.875 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.891 253665 DEBUG nova.storage.rbd_utils [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(9aba632c-c21b-46fd-8be0-d394d1f32c7d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.930 253665 INFO nova.scheduler.client.report [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Deleted allocations for instance 5e1195d2-5c9f-45c5-9646-f82fdf069bd6
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.932 253665 DEBUG nova.network.neutron [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.946 253665 DEBUG nova.network.neutron [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:13:59 compute-0 nova_compute[253661]: 2025-11-22 09:13:59.968 253665 INFO nova.compute.manager [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Took 0.17 seconds to deallocate network for instance.
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.025 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.026 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.028 253665 DEBUG oslo_concurrency.lockutils [None req-017aa121-6cdc-4470-8de9-6cb2f4b113fe 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.099 253665 DEBUG oslo_concurrency.processutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.328 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.329 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.329 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.330 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.330 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.330 253665 WARNING nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state deleted and task_state None.
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.330 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.331 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.331 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.332 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.332 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.332 253665 WARNING nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-unplugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state deleted and task_state None.
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.332 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.333 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.333 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.333 253665 DEBUG oslo_concurrency.lockutils [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5e1195d2-5c9f-45c5-9646-f82fdf069bd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.334 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] No waiting events found dispatching network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.334 253665 WARNING nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received unexpected event network-vif-plugged-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 for instance with vm_state deleted and task_state None.
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.334 253665 DEBUG nova.compute.manager [req-772d78c4-ab71-4ef4-a0e9-ca587bc30016 req-a45b237e-676d-4f8f-9d36-3a6fccc166a0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Received event network-vif-deleted-b52cd552-3da3-4a4f-b85c-55c37eff6bf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:00 compute-0 ovn_controller[152872]: 2025-11-22T09:14:00Z|00263|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:00 compute-0 ovn_controller[152872]: 2025-11-22T09:14:00Z|00264|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:14:00 compute-0 NetworkManager[48920]: <info>  [1763802840.3448] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 22 09:14:00 compute-0 NetworkManager[48920]: <info>  [1763802840.3460] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:00 compute-0 ovn_controller[152872]: 2025-11-22T09:14:00Z|00265|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:00 compute-0 ovn_controller[152872]: 2025-11-22T09:14:00Z|00266|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:14:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:14:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3622 syncs, 3.61 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7003 writes, 29K keys, 7003 commit groups, 1.0 writes per commit group, ingest: 32.08 MB, 0.05 MB/s
                                           Interval WAL: 7003 writes, 2575 syncs, 2.72 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600562964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.556 253665 DEBUG oslo_concurrency.processutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.564 253665 DEBUG nova.compute.provider_tree [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.581 253665 DEBUG nova.scheduler.client.report [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.606 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.630 253665 INFO nova.scheduler.client.report [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Deleted allocations for instance 60845b2b-40a4-4c4b-8a88-333a8f5e233c
Nov 22 09:14:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:00 compute-0 nova_compute[253661]: 2025-11-22 09:14:00.690 253665 DEBUG oslo_concurrency.lockutils [None req-21c40a93-a1cb-44f1-b55c-a0db49ddb51e 0a81519d045643428005367ca3294f1e fa41c7d4e15a42658da755de028ccabb - - default default] Lock "60845b2b-40a4-4c4b-8a88-333a8f5e233c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 22 09:14:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 22 09:14:00 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 22 09:14:00 compute-0 ceph-mon[75021]: pgmap v1463: 305 pgs: 305 active+clean; 233 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.0 MiB/s wr, 516 op/s
Nov 22 09:14:00 compute-0 ceph-mon[75021]: osdmap e210: 3 total, 3 up, 3 in
Nov 22 09:14:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3279472966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/600562964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 233 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 1.4 MiB/s wr, 458 op/s
Nov 22 09:14:01 compute-0 ceph-mon[75021]: osdmap e211: 3 total, 3 up, 3 in
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.425 253665 DEBUG nova.compute.manager [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.425 253665 DEBUG nova.compute.manager [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.425 253665 DEBUG oslo_concurrency.lockutils [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.425 253665 DEBUG oslo_concurrency.lockutils [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.425 253665 DEBUG nova.network.neutron [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001619146602526535 of space, bias 1.0, pg target 0.48574398075796055 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:14:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:14:02 compute-0 ceph-mon[75021]: pgmap v1466: 305 pgs: 305 active+clean; 233 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 1.4 MiB/s wr, 458 op/s
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.916 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:02 compute-0 nova_compute[253661]: 2025-11-22 09:14:02.916 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.107 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.196 253665 INFO nova.virt.libvirt.driver [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Snapshot image upload complete
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.197 253665 INFO nova.compute.manager [None req-c17fac54-718e-44f7-833f-e538c834fd7d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Took 5.63 seconds to snapshot the instance on the hypervisor.
Nov 22 09:14:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 205 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 1.9 MiB/s wr, 412 op/s
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.722 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.723 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.734 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.734 253665 INFO nova.compute.claims [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:14:03 compute-0 sudo[296270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:03 compute-0 sudo[296270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:03 compute-0 sudo[296270]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:03 compute-0 sudo[296295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:14:03 compute-0 sudo[296295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:03 compute-0 sudo[296295]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:03 compute-0 sudo[296320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:03 compute-0 sudo[296320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:03 compute-0 sudo[296320]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:03 compute-0 nova_compute[253661]: 2025-11-22 09:14:03.895 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:03 compute-0 sudo[296345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:14:03 compute-0 sudo[296345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438901812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.347 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.354 253665 DEBUG nova.compute.provider_tree [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.371 253665 DEBUG nova.scheduler.client.report [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.405 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.405 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.472 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.473 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:14:04 compute-0 sudo[296345]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.489 253665 INFO nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.503 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 43b0cc55-c939-4096-9a47-fe8b24503fba does not exist
Nov 22 09:14:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f8f1a0a0-24d5-4f29-8b9b-a4edc4b8f4f9 does not exist
Nov 22 09:14:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 405235cd-c882-4c91-ba33-cb31267093cc does not exist
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:14:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.586 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.590 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.591 253665 INFO nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Creating image(s)
Nov 22 09:14:04 compute-0 sudo[296422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:04 compute-0 sudo[296422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:04 compute-0 sudo[296422]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.623 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.655 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:04 compute-0 sudo[296462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:14:04 compute-0 sudo[296462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:04 compute-0 sudo[296462]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.680 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.688 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:04 compute-0 sudo[296524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:04 compute-0 sudo[296524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:04 compute-0 sudo[296524]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.770 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.771 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.772 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.772 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:04 compute-0 sudo[296552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:14:04 compute-0 sudo[296552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.795 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.801 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bf96e20f-af8f-4db3-977f-cee93b1d7934_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:04 compute-0 nova_compute[253661]: 2025-11-22 09:14:04.934 253665 DEBUG nova.policy [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:14:05 compute-0 ceph-mon[75021]: pgmap v1467: 305 pgs: 305 active+clean; 205 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 1.9 MiB/s wr, 412 op/s
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1438901812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:14:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.043 253665 DEBUG nova.network.neutron [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.044 253665 DEBUG nova.network.neutron [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.069 253665 DEBUG oslo_concurrency.lockutils [req-22862ff3-2606-4be3-a6d9-206bbdf2839d req-8783fcd5-1b03-4f72-b823-34271ed36ce1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.113951773 +0000 UTC m=+0.044149187 container create e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:14:05 compute-0 systemd[1]: Started libpod-conmon-e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac.scope.
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.092263459 +0000 UTC m=+0.022460903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 181 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.3 MiB/s wr, 307 op/s
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.257550462 +0000 UTC m=+0.187747906 container init e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.266884671 +0000 UTC m=+0.197082075 container start e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:14:05 compute-0 great_poincare[296673]: 167 167
Nov 22 09:14:05 compute-0 systemd[1]: libpod-e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac.scope: Deactivated successfully.
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.276802015 +0000 UTC m=+0.206999459 container attach e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.277393619 +0000 UTC m=+0.207591053 container died e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fca95b3ceedbf291469b663c0316762c812800e455452516b4ce3b67f72a0ac-merged.mount: Deactivated successfully.
Nov 22 09:14:05 compute-0 podman[296654]: 2025-11-22 09:14:05.349029921 +0000 UTC m=+0.279227335 container remove e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.356 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bf96e20f-af8f-4db3-977f-cee93b1d7934_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:05 compute-0 systemd[1]: libpod-conmon-e78b17146f58ae6db039db461f36b8f2f3ea3b017a97296db19c055e8f9d08ac.scope: Deactivated successfully.
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.450 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:14:05 compute-0 podman[296749]: 2025-11-22 09:14:05.58216494 +0000 UTC m=+0.074356928 container create 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.621 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Successfully created port: 8c2fda4f-7fa8-479c-8573-592021820968 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:14:05 compute-0 podman[296749]: 2025-11-22 09:14:05.536505658 +0000 UTC m=+0.028697666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:05 compute-0 systemd[1]: Started libpod-conmon-3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc.scope.
Nov 22 09:14:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.727 253665 DEBUG nova.objects.instance [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.739 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.740 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Ensure instance console log exists: /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.740 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.741 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.741 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:05 compute-0 podman[296749]: 2025-11-22 09:14:05.784808901 +0000 UTC m=+0.277000899 container init 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:14:05 compute-0 podman[296749]: 2025-11-22 09:14:05.791786533 +0000 UTC m=+0.283978511 container start 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:14:05 compute-0 ovn_controller[152872]: 2025-11-22T09:14:05Z|00267|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:05 compute-0 ovn_controller[152872]: 2025-11-22T09:14:05Z|00268|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:14:05 compute-0 podman[296749]: 2025-11-22 09:14:05.839964476 +0000 UTC m=+0.332156484 container attach 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:14:05 compute-0 nova_compute[253661]: 2025-11-22 09:14:05.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:06 compute-0 nova_compute[253661]: 2025-11-22 09:14:06.260 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802831.2029364, e33e156c-7752-4e0c-82db-bb1bdde4c8e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:06 compute-0 nova_compute[253661]: 2025-11-22 09:14:06.261 253665 INFO nova.compute.manager [-] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] VM Stopped (Lifecycle Event)
Nov 22 09:14:06 compute-0 nova_compute[253661]: 2025-11-22 09:14:06.286 253665 DEBUG nova.compute.manager [None req-30b466f3-c1ec-47a6-9f98-f438a3ee718b - - - - - -] [instance: e33e156c-7752-4e0c-82db-bb1bdde4c8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:06 compute-0 sad_lewin[296765]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:14:06 compute-0 sad_lewin[296765]: --> relative data size: 1.0
Nov 22 09:14:06 compute-0 sad_lewin[296765]: --> All data devices are unavailable
Nov 22 09:14:06 compute-0 systemd[1]: libpod-3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc.scope: Deactivated successfully.
Nov 22 09:14:06 compute-0 podman[296749]: 2025-11-22 09:14:06.87840189 +0000 UTC m=+1.370593878 container died 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:14:06 compute-0 systemd[1]: libpod-3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc.scope: Consumed 1.025s CPU time.
Nov 22 09:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d45a39fb08879f9c35a747bd7d7645292c9ef6c70363166743a3d73900e04b9d-merged.mount: Deactivated successfully.
Nov 22 09:14:06 compute-0 podman[296749]: 2025-11-22 09:14:06.960935108 +0000 UTC m=+1.453127096 container remove 3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lewin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:14:06 compute-0 systemd[1]: libpod-conmon-3384787d84eb162dca8b98c95b04a5e5a6e7c1d73a65ce0af17fd1dab60fa8dc.scope: Deactivated successfully.
Nov 22 09:14:07 compute-0 sudo[296552]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:07 compute-0 ceph-mon[75021]: pgmap v1468: 305 pgs: 305 active+clean; 181 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.3 MiB/s wr, 307 op/s
Nov 22 09:14:07 compute-0 sudo[296825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:07 compute-0 sudo[296825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:07 compute-0 sudo[296825]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:07 compute-0 sudo[296850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:14:07 compute-0 sudo[296850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:07 compute-0 sudo[296850]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:07 compute-0 sudo[296875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:07 compute-0 sudo[296875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:07 compute-0 sudo[296875]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 204 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 272 op/s
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.261 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Successfully updated port: 8c2fda4f-7fa8-479c-8573-592021820968 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.283 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.284 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.284 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:07 compute-0 sudo[296900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:14:07 compute-0 sudo[296900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.295 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.295 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.313 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.416 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.417 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.426 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.427 253665 INFO nova.compute.claims [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.443 253665 DEBUG nova.compute.manager [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.444 253665 DEBUG nova.compute.manager [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-8c2fda4f-7fa8-479c-8573-592021820968. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.445 253665 DEBUG oslo_concurrency.lockutils [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.457 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.625 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.678938205 +0000 UTC m=+0.052451320 container create 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.6461971 +0000 UTC m=+0.019710235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:07 compute-0 systemd[1]: Started libpod-conmon-0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6.scope.
Nov 22 09:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.828164082 +0000 UTC m=+0.201677217 container init 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.836761154 +0000 UTC m=+0.210274269 container start 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:14:07 compute-0 fervent_ellis[296978]: 167 167
Nov 22 09:14:07 compute-0 systemd[1]: libpod-0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6.scope: Deactivated successfully.
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.846834572 +0000 UTC m=+0.220347717 container attach 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:14:07 compute-0 podman[296963]: 2025-11-22 09:14:07.847337614 +0000 UTC m=+0.220850749 container died 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:14:07 compute-0 nova_compute[253661]: 2025-11-22 09:14:07.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-05063ea88f60431de883ea418f32b8d20a4c7deaa86ced41baa9fc5ce4bfcc37-merged.mount: Deactivated successfully.
Nov 22 09:14:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/556665759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.106 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.111 253665 DEBUG nova.compute.provider_tree [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:08 compute-0 podman[296963]: 2025-11-22 09:14:08.120615401 +0000 UTC m=+0.494128516 container remove 0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.128 253665 DEBUG nova.scheduler.client.report [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:08 compute-0 systemd[1]: libpod-conmon-0acf2f1d25e83be5658a714f05454f5830bbbc9b03e268972f9ea89ca7bf5ca6.scope: Deactivated successfully.
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.158 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.160 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.203 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.204 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.227 253665 INFO nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.246 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.329 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.332 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.333 253665 INFO nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Creating image(s)
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.357 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:08 compute-0 podman[297025]: 2025-11-22 09:14:08.361581833 +0000 UTC m=+0.110852485 container create c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:14:08 compute-0 podman[297025]: 2025-11-22 09:14:08.27481625 +0000 UTC m=+0.024086942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.382 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.419 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.422 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "1dcb56b45ed5d23d1005f166109f402c147dff64" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:08 compute-0 systemd[1]: Started libpod-conmon-c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9.scope.
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.423 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "1dcb56b45ed5d23d1005f166109f402c147dff64" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404916ac18b9192713dca8d2959943ba7b4ba80d1efb516b40d26732bedd1dc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404916ac18b9192713dca8d2959943ba7b4ba80d1efb516b40d26732bedd1dc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404916ac18b9192713dca8d2959943ba7b4ba80d1efb516b40d26732bedd1dc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404916ac18b9192713dca8d2959943ba7b4ba80d1efb516b40d26732bedd1dc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.467 253665 DEBUG nova.policy [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:14:08 compute-0 podman[297025]: 2025-11-22 09:14:08.482958386 +0000 UTC m=+0.232229068 container init c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:14:08 compute-0 podman[297025]: 2025-11-22 09:14:08.491212319 +0000 UTC m=+0.240482971 container start c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:14:08 compute-0 podman[297025]: 2025-11-22 09:14:08.636333986 +0000 UTC m=+0.385604658 container attach c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:14:08 compute-0 ovn_controller[152872]: 2025-11-22T09:14:08Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:f0:3f 10.100.0.10
Nov 22 09:14:08 compute-0 ovn_controller[152872]: 2025-11-22T09:14:08Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:f0:3f 10.100.0.10
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.857 253665 DEBUG nova.network.neutron [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.862 253665 DEBUG nova.virt.libvirt.imagebackend [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/9aba632c-c21b-46fd-8be0-d394d1f32c7d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/9aba632c-c21b-46fd-8be0-d394d1f32c7d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.935 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.937 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance network_info: |[{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.938 253665 DEBUG oslo_concurrency.lockutils [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.938 253665 DEBUG nova.network.neutron [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.941 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start _get_guest_xml network_info=[{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.945 253665 DEBUG nova.virt.libvirt.imagebackend [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/9aba632c-c21b-46fd-8be0-d394d1f32c7d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.946 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning images/9aba632c-c21b-46fd-8be0-d394d1f32c7d@snap to None/01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.989 253665 WARNING nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.995 253665 DEBUG nova.virt.libvirt.host [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:14:08 compute-0 nova_compute[253661]: 2025-11-22 09:14:08.996 253665 DEBUG nova.virt.libvirt.host [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.002 253665 DEBUG nova.virt.libvirt.host [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.003 253665 DEBUG nova.virt.libvirt.host [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.004 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.005 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.006 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.006 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.006 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.007 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.008 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.009 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.009 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.010 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.010 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.010 253665 DEBUG nova.virt.hardware [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.015 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:09 compute-0 ceph-mon[75021]: pgmap v1469: 305 pgs: 305 active+clean; 204 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 272 op/s
Nov 22 09:14:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/556665759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 239 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 22 09:14:09 compute-0 kind_keller[297095]: {
Nov 22 09:14:09 compute-0 kind_keller[297095]:     "0": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:         {
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "devices": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "/dev/loop3"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             ],
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_name": "ceph_lv0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_size": "21470642176",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "name": "ceph_lv0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "tags": {
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_name": "ceph",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.crush_device_class": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.encrypted": "0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_id": "0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.vdo": "0"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             },
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "vg_name": "ceph_vg0"
Nov 22 09:14:09 compute-0 kind_keller[297095]:         }
Nov 22 09:14:09 compute-0 kind_keller[297095]:     ],
Nov 22 09:14:09 compute-0 kind_keller[297095]:     "1": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:         {
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "devices": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "/dev/loop4"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             ],
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_name": "ceph_lv1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_size": "21470642176",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "name": "ceph_lv1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "tags": {
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_name": "ceph",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.crush_device_class": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.encrypted": "0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_id": "1",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.vdo": "0"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             },
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "vg_name": "ceph_vg1"
Nov 22 09:14:09 compute-0 kind_keller[297095]:         }
Nov 22 09:14:09 compute-0 kind_keller[297095]:     ],
Nov 22 09:14:09 compute-0 kind_keller[297095]:     "2": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:         {
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "devices": [
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "/dev/loop5"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             ],
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_name": "ceph_lv2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_size": "21470642176",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "name": "ceph_lv2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "tags": {
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.cluster_name": "ceph",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.crush_device_class": "",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.encrypted": "0",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osd_id": "2",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:                 "ceph.vdo": "0"
Nov 22 09:14:09 compute-0 kind_keller[297095]:             },
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "type": "block",
Nov 22 09:14:09 compute-0 kind_keller[297095]:             "vg_name": "ceph_vg2"
Nov 22 09:14:09 compute-0 kind_keller[297095]:         }
Nov 22 09:14:09 compute-0 kind_keller[297095]:     ]
Nov 22 09:14:09 compute-0 kind_keller[297095]: }
Nov 22 09:14:09 compute-0 systemd[1]: libpod-c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9.scope: Deactivated successfully.
Nov 22 09:14:09 compute-0 podman[297025]: 2025-11-22 09:14:09.368799239 +0000 UTC m=+1.118069901 container died c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:14:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276994978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.489 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.512 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.517 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.769 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Successfully created port: 8adf054c-477f-4973-9b6c-9732286f2337 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962413644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.969 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.971 253665 DEBUG nova.virt.libvirt.vif [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.972 253665 DEBUG nova.network.os_vif_util [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.972 253665 DEBUG nova.network.os_vif_util [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.974 253665 DEBUG nova.objects.instance [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.991 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <uuid>bf96e20f-af8f-4db3-977f-cee93b1d7934</uuid>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <name>instance-00000024</name>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:14:08</nova:creationTime>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 09:14:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="serial">bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="uuid">bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk">
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config">
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:48:a2:dd"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <target dev="tap8c2fda4f-7f"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log" append="off"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:14:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:14:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.992 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Preparing to wait for external event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.993 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.993 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.993 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.994 253665 DEBUG nova.virt.libvirt.vif [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.995 253665 DEBUG nova.network.os_vif_util [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.995 253665 DEBUG nova.network.os_vif_util [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.996 253665 DEBUG os_vif [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.997 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:09 compute-0 nova_compute[253661]: 2025-11-22 09:14:09.997 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.001 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c2fda4f-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.001 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c2fda4f-7f, col_values=(('external_ids', {'iface-id': '8c2fda4f-7fa8-479c-8573-592021820968', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:a2:dd', 'vm-uuid': 'bf96e20f-af8f-4db3-977f-cee93b1d7934'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-404916ac18b9192713dca8d2959943ba7b4ba80d1efb516b40d26732bedd1dc6-merged.mount: Deactivated successfully.
Nov 22 09:14:10 compute-0 NetworkManager[48920]: <info>  [1763802850.0063] manager: (tap8c2fda4f-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.013 253665 INFO os_vif [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f')
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.178 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.178 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.179 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:48:a2:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.179 253665 INFO nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Using config drive
Nov 22 09:14:10 compute-0 nova_compute[253661]: 2025-11-22 09:14:10.199 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:10 compute-0 ceph-mon[75021]: pgmap v1470: 305 pgs: 305 active+clean; 239 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 22 09:14:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2276994978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/962413644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.072 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Successfully updated port: 8adf054c-477f-4973-9b6c-9732286f2337 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.086 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.086 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.087 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.090 253665 INFO nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Creating config drive at /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config
Nov 22 09:14:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.095 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0cy4da6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:11 compute-0 podman[297025]: 2025-11-22 09:14:11.147992478 +0000 UTC m=+2.897263130 container remove c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:14:11 compute-0 sudo[296900]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.190 253665 DEBUG nova.network.neutron [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.191 253665 DEBUG nova.network.neutron [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.203 253665 DEBUG oslo_concurrency.lockutils [req-42de63a8-471b-4304-9ae2-7f1bc41080fe req-babe6252-4e43-4336-8064-3ce55c200248 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.234 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0cy4da6" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:11 compute-0 systemd[1]: libpod-conmon-c8886d39693ced8d1de781c9de93b5a413eea38265c35d9a2dcc0cefec1839f9.scope: Deactivated successfully.
Nov 22 09:14:11 compute-0 sudo[297272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:11 compute-0 sudo[297272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:11 compute-0 sudo[297272]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 239 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.3 MiB/s wr, 260 op/s
Nov 22 09:14:11 compute-0 sudo[297304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:14:11 compute-0 sudo[297304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:11 compute-0 sudo[297304]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.324 253665 DEBUG nova.storage.rbd_utils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.328 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.361 253665 DEBUG nova.compute.manager [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-changed-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.362 253665 DEBUG nova.compute.manager [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Refreshing instance network info cache due to event network-changed-8adf054c-477f-4973-9b6c-9732286f2337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.362 253665 DEBUG oslo_concurrency.lockutils [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:11 compute-0 sudo[297340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 22 09:14:11 compute-0 sudo[297340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.372 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:14:11 compute-0 sudo[297340]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:11 compute-0 sudo[297366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:14:11 compute-0 sudo[297366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:11 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 22 09:14:11 compute-0 nova_compute[253661]: 2025-11-22 09:14:11.731 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "1dcb56b45ed5d23d1005f166109f402c147dff64" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:11 compute-0 podman[297447]: 2025-11-22 09:14:11.755435418 +0000 UTC m=+0.024726219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:12 compute-0 podman[297447]: 2025-11-22 09:14:12.012726101 +0000 UTC m=+0.282016872 container create a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:14:12 compute-0 systemd[1]: Started libpod-conmon-a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564.scope.
Nov 22 09:14:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:14:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478591436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:14:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:14:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478591436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:14:12 compute-0 podman[297447]: 2025-11-22 09:14:12.635973279 +0000 UTC m=+0.905264080 container init a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:14:12 compute-0 podman[297447]: 2025-11-22 09:14:12.649347559 +0000 UTC m=+0.918638330 container start a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:14:12 compute-0 recursing_babbage[297481]: 167 167
Nov 22 09:14:12 compute-0 systemd[1]: libpod-a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564.scope: Deactivated successfully.
Nov 22 09:14:12 compute-0 conmon[297481]: conmon a216debce94d47b69ccf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564.scope/container/memory.events
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.732 253665 DEBUG nova.objects.instance [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid 01e238b6-d7eb-43ed-b69e-507706f9d9f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.764 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.765 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Ensure instance console log exists: /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.766 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.767 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.767 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.828 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802837.8268373, 5e1195d2-5c9f-45c5-9646-f82fdf069bd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.828 253665 INFO nova.compute.manager [-] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] VM Stopped (Lifecycle Event)
Nov 22 09:14:12 compute-0 ceph-mon[75021]: pgmap v1471: 305 pgs: 305 active+clean; 239 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.3 MiB/s wr, 260 op/s
Nov 22 09:14:12 compute-0 ceph-mon[75021]: osdmap e212: 3 total, 3 up, 3 in
Nov 22 09:14:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/478591436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:14:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/478591436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:14:12 compute-0 nova_compute[253661]: 2025-11-22 09:14:12.845 253665 DEBUG nova.compute.manager [None req-2853a7bc-a07f-40f3-a804-198c8429a168 - - - - - -] [instance: 5e1195d2-5c9f-45c5-9646-f82fdf069bd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:12 compute-0 podman[297447]: 2025-11-22 09:14:12.9025231 +0000 UTC m=+1.171813881 container attach a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:14:12 compute-0 podman[297447]: 2025-11-22 09:14:12.903035303 +0000 UTC m=+1.172326084 container died a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.119 253665 DEBUG nova.network.neutron [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Updating instance_info_cache with network_info: [{"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.138 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.139 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Instance network_info: |[{"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.140 253665 DEBUG oslo_concurrency.lockutils [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.141 253665 DEBUG nova.network.neutron [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Refreshing network info cache for port 8adf054c-477f-4973-9b6c-9732286f2337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.147 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Start _get_guest_xml network_info=[{"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:13:57Z,direct_url=<?>,disk_format='raw',id=9aba632c-c21b-46fd-8be0-d394d1f32c7d,min_disk=1,min_ram=0,name='tempest-test-snap-1284247743',owner='d6a9a80b05bf4bb3acb99c5e55603a36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:14:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '9aba632c-c21b-46fd-8be0-d394d1f32c7d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.154 253665 WARNING nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.167 253665 DEBUG nova.virt.libvirt.host [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.168 253665 DEBUG nova.virt.libvirt.host [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.173 253665 DEBUG nova.virt.libvirt.host [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.174 253665 DEBUG nova.virt.libvirt.host [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.175 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.175 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:13:57Z,direct_url=<?>,disk_format='raw',id=9aba632c-c21b-46fd-8be0-d394d1f32c7d,min_disk=1,min_ram=0,name='tempest-test-snap-1284247743',owner='d6a9a80b05bf4bb3acb99c5e55603a36',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:14:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.176 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.176 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.177 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.177 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.177 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.178 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.178 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.179 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.179 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.179 253665 DEBUG nova.virt.hardware [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.184 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 252 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Nov 22 09:14:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056631065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.634 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.656 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.661 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:13 compute-0 nova_compute[253661]: 2025-11-22 09:14:13.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dc766675edc408f00a9d1c7ed80c82154283982395c10ff79e2bc20a8b3e94e-merged.mount: Deactivated successfully.
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.014 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802839.0126922, 60845b2b-40a4-4c4b-8a88-333a8f5e233c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.015 253665 INFO nova.compute.manager [-] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] VM Stopped (Lifecycle Event)
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.055 253665 DEBUG nova.compute.manager [None req-39f46af7-deff-4886-822c-0a8324589390 - - - - - -] [instance: 60845b2b-40a4-4c4b-8a88-333a8f5e233c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043223076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.130 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.132 253665 DEBUG nova.virt.libvirt.vif [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-726785028',display_name='tempest-ImagesTestJSON-server-726785028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-726785028',id=37,image_ref='9aba632c-c21b-46fd-8be0-d394d1f32c7d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-sz540q5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6c9b56d3-9edf-4e5a-88e4-c0470a193778',image_min_disk='1',image_min_ram='0',image_owner_id='d6a9a80b05bf4bb3acb99c5e55603a36',image_owner_project_name='tempest-ImagesTestJSON-1798612164',image_owner_user_name='tempest-ImagesTestJSON-1798612164-project-member',image_user_id='97872d7ce91947789de976821b771135',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=01e238b6-d7eb-43ed-b69e-507706f9d9f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.132 253665 DEBUG nova.network.os_vif_util [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.133 253665 DEBUG nova.network.os_vif_util [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.134 253665 DEBUG nova.objects.instance [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01e238b6-d7eb-43ed-b69e-507706f9d9f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.144 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <uuid>01e238b6-d7eb-43ed-b69e-507706f9d9f3</uuid>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <name>instance-00000025</name>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-726785028</nova:name>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:14:13</nova:creationTime>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="9aba632c-c21b-46fd-8be0-d394d1f32c7d"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <nova:port uuid="8adf054c-477f-4973-9b6c-9732286f2337">
Nov 22 09:14:14 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="serial">01e238b6-d7eb-43ed-b69e-507706f9d9f3</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="uuid">01e238b6-d7eb-43ed-b69e-507706f9d9f3</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk">
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config">
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f9:b9:4a"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <target dev="tap8adf054c-47"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/console.log" append="off"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:14:14 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:14:14 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:14 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:14 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:14 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.146 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Preparing to wait for external event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.146 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.146 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.147 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.147 253665 DEBUG nova.virt.libvirt.vif [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-726785028',display_name='tempest-ImagesTestJSON-server-726785028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-726785028',id=37,image_ref='9aba632c-c21b-46fd-8be0-d394d1f32c7d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-sz540q5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6c9b56d3-9edf-4e5a-88e4-c0470a193778',image_min_disk='1',image_min_ram='0',image_owner_id='d6a9a80b05bf4bb3acb99c5e55603a36',image_owner_project_name='tempest-ImagesTestJSON-1798612164',image_owner_user_name='tempest-ImagesTestJSON-1798612164-project-member',image_user_id='97872d7ce91947789de976821b771135',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=01e238b6-d7eb-43ed-b69e-507706f9d9f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.148 253665 DEBUG nova.network.os_vif_util [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.148 253665 DEBUG nova.network.os_vif_util [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.149 253665 DEBUG os_vif [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.150 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.150 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8adf054c-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.154 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8adf054c-47, col_values=(('external_ids', {'iface-id': '8adf054c-477f-4973-9b6c-9732286f2337', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:b9:4a', 'vm-uuid': '01e238b6-d7eb-43ed-b69e-507706f9d9f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:14 compute-0 NetworkManager[48920]: <info>  [1763802854.1560] manager: (tap8adf054c-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.168 253665 INFO os_vif [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47')
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.313 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.314 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.314 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:f9:b9:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.315 253665 INFO nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Using config drive
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.445 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2056631065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:14 compute-0 podman[297447]: 2025-11-22 09:14:14.611188197 +0000 UTC m=+2.880478968 container remove a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:14:14 compute-0 systemd[1]: libpod-conmon-a216debce94d47b69ccf78b5473d3a4e47377289f04ef2eb2bf9d4df7cf2d564.scope: Deactivated successfully.
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.748 253665 DEBUG oslo_concurrency.processutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.750 253665 INFO nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deleting local config drive /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/disk.config because it was imported into RBD.
Nov 22 09:14:14 compute-0 NetworkManager[48920]: <info>  [1763802854.8209] manager: (tap8c2fda4f-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Nov 22 09:14:14 compute-0 kernel: tap8c2fda4f-7f: entered promiscuous mode
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 ovn_controller[152872]: 2025-11-22T09:14:14Z|00269|binding|INFO|Claiming lport 8c2fda4f-7fa8-479c-8573-592021820968 for this chassis.
Nov 22 09:14:14 compute-0 ovn_controller[152872]: 2025-11-22T09:14:14Z|00270|binding|INFO|8c2fda4f-7fa8-479c-8573-592021820968: Claiming fa:16:3e:48:a2:dd 10.100.0.7
Nov 22 09:14:14 compute-0 podman[297629]: 2025-11-22 09:14:14.837158461 +0000 UTC m=+0.047169211 container create e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:14:14 compute-0 ovn_controller[152872]: 2025-11-22T09:14:14Z|00271|binding|INFO|Setting lport 8c2fda4f-7fa8-479c-8573-592021820968 ovn-installed in OVS
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.853 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:a2:dd 10.100.0.7'], port_security=['fa:16:3e:48:a2:dd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8c2fda4f-7fa8-479c-8573-592021820968) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:14 compute-0 ovn_controller[152872]: 2025-11-22T09:14:14Z|00272|binding|INFO|Setting lport 8c2fda4f-7fa8-479c-8573-592021820968 up in Southbound
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.855 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2fda4f-7fa8-479c-8573-592021820968 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.857 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7a4277-c4b6-4656-b4e0-189e6f0644c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.876 253665 DEBUG nova.network.neutron [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Updated VIF entry in instance network info cache for port 8adf054c-477f-4973-9b6c-9732286f2337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.877 253665 DEBUG nova.network.neutron [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Updating instance_info_cache with network_info: [{"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:14 compute-0 systemd-udevd[297658]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:14:14 compute-0 systemd[1]: Started libpod-conmon-e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c.scope.
Nov 22 09:14:14 compute-0 systemd-machined[215941]: New machine qemu-41-instance-00000024.
Nov 22 09:14:14 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000024.
Nov 22 09:14:14 compute-0 nova_compute[253661]: 2025-11-22 09:14:14.897 253665 DEBUG oslo_concurrency.lockutils [req-fd7464ef-d265-44eb-83e0-b8a0cea692a8 req-fcd6f051-7e60-4661-b02e-9877fdec5456 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-01e238b6-d7eb-43ed-b69e-507706f9d9f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:14 compute-0 NetworkManager[48920]: <info>  [1763802854.9014] device (tap8c2fda4f-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:14:14 compute-0 NetworkManager[48920]: <info>  [1763802854.9022] device (tap8c2fda4f-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:14:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:14 compute-0 podman[297629]: 2025-11-22 09:14:14.816731199 +0000 UTC m=+0.026741969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a00b1d66d24b7a88554597f4147aef4442a4bf7821999474ce5933119b82b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a00b1d66d24b7a88554597f4147aef4442a4bf7821999474ce5933119b82b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a00b1d66d24b7a88554597f4147aef4442a4bf7821999474ce5933119b82b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a00b1d66d24b7a88554597f4147aef4442a4bf7821999474ce5933119b82b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:14 compute-0 podman[297629]: 2025-11-22 09:14:14.934425612 +0000 UTC m=+0.144436372 container init e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.936 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e9b56-ceaa-4444-80c0-9674f103f3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.941 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1085fa10-126f-45e8-836c-3f9f8312f88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:14 compute-0 podman[297629]: 2025-11-22 09:14:14.944442298 +0000 UTC m=+0.154453048 container start e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:14:14 compute-0 podman[297629]: 2025-11-22 09:14:14.949683767 +0000 UTC m=+0.159694537 container attach e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:14:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:14.972 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[626a42e7-89fe-4ee6-9837-20dab32f4c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.009 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ef326e-78c1-4e2d-a35b-21744351efce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297677, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9ccb37-166f-4535-892c-bfb6d47fa0f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.038 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.044 253665 INFO nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Creating config drive at /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.045 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.046 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.046 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.047 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.050 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy9mn8d_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.206 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy9mn8d_8" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.233 253665 DEBUG nova.storage.rbd_utils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.238 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 257 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 5.3 MiB/s wr, 138 op/s
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.278 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802855.2774718, bf96e20f-af8f-4db3-977f-cee93b1d7934 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.279 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] VM Started (Lifecycle Event)
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.302 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.306 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802855.284645, bf96e20f-af8f-4db3-977f-cee93b1d7934 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.306 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] VM Paused (Lifecycle Event)
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.321 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.324 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.342 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.404 253665 DEBUG oslo_concurrency.processutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config 01e238b6-d7eb-43ed-b69e-507706f9d9f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.404 253665 INFO nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Deleting local config drive /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3/disk.config because it was imported into RBD.
Nov 22 09:14:15 compute-0 kernel: tap8adf054c-47: entered promiscuous mode
Nov 22 09:14:15 compute-0 NetworkManager[48920]: <info>  [1763802855.4545] manager: (tap8adf054c-47): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 22 09:14:15 compute-0 ovn_controller[152872]: 2025-11-22T09:14:15Z|00273|binding|INFO|Claiming lport 8adf054c-477f-4973-9b6c-9732286f2337 for this chassis.
Nov 22 09:14:15 compute-0 ovn_controller[152872]: 2025-11-22T09:14:15Z|00274|binding|INFO|8adf054c-477f-4973-9b6c-9732286f2337: Claiming fa:16:3e:f9:b9:4a 10.100.0.6
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 NetworkManager[48920]: <info>  [1763802855.4748] device (tap8adf054c-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:14:15 compute-0 NetworkManager[48920]: <info>  [1763802855.4761] device (tap8adf054c-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:14:15 compute-0 ovn_controller[152872]: 2025-11-22T09:14:15Z|00275|binding|INFO|Setting lport 8adf054c-477f-4973-9b6c-9732286f2337 ovn-installed in OVS
Nov 22 09:14:15 compute-0 ovn_controller[152872]: 2025-11-22T09:14:15Z|00276|binding|INFO|Setting lport 8adf054c-477f-4973-9b6c-9732286f2337 up in Southbound
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.482 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.482 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:b9:4a 10.100.0.6'], port_security=['fa:16:3e:f9:b9:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '01e238b6-d7eb-43ed-b69e-507706f9d9f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8adf054c-477f-4973-9b6c-9732286f2337) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.483 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8adf054c-477f-4973-9b6c-9732286f2337 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.485 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f44042c6-f7a2-4ca3-9d60-41badcd5ea44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 systemd-machined[215941]: New machine qemu-42-instance-00000025.
Nov 22 09:14:15 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000025.
Nov 22 09:14:15 compute-0 ceph-mon[75021]: pgmap v1473: 305 pgs: 305 active+clean; 252 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Nov 22 09:14:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1043223076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.534 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[673a8957-e021-4056-b56a-1da8ab02a811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.539 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f0ae3-1e61-4669-98c2-938d3053a582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.580 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[297f25ed-8a8f-4194-a716-3c4b95b74b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f11a012d-6fa3-4d00-8a77-a41c516422e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568495, 'reachable_time': 37973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297788, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.612 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f79cfe-3670-4d5d-8084-1f904f2fd6fb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2abeeeb2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568509, 'tstamp': 568509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297790, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2abeeeb2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568513, 'tstamp': 568513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297790, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 nova_compute[253661]: 2025-11-22 09:14:15.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:15.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:16 compute-0 competent_ritchie[297661]: {
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_id": 1,
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "type": "bluestore"
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     },
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_id": 0,
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "type": "bluestore"
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     },
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_id": 2,
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:         "type": "bluestore"
Nov 22 09:14:16 compute-0 competent_ritchie[297661]:     }
Nov 22 09:14:16 compute-0 competent_ritchie[297661]: }
Nov 22 09:14:16 compute-0 systemd[1]: libpod-e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c.scope: Deactivated successfully.
Nov 22 09:14:16 compute-0 systemd[1]: libpod-e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c.scope: Consumed 1.112s CPU time.
Nov 22 09:14:16 compute-0 podman[297629]: 2025-11-22 09:14:16.076761847 +0000 UTC m=+1.286772597 container died e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.087 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802856.0863059, 01e238b6-d7eb-43ed-b69e-507706f9d9f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.088 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] VM Started (Lifecycle Event)
Nov 22 09:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2a00b1d66d24b7a88554597f4147aef4442a4bf7821999474ce5933119b82b2-merged.mount: Deactivated successfully.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.122 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.128 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802856.088289, 01e238b6-d7eb-43ed-b69e-507706f9d9f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.128 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] VM Paused (Lifecycle Event)
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.145 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.150 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:16 compute-0 podman[297629]: 2025-11-22 09:14:16.151993017 +0000 UTC m=+1.362003767 container remove e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:14:16 compute-0 systemd[1]: libpod-conmon-e60e706fdca75b26694790f12c061542c18f14bf512f6ccfe6c1fe39dcb84c7c.scope: Deactivated successfully.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.180 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:16 compute-0 sudo[297366]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:14:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:14:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8b08ab6c-0a28-4da7-9c6e-737d708b7ef1 does not exist
Nov 22 09:14:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0567a002-32ae-49f2-ba2b-ffecd73ad319 does not exist
Nov 22 09:14:16 compute-0 ovn_controller[152872]: 2025-11-22T09:14:16Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:fa:90 10.100.0.13
Nov 22 09:14:16 compute-0 ovn_controller[152872]: 2025-11-22T09:14:16Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:fa:90 10.100.0.13
Nov 22 09:14:16 compute-0 sudo[297871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:14:16 compute-0 sudo[297871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:16 compute-0 sudo[297871]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:16 compute-0 sudo[297896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:14:16 compute-0 sudo[297896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:14:16 compute-0 sudo[297896]: pam_unix(sudo:session): session closed for user root
Nov 22 09:14:16 compute-0 ceph-mon[75021]: pgmap v1474: 305 pgs: 305 active+clean; 257 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 5.3 MiB/s wr, 138 op/s
Nov 22 09:14:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.561 253665 DEBUG nova.compute.manager [req-38002ec2-ff9f-4d15-aad5-8d9ea67c7388 req-8b2d4e03-635b-4560-9b68-06cbcf03d84e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.561 253665 DEBUG oslo_concurrency.lockutils [req-38002ec2-ff9f-4d15-aad5-8d9ea67c7388 req-8b2d4e03-635b-4560-9b68-06cbcf03d84e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.562 253665 DEBUG oslo_concurrency.lockutils [req-38002ec2-ff9f-4d15-aad5-8d9ea67c7388 req-8b2d4e03-635b-4560-9b68-06cbcf03d84e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.562 253665 DEBUG oslo_concurrency.lockutils [req-38002ec2-ff9f-4d15-aad5-8d9ea67c7388 req-8b2d4e03-635b-4560-9b68-06cbcf03d84e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.562 253665 DEBUG nova.compute.manager [req-38002ec2-ff9f-4d15-aad5-8d9ea67c7388 req-8b2d4e03-635b-4560-9b68-06cbcf03d84e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Processing event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.563 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.567 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802856.5670362, 01e238b6-d7eb-43ed-b69e-507706f9d9f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.567 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] VM Resumed (Lifecycle Event)
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.572 253665 DEBUG nova.virt.libvirt.driver [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.576 253665 INFO nova.virt.libvirt.driver [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Instance spawned successfully.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.577 253665 INFO nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Took 8.25 seconds to spawn the instance on the hypervisor.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.577 253665 DEBUG nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.585 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.589 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.614 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.655 253665 INFO nova.compute.manager [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Took 9.27 seconds to build instance.
Nov 22 09:14:16 compute-0 nova_compute[253661]: 2025-11-22 09:14:16.677 253665 DEBUG oslo_concurrency.lockutils [None req-2c678b0b-a231-4a76-bcdf-58912c0a130d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.003 253665 DEBUG nova.compute.manager [req-587110ed-c27f-43ff-97ec-1c0da6ccf52f req-e1e9acd0-162e-44f5-8722-0089eb7ccc14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.003 253665 DEBUG oslo_concurrency.lockutils [req-587110ed-c27f-43ff-97ec-1c0da6ccf52f req-e1e9acd0-162e-44f5-8722-0089eb7ccc14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.003 253665 DEBUG oslo_concurrency.lockutils [req-587110ed-c27f-43ff-97ec-1c0da6ccf52f req-e1e9acd0-162e-44f5-8722-0089eb7ccc14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.004 253665 DEBUG oslo_concurrency.lockutils [req-587110ed-c27f-43ff-97ec-1c0da6ccf52f req-e1e9acd0-162e-44f5-8722-0089eb7ccc14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.004 253665 DEBUG nova.compute.manager [req-587110ed-c27f-43ff-97ec-1c0da6ccf52f req-e1e9acd0-162e-44f5-8722-0089eb7ccc14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Processing event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.005 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.009 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802857.0089056, bf96e20f-af8f-4db3-977f-cee93b1d7934 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.010 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] VM Resumed (Lifecycle Event)
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.012 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.016 253665 INFO nova.virt.libvirt.driver [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance spawned successfully.
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.017 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.028 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.034 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.039 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.039 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.040 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.040 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.041 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.041 253665 DEBUG nova.virt.libvirt.driver [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.061 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.099 253665 INFO nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 12.51 seconds to spawn the instance on the hypervisor.
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.099 253665 DEBUG nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.172 253665 INFO nova.compute.manager [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 13.47 seconds to build instance.
Nov 22 09:14:17 compute-0 nova_compute[253661]: 2025-11-22 09:14:17.187 253665 DEBUG oslo_concurrency.lockutils [None req-321189e8-9011-4b23-9e67-3640dede1dc1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 266 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 5.1 MiB/s wr, 163 op/s
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.161 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.162 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.162 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.162 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.163 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.164 253665 INFO nova.compute.manager [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Terminating instance
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.165 253665 DEBUG nova.compute.manager [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:18 compute-0 kernel: tap8adf054c-47 (unregistering): left promiscuous mode
Nov 22 09:14:18 compute-0 NetworkManager[48920]: <info>  [1763802858.2053] device (tap8adf054c-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:18 compute-0 ovn_controller[152872]: 2025-11-22T09:14:18Z|00277|binding|INFO|Releasing lport 8adf054c-477f-4973-9b6c-9732286f2337 from this chassis (sb_readonly=0)
Nov 22 09:14:18 compute-0 ovn_controller[152872]: 2025-11-22T09:14:18Z|00278|binding|INFO|Setting lport 8adf054c-477f-4973-9b6c-9732286f2337 down in Southbound
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 ovn_controller[152872]: 2025-11-22T09:14:18Z|00279|binding|INFO|Removing iface tap8adf054c-47 ovn-installed in OVS
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.238 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:b9:4a 10.100.0.6'], port_security=['fa:16:3e:f9:b9:4a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '01e238b6-d7eb-43ed-b69e-507706f9d9f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8adf054c-477f-4973-9b6c-9732286f2337) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.240 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8adf054c-477f-4973-9b6c-9732286f2337 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.241 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:14:18 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 22 09:14:18 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Consumed 2.245s CPU time.
Nov 22 09:14:18 compute-0 systemd-machined[215941]: Machine qemu-42-instance-00000025 terminated.
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.260 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32a45557-4333-43d3-a476-415b7793d050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.289 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[649af260-0d41-4849-be1e-d92e10db65c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.292 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[21c81fe6-3f72-465a-b93f-2d73cb270a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.320 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[98d987e9-1e50-4400-a84c-e641782904cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.339 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e227afde-711f-4bbc-a791-9fdec5495378]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568495, 'reachable_time': 37973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297933, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.357 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7f378e04-a864-4bda-9f9f-decc8bff2257]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2abeeeb2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568509, 'tstamp': 568509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297934, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2abeeeb2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568513, 'tstamp': 568513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297934, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.358 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.366 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.366 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.366 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:18.367 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.399 253665 INFO nova.virt.libvirt.driver [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Instance destroyed successfully.
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.400 253665 DEBUG nova.objects.instance [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 01e238b6-d7eb-43ed-b69e-507706f9d9f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.411 253665 DEBUG nova.virt.libvirt.vif [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-726785028',display_name='tempest-ImagesTestJSON-server-726785028',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-726785028',id=37,image_ref='9aba632c-c21b-46fd-8be0-d394d1f32c7d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-sz540q5u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6c9b56d3-9edf-4e5a-88e4-c0470a193778',image_min_disk='1',image_min_ram='0',image_owner_id='d6a9a80b05bf4bb3acb99c5e55603a36',image_owner_project_name='tempest-ImagesTestJSON-1798612164',image_owner_user_name='tempest-ImagesTestJSON-1798612164-project-member',image_user_id='97872d7ce91947789de976821b771135',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:16Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=01e238b6-d7eb-43ed-b69e-507706f9d9f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.412 253665 DEBUG nova.network.os_vif_util [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "8adf054c-477f-4973-9b6c-9732286f2337", "address": "fa:16:3e:f9:b9:4a", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8adf054c-47", "ovs_interfaceid": "8adf054c-477f-4973-9b6c-9732286f2337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.413 253665 DEBUG nova.network.os_vif_util [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.413 253665 DEBUG os_vif [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.416 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8adf054c-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.420 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.423 253665 INFO os_vif [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:b9:4a,bridge_name='br-int',has_traffic_filtering=True,id=8adf054c-477f-4973-9b6c-9732286f2337,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8adf054c-47')
Nov 22 09:14:18 compute-0 ceph-mon[75021]: pgmap v1475: 305 pgs: 305 active+clean; 266 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 5.1 MiB/s wr, 163 op/s
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.651 253665 DEBUG nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.651 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.652 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.652 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.652 253665 DEBUG nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] No waiting events found dispatching network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.652 253665 WARNING nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received unexpected event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 for instance with vm_state active and task_state deleting.
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.652 253665 DEBUG nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-unplugged-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.653 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.653 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.653 253665 DEBUG oslo_concurrency.lockutils [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.653 253665 DEBUG nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] No waiting events found dispatching network-vif-unplugged-8adf054c-477f-4973-9b6c-9732286f2337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.653 253665 DEBUG nova.compute.manager [req-d93a2991-262b-4705-b73f-ba10edf84f53 req-2958fa98-ac37-4acb-a995-bc7c9c4ea423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-unplugged-8adf054c-477f-4973-9b6c-9732286f2337 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.804 253665 INFO nova.virt.libvirt.driver [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Deleting instance files /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3_del
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.805 253665 INFO nova.virt.libvirt.driver [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Deletion of /var/lib/nova/instances/01e238b6-d7eb-43ed-b69e-507706f9d9f3_del complete
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.893 253665 INFO nova.compute.manager [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.893 253665 DEBUG oslo.service.loopingcall [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.894 253665 DEBUG nova.compute.manager [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:18 compute-0 nova_compute[253661]: 2025-11-22 09:14:18.894 253665 DEBUG nova.network.neutron [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.093 253665 DEBUG nova.compute.manager [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.094 253665 DEBUG oslo_concurrency.lockutils [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.094 253665 DEBUG oslo_concurrency.lockutils [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.094 253665 DEBUG oslo_concurrency.lockutils [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.094 253665 DEBUG nova.compute.manager [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.094 253665 WARNING nova.compute.manager [req-fd258ee0-1854-42bb-8b27-3ac054a439bb req-17307838-7186-4a1e-ab00-89fef1e09c49 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 for instance with vm_state active and task_state None.
Nov 22 09:14:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 253 op/s
Nov 22 09:14:19 compute-0 podman[297966]: 2025-11-22 09:14:19.367621081 +0000 UTC m=+0.056612133 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 09:14:19 compute-0 podman[297967]: 2025-11-22 09:14:19.403445491 +0000 UTC m=+0.092653248 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:14:19 compute-0 nova_compute[253661]: 2025-11-22 09:14:19.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.258 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.363 253665 DEBUG nova.network.neutron [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.377 253665 INFO nova.compute.manager [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Took 1.48 seconds to deallocate network for instance.
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.419 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.420 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.520 253665 DEBUG oslo_concurrency.processutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:20 compute-0 ceph-mon[75021]: pgmap v1476: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 253 op/s
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.752 253665 DEBUG nova.compute.manager [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.753 253665 DEBUG oslo_concurrency.lockutils [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.753 253665 DEBUG oslo_concurrency.lockutils [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.753 253665 DEBUG oslo_concurrency.lockutils [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.753 253665 DEBUG nova.compute.manager [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] No waiting events found dispatching network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.753 253665 WARNING nova.compute.manager [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received unexpected event network-vif-plugged-8adf054c-477f-4973-9b6c-9732286f2337 for instance with vm_state deleted and task_state None.
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.754 253665 DEBUG nova.compute.manager [req-5f057588-47fb-40e1-8863-e3b794a0ef0c req-402d2f18-6819-4f06-b8bf-fc9e1eb3f5a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Received event network-vif-deleted-8adf054c-477f-4973-9b6c-9732286f2337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947313015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.987 253665 DEBUG oslo_concurrency.processutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:20 compute-0 nova_compute[253661]: 2025-11-22 09:14:20.992 253665 DEBUG nova.compute.provider_tree [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.010 253665 DEBUG nova.scheduler.client.report [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.035 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.091 253665 INFO nova.scheduler.client.report [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 01e238b6-d7eb-43ed-b69e-507706f9d9f3
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.163 253665 DEBUG oslo_concurrency.lockutils [None req-2179bb0b-dc58-45e9-9a87-0dc9f7b028fb 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "01e238b6-d7eb-43ed-b69e-507706f9d9f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.220 253665 DEBUG nova.compute.manager [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.220 253665 DEBUG nova.compute.manager [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.221 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.221 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.222 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.227 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.227 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:14:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 253 op/s
Nov 22 09:14:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1947313015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.761 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.761 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.762 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:14:21 compute-0 nova_compute[253661]: 2025-11-22 09:14:21.762 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6c9b56d3-9edf-4e5a-88e4-c0470a193778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 22 09:14:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 22 09:14:22 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 22 09:14:22 compute-0 ceph-mon[75021]: pgmap v1477: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 253 op/s
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:22 compute-0 nova_compute[253661]: 2025-11-22 09:14:22.919 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:22 compute-0 nova_compute[253661]: 2025-11-22 09:14:22.919 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 324 op/s
Nov 22 09:14:23 compute-0 podman[298024]: 2025-11-22 09:14:23.380334756 +0000 UTC m=+0.071643862 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.414 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.415 253665 DEBUG nova.compute.manager [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.415 253665 DEBUG nova.compute.manager [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-8c2fda4f-7fa8-479c-8573-592021820968. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.415 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.416 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.416 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:23 compute-0 ceph-mon[75021]: osdmap e213: 3 total, 3 up, 3 in
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.676 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updating instance_info_cache with network_info: [{"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.695 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-6c9b56d3-9edf-4e5a-88e4-c0470a193778" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.696 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.696 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.697 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.697 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.697 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.717 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.719 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.719 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.773 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.774 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.775 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.775 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.775 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.777 253665 INFO nova.compute.manager [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Terminating instance
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.778 253665 DEBUG nova.compute.manager [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:23 compute-0 kernel: tapc1d9117b-dc (unregistering): left promiscuous mode
Nov 22 09:14:23 compute-0 NetworkManager[48920]: <info>  [1763802863.8424] device (tapc1d9117b-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:23 compute-0 ovn_controller[152872]: 2025-11-22T09:14:23Z|00280|binding|INFO|Releasing lport c1d9117b-dc46-4b02-a00a-2d78a8027873 from this chassis (sb_readonly=0)
Nov 22 09:14:23 compute-0 ovn_controller[152872]: 2025-11-22T09:14:23Z|00281|binding|INFO|Setting lport c1d9117b-dc46-4b02-a00a-2d78a8027873 down in Southbound
Nov 22 09:14:23 compute-0 ovn_controller[152872]: 2025-11-22T09:14:23Z|00282|binding|INFO|Removing iface tapc1d9117b-dc ovn-installed in OVS
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.857 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:23.861 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:f0:3f 10.100.0.10'], port_security=['fa:16:3e:b1:f0:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9b56d3-9edf-4e5a-88e4-c0470a193778', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c1d9117b-dc46-4b02-a00a-2d78a8027873) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:23.863 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c1d9117b-dc46-4b02-a00a-2d78a8027873 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:14:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:23.864 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:14:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:23.865 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a6b489-cd2d-4c97-afaf-7c9e5ec8163a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:23.866 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 09:14:23 compute-0 nova_compute[253661]: 2025-11-22 09:14:23.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:23 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 22 09:14:23 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Consumed 14.561s CPU time.
Nov 22 09:14:23 compute-0 systemd-machined[215941]: Machine qemu-39-instance-00000022 terminated.
Nov 22 09:14:24 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [NOTICE]   (295749) : haproxy version is 2.8.14-c23fe91
Nov 22 09:14:24 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [NOTICE]   (295749) : path to executable is /usr/sbin/haproxy
Nov 22 09:14:24 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [WARNING]  (295749) : Exiting Master process...
Nov 22 09:14:24 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [ALERT]    (295749) : Current worker (295753) exited with code 143 (Terminated)
Nov 22 09:14:24 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[295744]: [WARNING]  (295749) : All workers exited. Exiting... (0)
Nov 22 09:14:24 compute-0 systemd[1]: libpod-4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99.scope: Deactivated successfully.
Nov 22 09:14:24 compute-0 podman[298096]: 2025-11-22 09:14:24.010050973 +0000 UTC m=+0.045544530 container died 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Instance destroyed successfully.
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.030 253665 DEBUG nova.objects.instance [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 6c9b56d3-9edf-4e5a-88e4-c0470a193778 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.044 253665 DEBUG nova.virt.libvirt.vif [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1910653746',display_name='tempest-ImagesTestJSON-server-1910653746',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1910653746',id=34,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-qyv3s6hf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:03Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6c9b56d3-9edf-4e5a-88e4-c0470a193778,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.045 253665 DEBUG nova.network.os_vif_util [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "address": "fa:16:3e:b1:f0:3f", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1d9117b-dc", "ovs_interfaceid": "c1d9117b-dc46-4b02-a00a-2d78a8027873", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.046 253665 DEBUG nova.network.os_vif_util [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.046 253665 DEBUG os_vif [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.048 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1d9117b-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99-userdata-shm.mount: Deactivated successfully.
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.052 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe55fc797ec71e81b03d1ddd682b7475efbaba06e45857eb15b78877c4d07b1e-merged.mount: Deactivated successfully.
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.059 253665 INFO os_vif [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:f0:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1d9117b-dc46-4b02-a00a-2d78a8027873,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1d9117b-dc')
Nov 22 09:14:24 compute-0 podman[298096]: 2025-11-22 09:14:24.070226572 +0000 UTC m=+0.105720109 container cleanup 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:14:24 compute-0 systemd[1]: libpod-conmon-4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99.scope: Deactivated successfully.
Nov 22 09:14:24 compute-0 podman[298146]: 2025-11-22 09:14:24.154549165 +0000 UTC m=+0.056269094 container remove 4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.163 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69284a69-5740-4860-b84c-608097346f6d]: (4, ('Sat Nov 22 09:14:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99)\n4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99\nSat Nov 22 09:14:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99)\n4dbd3c1ae951194d9d97c618eece8c77ba9e42da2f94c8ced14160fe34020d99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.165 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd08bbbb-b530-4c7a-8ac7-329c5e8b5115]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:24 compute-0 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.189 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b62dc713-7876-4ea3-843c-715079d7efc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.207 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62a8f72e-4b66-4242-afee-d96f36a963d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e56c4c3c-0d84-438a-906d-413d72d64f86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.231 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1865d237-d601-4da0-86de-80a6d1c61506]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568485, 'reachable_time': 15426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298165, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.236 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:14:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:24.236 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e27c47-699f-48ae-851b-ef5a581267d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911606172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.286 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.368 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.368 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.372 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.372 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.375 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.375 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.512 253665 INFO nova.virt.libvirt.driver [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Deleting instance files /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778_del
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.514 253665 INFO nova.virt.libvirt.driver [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Deletion of /var/lib/nova/instances/6c9b56d3-9edf-4e5a-88e4-c0470a193778_del complete
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.563 253665 INFO nova.compute.manager [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.564 253665 DEBUG oslo.service.loopingcall [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.564 253665 DEBUG nova.compute.manager [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.565 253665 DEBUG nova.network.neutron [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.585 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.586 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3927MB free_disk=59.87632751464844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:24 compute-0 ceph-mon[75021]: pgmap v1479: 305 pgs: 305 active+clean; 293 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 324 op/s
Nov 22 09:14:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/911606172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.653 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 6c9b56d3-9edf-4e5a-88e4-c0470a193778 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.653 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.654 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance bf96e20f-af8f-4db3-977f-cee93b1d7934 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.654 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.654 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.745 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:24 compute-0 nova_compute[253661]: 2025-11-22 09:14:24.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.136 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.137 253665 DEBUG nova.network.neutron [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534243927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.176 253665 DEBUG oslo_concurrency.lockutils [req-63f4ae82-ef4e-46bd-b77f-ae60af1bb502 req-352a83e8-d8a6-4fae-9d6c-01fd330b7582 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.192 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.201 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.218 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.244 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 265 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.0 MiB/s wr, 314 op/s
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.488 253665 DEBUG nova.network.neutron [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.505 253665 INFO nova.compute.manager [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Took 0.94 seconds to deallocate network for instance.
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.545 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.546 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.615 253665 DEBUG oslo_concurrency.processutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/534243927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.680 253665 DEBUG nova.compute.manager [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.681 253665 DEBUG nova.compute.manager [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-8c2fda4f-7fa8-479c-8573-592021820968. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.681 253665 DEBUG oslo_concurrency.lockutils [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.681 253665 DEBUG oslo_concurrency.lockutils [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.681 253665 DEBUG nova.network.neutron [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.775 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.776 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:14:25 compute-0 nova_compute[253661]: 2025-11-22 09:14:25.777 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:14:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611144773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.066 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.066 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.067 253665 DEBUG nova.objects.instance [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.079 253665 DEBUG oslo_concurrency.processutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.090 253665 DEBUG nova.compute.provider_tree [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.101 253665 DEBUG nova.scheduler.client.report [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.119 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.181 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.181 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.188 253665 INFO nova.scheduler.client.report [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 6c9b56d3-9edf-4e5a-88e4-c0470a193778
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.193 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.265 253665 DEBUG oslo_concurrency.lockutils [None req-0fcf1588-505b-4652-af3b-dc991b9bbcf2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6c9b56d3-9edf-4e5a-88e4-c0470a193778" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.269 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.269 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.278 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.278 253665 INFO nova.compute.claims [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:14:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.397 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:26 compute-0 ceph-mon[75021]: pgmap v1480: 305 pgs: 305 active+clean; 265 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.0 MiB/s wr, 314 op/s
Nov 22 09:14:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2611144773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216235117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.846 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.854 253665 DEBUG nova.compute.provider_tree [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.868 253665 DEBUG nova.scheduler.client.report [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.891 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.893 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.943 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.944 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.958 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:14:26 compute-0 nova_compute[253661]: 2025-11-22 09:14:26.974 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.063 253665 DEBUG nova.objects.instance [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.073 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.075 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.075 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Creating image(s)
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.101 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.128 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.156 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.161 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.196 253665 DEBUG nova.policy [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.200 253665 DEBUG nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.238 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.239 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.240 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.240 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 215 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.3 MiB/s wr, 283 op/s
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.262 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.267 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.300 253665 DEBUG nova.network.neutron [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.301 253665 DEBUG nova.network.neutron [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.327 253665 DEBUG oslo_concurrency.lockutils [req-c828332c-3e84-48fa-bbb3-5d351fa1b924 req-923e373d-3da0-45f0-96fd-d53c9a539a2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.588 253665 DEBUG nova.policy [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.640 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2216235117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.716 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.830 253665 DEBUG nova.objects.instance [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.846 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.848 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Ensure instance console log exists: /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.849 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.849 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.849 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:27 compute-0 nova_compute[253661]: 2025-11-22 09:14:27.957 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Successfully created port: 044e2e50-96f0-48f4-aae3-a5fce049c81f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:14:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:27.958 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:28 compute-0 ceph-mon[75021]: pgmap v1481: 305 pgs: 305 active+clean; 215 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.3 MiB/s wr, 283 op/s
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.684 253665 DEBUG nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Successfully updated port: 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.697 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.698 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.698 253665 DEBUG nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.873 253665 WARNING nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.896 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Successfully updated port: 044e2e50-96f0-48f4-aae3-a5fce049c81f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.910 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.910 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:28 compute-0 nova_compute[253661]: 2025-11-22 09:14:28.910 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:29 compute-0 nova_compute[253661]: 2025-11-22 09:14:29.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:29 compute-0 nova_compute[253661]: 2025-11-22 09:14:29.062 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:14:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 09:14:29 compute-0 rsyslogd[1005]: imjournal: 4620 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 09:14:29 compute-0 nova_compute[253661]: 2025-11-22 09:14:29.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.025 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.046 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.047 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance network_info: |[{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.050 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start _get_guest_xml network_info=[{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.056 253665 WARNING nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.062 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.063 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.066 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.067 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.068 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.068 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.070 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.072 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.072 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.075 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.124 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.125 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.125 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.408 253665 DEBUG nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.424 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.425 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.425 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.429 253665 DEBUG nova.virt.libvirt.vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.429 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG os_vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.431 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.432 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d31cb94-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d31cb94-62, col_values=(('external_ids', {'iface-id': '1d31cb94-62b9-4490-a333-cbc7c9ea8f01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ce:ed', 'vm-uuid': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 NetworkManager[48920]: <info>  [1763802870.4368] manager: (tap1d31cb94-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.444 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.445 253665 INFO os_vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.445 253665 DEBUG nova.virt.libvirt.vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.446 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.446 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.449 253665 DEBUG nova.virt.libvirt.guest [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:30 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:14:30 compute-0 kernel: tap1d31cb94-62: entered promiscuous mode
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00283|binding|INFO|Claiming lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for this chassis.
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00284|binding|INFO|1d31cb94-62b9-4490-a333-cbc7c9ea8f01: Claiming fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 09:14:30 compute-0 NetworkManager[48920]: <info>  [1763802870.4680] manager: (tap1d31cb94-62): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.472 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.474 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.475 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00285|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 ovn-installed in OVS
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00286|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 up in Southbound
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98590acf-bc25-4832-89ff-9876d3525658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:a2:dd 10.100.0.7
Nov 22 09:14:30 compute-0 systemd-udevd[298430]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:14:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:30 compute-0 ovn_controller[152872]: 2025-11-22T09:14:30Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:a2:dd 10.100.0.7
Nov 22 09:14:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441735348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:30 compute-0 NetworkManager[48920]: <info>  [1763802870.5373] device (tap1d31cb94-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:14:30 compute-0 NetworkManager[48920]: <info>  [1763802870.5380] device (tap1d31cb94-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.549 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e4ef3-60c8-4ad2-a574-5a0ac7294721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.552 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.554 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4762b3-ed35-4d11-b6e5-1cc9aff11fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.553 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.555 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:0e:fa:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.555 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a2:ce:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.559 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.587 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.594 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d5665640-520c-4d9f-8e7e-f4c403f5df82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.599 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.623 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[893ce54f-0d02-4259-8536-6457bb76d432]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298456, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.653 253665 DEBUG nova.virt.libvirt.guest [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 09:14:30 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:30 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:30 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:30 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:30 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:30 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.654 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6db33cb3-9ac2-46ac-877b-9d83c1b08321]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298458, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298458, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.656 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.662 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:30 compute-0 ceph-mon[75021]: pgmap v1482: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 09:14:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3441735348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:30 compute-0 nova_compute[253661]: 2025-11-22 09:14:30.687 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:14:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120644826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.100 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.101 253665 DEBUG nova.virt.libvirt.vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:27Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.102 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.103 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.104 253665 DEBUG nova.objects.instance [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.116 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <uuid>264036ef-37a3-4681-9c7a-9dc70c4b5282</uuid>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <name>instance-00000026</name>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesTestJSON-server-1900630937</nova:name>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <nova:port uuid="044e2e50-96f0-48f4-aae3-a5fce049c81f">
Nov 22 09:14:31 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="serial">264036ef-37a3-4681-9c7a-9dc70c4b5282</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="uuid">264036ef-37a3-4681-9c7a-9dc70c4b5282</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk">
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config">
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:14:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c2:c1:a7"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <target dev="tap044e2e50-96"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/console.log" append="off"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:14:31 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:14:31 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:31 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:31 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:31 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.116 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Preparing to wait for external event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.118 253665 DEBUG nova.virt.libvirt.vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:27Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.118 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.119 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.119 253665 DEBUG os_vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.120 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.120 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.121 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap044e2e50-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap044e2e50-96, col_values=(('external_ids', {'iface-id': '044e2e50-96f0-48f4-aae3-a5fce049c81f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:c1:a7', 'vm-uuid': '264036ef-37a3-4681-9c7a-9dc70c4b5282'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:31 compute-0 NetworkManager[48920]: <info>  [1763802871.1285] manager: (tap044e2e50-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.133 253665 INFO os_vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96')
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.196 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.196 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.197 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:c2:c1:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.197 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Using config drive
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.217 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 09:14:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 22 09:14:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 22 09:14:31 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.550 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Creating config drive at /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.561 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4k3rffo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/120644826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:14:31 compute-0 ceph-mon[75021]: osdmap e214: 3 total, 3 up, 3 in
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.681 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.682 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.698 253665 DEBUG nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.699 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.699 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 DEBUG nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 WARNING nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.702 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.702 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received event network-vif-deleted-c1d9117b-dc46-4b02-a00a-2d78a8027873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.722 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4k3rffo" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.747 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.751 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.901 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.902 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deleting local config drive /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config because it was imported into RBD.
Nov 22 09:14:31 compute-0 kernel: tap044e2e50-96: entered promiscuous mode
Nov 22 09:14:31 compute-0 NetworkManager[48920]: <info>  [1763802871.9576] manager: (tap044e2e50-96): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 ovn_controller[152872]: 2025-11-22T09:14:31Z|00287|binding|INFO|Claiming lport 044e2e50-96f0-48f4-aae3-a5fce049c81f for this chassis.
Nov 22 09:14:31 compute-0 ovn_controller[152872]: 2025-11-22T09:14:31Z|00288|binding|INFO|044e2e50-96f0-48f4-aae3-a5fce049c81f: Claiming fa:16:3e:c2:c1:a7 10.100.0.6
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.966 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:c1:a7 10.100.0.6'], port_security=['fa:16:3e:c2:c1:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '264036ef-37a3-4681-9c7a-9dc70c4b5282', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=044e2e50-96f0-48f4-aae3-a5fce049c81f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.967 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 044e2e50-96f0-48f4-aae3-a5fce049c81f in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis
Nov 22 09:14:31 compute-0 NetworkManager[48920]: <info>  [1763802871.9691] device (tap044e2e50-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.969 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:14:31 compute-0 NetworkManager[48920]: <info>  [1763802871.9714] device (tap044e2e50-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:14:31 compute-0 ovn_controller[152872]: 2025-11-22T09:14:31Z|00289|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f ovn-installed in OVS
Nov 22 09:14:31 compute-0 ovn_controller[152872]: 2025-11-22T09:14:31Z|00290|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f up in Southbound
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.979 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.981 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07c01ced-68c5-48f6-95a2-b1fe49845e63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.982 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:14:31 compute-0 nova_compute[253661]: 2025-11-22 09:14:31.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.984 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.984 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d01d9234-6e57-4172-bbf2-0900be8486b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.985 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be0aee78-8b12-493a-a83a-8d1c774901af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.998 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[da69d23a-0e97-47ab-8006-dc733f5caf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 systemd-machined[215941]: New machine qemu-43-instance-00000026.
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc05cc9e-b396-43e1-bd86-b5e27f72cf77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000026.
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.054 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8c2d04-5ffd-4ea6-8039-a6a541361978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.060 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2d2cfe-6d5a-4e70-ac4c-747afc7c28cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 NetworkManager[48920]: <info>  [1763802872.0615] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.098 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8b51e2-ba7c-474e-a7bd-4d1ea429d664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[950296dc-15b3-408f-82a4-09d04dd51412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 NetworkManager[48920]: <info>  [1763802872.1299] device (tap2abeeeb2-20): carrier: link connected
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.137 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2f14c9-c4aa-4db8-af6d-e553b26855ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82e7a47e-4886-4d3c-858c-d1bbbfabc79a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572286, 'reachable_time': 29754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298582, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.171 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b911db27-a1f6-453a-95f9-7c218b3614c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572286, 'tstamp': 572286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298583, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.188 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb160e6-c76b-4df0-95c7-9ed8f15c6741]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572286, 'reachable_time': 29754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298584, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e9486a-5c52-42f4-b789-d00620a6c371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.300 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43756ee3-337a-4798-9473-9c85c94c2fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:32 compute-0 NetworkManager[48920]: <info>  [1763802872.3049] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 22 09:14:32 compute-0 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.308 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00291|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.327 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f94375-4586-4aff-bb75-d9094ebc9267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.329 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.330 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.432 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802872.4320843, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.433 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Started (Lifecycle Event)
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.449 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.454 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802872.4322066, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.454 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Paused (Lifecycle Event)
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.468 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.485 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:32 compute-0 podman[298658]: 2025-11-22 09:14:32.678504548 +0000 UTC m=+0.027386264 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.779 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.780 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:32 compute-0 ceph-mon[75021]: pgmap v1483: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.794 253665 DEBUG nova.objects.instance [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.816 253665 DEBUG nova.virt.libvirt.vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.817 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.817 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.821 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.823 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.825 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.826 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:32 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.828 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.852 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='40'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <name>instance-00000023</name>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <uuid>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</uuid>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='serial'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='uuid'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk' index='2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config' index='1'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:0e:fa:90'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='tap250740a7-72'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a2:ce:ed'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='tap1d31cb94-62'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source path='/dev/pts/3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </target>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/3'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source path='/dev/pts/3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </console>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c50,c423</label>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c423</imagelabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:32 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 INFO nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the persistent domain config.
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tap1d31cb94-62 with device alias net1 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.856 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:32 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:14:32 compute-0 podman[298658]: 2025-11-22 09:14:32.897935291 +0000 UTC m=+0.246816987 container create a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:14:32 compute-0 systemd[1]: Started libpod-conmon-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope.
Nov 22 09:14:32 compute-0 kernel: tap1d31cb94-62 (unregistering): left promiscuous mode
Nov 22 09:14:32 compute-0 NetworkManager[48920]: <info>  [1763802872.9655] device (tap1d31cb94-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00292|binding|INFO|Releasing lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from this chassis (sb_readonly=0)
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00293|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 down in Southbound
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 ovn_controller[152872]: 2025-11-22T09:14:32Z|00294|binding|INFO|Removing iface tap1d31cb94-62 ovn-installed in OVS
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.977 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802872.977399, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.980 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tap1d31cb94-62 with device alias net1 for instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.980 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef65a135aef4d51b10b011a564893fe453cc14eebdbfd2c6f1ff1ad066c4d17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:14:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.984 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.985 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='40'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <name>instance-00000023</name>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <uuid>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</uuid>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='serial'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='uuid'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk' index='2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config' index='1'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:0e:fa:90'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target dev='tap250740a7-72'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source path='/dev/pts/3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       </target>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/3'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <source path='/dev/pts/3'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </console>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c50,c423</label>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c423</imagelabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:32 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 INFO nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the live domain config.
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 DEBUG nova.virt.libvirt.vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.987 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.987 253665 DEBUG os_vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.989 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:32 compute-0 podman[298658]: 2025-11-22 09:14:32.993664034 +0000 UTC m=+0.342545750 container init a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.994 253665 INFO os_vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')
Nov 22 09:14:32 compute-0 nova_compute[253661]: 2025-11-22 09:14:32.995 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:32</nova:creationTime>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 09:14:32 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:14:32 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:32 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:32 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:32 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:14:33 compute-0 podman[298658]: 2025-11-22 09:14:33.000986164 +0000 UTC m=+0.349867870 container start a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:14:33 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : New worker (298682) forked
Nov 22 09:14:33 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : Loading success.
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.082 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5c7611-80f1-451b-a157-45a9cc99b71c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.146 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4386d293-8fab-47bb-8764-1cfae1175935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.149 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f19debe-8d43-4df2-b1dd-a28d5335e9dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.188 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92dbcd80-1a6a-40e8-84fb-4c7bcf2ca7ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ac1daa-b472-43a6-b669-5ab03b202238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298696, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f65338-3084-4db5-aed4-0dd1380427b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298697, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298697, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.236 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.241 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.242 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.243 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.243 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 223 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 3.5 MiB/s wr, 124 op/s
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.398 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802858.3972633, 01e238b6-d7eb-43ed-b69e-507706f9d9f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.399 253665 INFO nova.compute.manager [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] VM Stopped (Lifecycle Event)
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.421 253665 DEBUG nova.compute.manager [None req-801756db-d25a-4d88-834d-3c2f7dba887d - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.831 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.831 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.832 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.832 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Processing event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received unexpected event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with vm_state building and task_state spawning.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.838 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.841 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802873.8416646, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.842 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Resumed (Lifecycle Event)
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.843 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.846 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance spawned successfully.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.846 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.865 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.867 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.869 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.869 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.891 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.922 253665 INFO nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 6.85 seconds to spawn the instance on the hypervisor.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.923 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.984 253665 INFO nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 7.73 seconds to build instance.
Nov 22 09:14:33 compute-0 nova_compute[253661]: 2025-11-22 09:14:33.999 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.405 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.406 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.425 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-changed-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Refreshing instance network info cache due to event network-changed-044e2e50-96f0-48f4-aae3-a5fce049c81f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.427 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Refreshing network info cache for port 044e2e50-96f0-48f4-aae3-a5fce049c81f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.703 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.704 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.704 253665 DEBUG nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:34 compute-0 ceph-mon[75021]: pgmap v1485: 305 pgs: 305 active+clean; 223 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 3.5 MiB/s wr, 124 op/s
Nov 22 09:14:34 compute-0 nova_compute[253661]: 2025-11-22 09:14:34.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.210 253665 DEBUG nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.252 253665 INFO nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] instance snapshotting
Nov 22 09:14:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.480 253665 INFO nova.virt.libvirt.driver [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Beginning live snapshot process
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.623 253665 DEBUG nova.virt.libvirt.imagebackend [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.709 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updated VIF entry in instance network info cache for port 044e2e50-96f0-48f4-aae3-a5fce049c81f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.710 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.727 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.811 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(4b0cf3baa55b4bd3be59ef54adda1e52) on rbd image(264036ef-37a3-4681-9c7a-9dc70c4b5282_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.920 253665 INFO nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.920 253665 DEBUG nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.935 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:35 compute-0 nova_compute[253661]: 2025-11-22 09:14:35.953 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 22 09:14:36 compute-0 ceph-mon[75021]: pgmap v1486: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Nov 22 09:14:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 22 09:14:36 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 22 09:14:36 compute-0 nova_compute[253661]: 2025-11-22 09:14:36.875 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk@4b0cf3baa55b4bd3be59ef54adda1e52 to images/6c59e9c0-6dc8-47c7-8319-04839c1264af clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:14:36 compute-0 nova_compute[253661]: 2025-11-22 09:14:36.983 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/6c59e9c0-6dc8-47c7-8319-04839c1264af flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:14:37 compute-0 nova_compute[253661]: 2025-11-22 09:14:37.221 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(4b0cf3baa55b4bd3be59ef54adda1e52) on rbd image(264036ef-37a3-4681-9c7a-9dc70c4b5282_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:14:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 172 op/s
Nov 22 09:14:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 22 09:14:37 compute-0 ceph-mon[75021]: osdmap e215: 3 total, 3 up, 3 in
Nov 22 09:14:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 22 09:14:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 22 09:14:37 compute-0 nova_compute[253661]: 2025-11-22 09:14:37.868 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(6c59e9c0-6dc8-47c7-8319-04839c1264af) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:14:37 compute-0 nova_compute[253661]: 2025-11-22 09:14:37.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.358 253665 DEBUG nova.compute.manager [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.358 253665 DEBUG nova.compute.manager [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:38 compute-0 ovn_controller[152872]: 2025-11-22T09:14:38Z|00295|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:38 compute-0 ovn_controller[152872]: 2025-11-22T09:14:38Z|00296|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 22 09:14:38 compute-0 ceph-mon[75021]: pgmap v1488: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 172 op/s
Nov 22 09:14:38 compute-0 ceph-mon[75021]: osdmap e216: 3 total, 3 up, 3 in
Nov 22 09:14:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 22 09:14:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 6c59e9c0-6dc8-47c7-8319-04839c1264af
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.
Nov 22 09:14:38 compute-0 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver 
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.020 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802864.0191896, 6c9b56d3-9edf-4e5a-88e4-c0470a193778 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.020 253665 INFO nova.compute.manager [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] VM Stopped (Lifecycle Event)
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.022 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(snap) on rbd image(6c59e9c0-6dc8-47c7-8319-04839c1264af) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.035 253665 DEBUG nova.compute.manager [None req-fb63d959-75b7-4d34-b103-1f68f09f5265 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.043 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.044 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.044 253665 DEBUG nova.objects.instance [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Nov 22 09:14:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 22 09:14:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 22 09:14:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 22 09:14:39 compute-0 ceph-mon[75021]: osdmap e217: 3 total, 3 up, 3 in
Nov 22 09:14:39 compute-0 nova_compute[253661]: 2025-11-22 09:14:39.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.223 253665 WARNING nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Image not found during snapshot: nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.290 253665 DEBUG nova.objects.instance [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.302 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.599 253665 DEBUG nova.policy [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.768 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.769 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:40 compute-0 nova_compute[253661]: 2025-11-22 09:14:40.795 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:40 compute-0 ceph-mon[75021]: pgmap v1491: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Nov 22 09:14:40 compute-0 ceph-mon[75021]: osdmap e218: 3 total, 3 up, 3 in
Nov 22 09:14:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.0 MiB/s wr, 194 op/s
Nov 22 09:14:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.060 253665 DEBUG nova.compute.manager [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.061 253665 DEBUG nova.compute.manager [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-8c2fda4f-7fa8-479c-8573-592021820968. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.061 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.062 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.062 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.251 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.251 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.330 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.333 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.394 253665 INFO nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Terminating instance
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.396 253665 DEBUG nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.398 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Successfully updated port: 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.417 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:42 compute-0 kernel: tap044e2e50-96 (unregistering): left promiscuous mode
Nov 22 09:14:42 compute-0 NetworkManager[48920]: <info>  [1763802882.4359] device (tap044e2e50-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 ovn_controller[152872]: 2025-11-22T09:14:42Z|00297|binding|INFO|Releasing lport 044e2e50-96f0-48f4-aae3-a5fce049c81f from this chassis (sb_readonly=0)
Nov 22 09:14:42 compute-0 ovn_controller[152872]: 2025-11-22T09:14:42Z|00298|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f down in Southbound
Nov 22 09:14:42 compute-0 ovn_controller[152872]: 2025-11-22T09:14:42Z|00299|binding|INFO|Removing iface tap044e2e50-96 ovn-installed in OVS
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:c1:a7 10.100.0.6'], port_security=['fa:16:3e:c2:c1:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '264036ef-37a3-4681-9c7a-9dc70c4b5282', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=044e2e50-96f0-48f4-aae3-a5fce049c81f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 044e2e50-96f0-48f4-aae3-a5fce049c81f in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.451 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc511aa3-8772-4a6b-929e-da1dee67de6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.452 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 22 09:14:42 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Consumed 9.118s CPU time.
Nov 22 09:14:42 compute-0 systemd-machined[215941]: Machine qemu-43-instance-00000026 terminated.
Nov 22 09:14:42 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : haproxy version is 2.8.14-c23fe91
Nov 22 09:14:42 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : path to executable is /usr/sbin/haproxy
Nov 22 09:14:42 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [WARNING]  (298680) : Exiting Master process...
Nov 22 09:14:42 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [ALERT]    (298680) : Current worker (298682) exited with code 143 (Terminated)
Nov 22 09:14:42 compute-0 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [WARNING]  (298680) : All workers exited. Exiting... (0)
Nov 22 09:14:42 compute-0 systemd[1]: libpod-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope: Deactivated successfully.
Nov 22 09:14:42 compute-0 podman[298901]: 2025-11-22 09:14:42.589122052 +0000 UTC m=+0.047936819 container died a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09-userdata-shm.mount: Deactivated successfully.
Nov 22 09:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef65a135aef4d51b10b011a564893fe453cc14eebdbfd2c6f1ff1ad066c4d17-merged.mount: Deactivated successfully.
Nov 22 09:14:42 compute-0 podman[298901]: 2025-11-22 09:14:42.627642818 +0000 UTC m=+0.086457565 container cleanup a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.648 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance destroyed successfully.
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.648 253665 DEBUG nova.objects.instance [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:42 compute-0 systemd[1]: libpod-conmon-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope: Deactivated successfully.
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.660 253665 DEBUG nova.virt.libvirt.vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:40Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.661 253665 DEBUG nova.network.os_vif_util [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.662 253665 DEBUG nova.network.os_vif_util [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.662 253665 DEBUG os_vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.664 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap044e2e50-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.672 253665 INFO os_vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96')
Nov 22 09:14:42 compute-0 podman[298935]: 2025-11-22 09:14:42.7061841 +0000 UTC m=+0.051514508 container remove a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7a7c6b-4c04-4baf-859b-9981b63d9d64]: (4, ('Sat Nov 22 09:14:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09)\na5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09\nSat Nov 22 09:14:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09)\na5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.719 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76632135-e46c-4b2e-924e-0cff8732d4b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.720 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 09:14:42 compute-0 nova_compute[253661]: 2025-11-22 09:14:42.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8f9e8b-f385-430c-98c3-ac35b6cd7afd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.768 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4afefb-2adb-4da1-8b18-f9895b7018b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.770 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc972ba-85d4-421b-ab68-6cda45435078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e948d0c-09d5-453e-942c-d6000e9b6a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572278, 'reachable_time': 41172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298973, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.796 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:14:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 09:14:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.796 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1a07d7ad-ce66-44b1-9103-aa7e55879669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:42 compute-0 ceph-mon[75021]: pgmap v1493: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.0 MiB/s wr, 194 op/s
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.100 253665 INFO nova.virt.libvirt.driver [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deleting instance files /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.101 253665 INFO nova.virt.libvirt.driver [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deletion of /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del complete
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.148 253665 DEBUG nova.compute.manager [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.148 253665 DEBUG nova.compute.manager [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.149 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.168 253665 INFO nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.169 253665 DEBUG oslo.service.loopingcall [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.169 253665 DEBUG nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.170 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.939 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.940 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.961 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.962 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:43 compute-0 nova_compute[253661]: 2025-11-22 09:14:43.963 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.226 253665 WARNING nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.276 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.297 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 1.13 seconds to deallocate network for instance.
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.347 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.348 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.446 253665 DEBUG oslo_concurrency.processutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.496 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.543 253665 DEBUG nova.compute.manager [req-cfc6a970-35ad-4d53-ab67-0c3447ddfca4 req-32584332-bfc5-4704-b7d0-d72cf660b31f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-deleted-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463990540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.861 253665 DEBUG oslo_concurrency.processutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.867 253665 DEBUG nova.compute.provider_tree [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.881 253665 DEBUG nova.scheduler.client.report [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:44 compute-0 ceph-mon[75021]: pgmap v1494: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Nov 22 09:14:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3463990540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.899 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.924 253665 INFO nova.scheduler.client.report [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 264036ef-37a3-4681-9c7a-9dc70c4b5282
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.989 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.992 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.993 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Terminating instance
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.993 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.994 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:44 compute-0 nova_compute[253661]: 2025-11-22 09:14:44.994 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.014 253665 DEBUG nova.compute.utils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Nov 22 09:14:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 219 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.306 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.521 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.523 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.524 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.525 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.525 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.526 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.526 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.527 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.528 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.529 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.530 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.530 253665 WARNING nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received unexpected event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with vm_state deleted and task_state deleting.
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.584 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.602 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.603 253665 DEBUG nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.610 253665 DEBUG nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.610 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance destroyed successfully.
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.611 253665 DEBUG nova.objects.instance [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.644 253665 INFO nova.virt.libvirt.driver [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deletion of /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del complete
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.684 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.08 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG oslo.service.loopingcall [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.832 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.844 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.860 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.17 seconds to deallocate network for instance.
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.889 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance disappeared during terminate
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.890 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.960 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.982 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.984 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.984 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.987 253665 DEBUG nova.virt.libvirt.vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.987 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.988 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG os_vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.990 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d31cb94-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:45 compute-0 nova_compute[253661]: 2025-11-22 09:14:45.994 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d31cb94-62, col_values=(('external_ids', {'iface-id': '1d31cb94-62b9-4490-a333-cbc7c9ea8f01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ce:ed', 'vm-uuid': 'bf96e20f-af8f-4db3-977f-cee93b1d7934'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:46 compute-0 NetworkManager[48920]: <info>  [1763802886.0204] manager: (tap1d31cb94-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.025 253665 INFO os_vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.026 253665 DEBUG nova.virt.libvirt.vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.028 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.029 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.032 253665 DEBUG nova.virt.libvirt.guest [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:46 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:14:46 compute-0 kernel: tap1d31cb94-62: entered promiscuous mode
Nov 22 09:14:46 compute-0 NetworkManager[48920]: <info>  [1763802886.0453] manager: (tap1d31cb94-62): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Nov 22 09:14:46 compute-0 ovn_controller[152872]: 2025-11-22T09:14:46Z|00300|binding|INFO|Claiming lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for this chassis.
Nov 22 09:14:46 compute-0 ovn_controller[152872]: 2025-11-22T09:14:46Z|00301|binding|INFO|1d31cb94-62b9-4490-a333-cbc7c9ea8f01: Claiming fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.053 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.056 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:46 compute-0 ovn_controller[152872]: 2025-11-22T09:14:46Z|00302|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 ovn-installed in OVS
Nov 22 09:14:46 compute-0 ovn_controller[152872]: 2025-11-22T09:14:46Z|00303|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 up in Southbound
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.074 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d272a25-cb4b-451e-a15a-8da5db0a0866]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 systemd-udevd[299021]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.108 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[529d858c-06d0-4229-a61e-9bfbe5eaa0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.114 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b2c9da-6826-4ab0-bdf9-0a9d1da1fb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 NetworkManager[48920]: <info>  [1763802886.1159] device (tap1d31cb94-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:14:46 compute-0 NetworkManager[48920]: <info>  [1763802886.1178] device (tap1d31cb94-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:48:a2:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.127 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a2:ce:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.150 253665 DEBUG nova.virt.libvirt.guest [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 09:14:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:46 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:46 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:46 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:46 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:46 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.152 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[29315ca6-d01f-4eaf-84c7-332fc497860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.172 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d212591-0631-4c12-9df8-9e94bc2e721b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299026, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c7397-4012-4b86-9bad-23334e9beb98]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.197 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 nova_compute[253661]: 2025-11-22 09:14:46.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 22 09:14:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 22 09:14:46 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 22 09:14:46 compute-0 ceph-mon[75021]: pgmap v1495: 305 pgs: 305 active+clean; 219 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Nov 22 09:14:46 compute-0 ceph-mon[75021]: osdmap e219: 3 total, 3 up, 3 in
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.092 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.093 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.107 253665 DEBUG nova.objects.instance [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.125 253665 DEBUG nova.virt.libvirt.vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.126 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.127 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.129 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.132 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.135 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.135 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:47 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.152 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.157 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <name>instance-00000024</name>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <uuid>bf96e20f-af8f-4db3-977f-cee93b1d7934</uuid>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='serial'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='uuid'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk' index='2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config' index='1'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:48:a2:dd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='tap8c2fda4f-7f'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:a2:ce:ed'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='tap1d31cb94-62'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </target>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </console>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c565,c929</label>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c929</imagelabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:47 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.157 253665 INFO nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the persistent domain config.
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.159 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tap1d31cb94-62 with device alias net1 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.159 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <target dev="tap1d31cb94-62"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </interface>
Nov 22 09:14:47 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:14:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 200 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1001 KiB/s wr, 103 op/s
Nov 22 09:14:47 compute-0 kernel: tap1d31cb94-62 (unregistering): left promiscuous mode
Nov 22 09:14:47 compute-0 NetworkManager[48920]: <info>  [1763802887.2738] device (tap1d31cb94-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:47 compute-0 ovn_controller[152872]: 2025-11-22T09:14:47Z|00304|binding|INFO|Releasing lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from this chassis (sb_readonly=0)
Nov 22 09:14:47 compute-0 ovn_controller[152872]: 2025-11-22T09:14:47Z|00305|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 down in Southbound
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 ovn_controller[152872]: 2025-11-22T09:14:47Z|00306|binding|INFO|Removing iface tap1d31cb94-62 ovn-installed in OVS
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.288 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802887.2880626, bf96e20f-af8f-4db3-977f-cee93b1d7934 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.289 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tap1d31cb94-62 with device alias net1 for instance bf96e20f-af8f-4db3-977f-cee93b1d7934 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.290 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.292 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.294 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.295 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <name>instance-00000024</name>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <uuid>bf96e20f-af8f-4db3-977f-cee93b1d7934</uuid>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <system>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='serial'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='uuid'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </system>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <os>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </os>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <features>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </features>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk' index='2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config' index='1'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:48:a2:dd'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target dev='tap8c2fda4f-7f'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       </target>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </console>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </input>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <video>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </video>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c565,c929</label>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c929</imagelabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </domain>
Nov 22 09:14:47 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.297 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.296 253665 INFO nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the live domain config.
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.296 253665 DEBUG nova.virt.libvirt.vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.297 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.298 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.298 253665 DEBUG os_vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.301 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.308 253665 INFO os_vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.309 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:14:47</nova:creationTime>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 09:14:47 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:14:47 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:14:47 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:14:47 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:14:47 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a881ca58-05d5-444a-884e-85c594b59a70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.367 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c553f1b-0bc1-4df2-a797-894e68ef890f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.371 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[995cfc05-b8cb-4a20-9d97-cc3d205ce78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.410 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b23cdb-a8f3-44d5-a31c-f4710383d267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.430 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f84650be-9253-40b3-a219-4e7ccd6684a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299037, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.446 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01dc51e4-8b93-4d9d-9e94-c26400a3f327]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299038, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299038, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.449 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.453 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.453 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.583 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 WARNING nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.586 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:47 compute-0 nova_compute[253661]: 2025-11-22 09:14:47.586 253665 WARNING nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:48 compute-0 ceph-mon[75021]: pgmap v1497: 305 pgs: 305 active+clean; 200 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1001 KiB/s wr, 103 op/s
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.048 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.049 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.063 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 852 KiB/s wr, 88 op/s
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.667 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 WARNING nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.670 253665 WARNING nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.962 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.962 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:14:49 compute-0 nova_compute[253661]: 2025-11-22 09:14:49.963 253665 DEBUG nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:14:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:50.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:50 compute-0 podman[299040]: 2025-11-22 09:14:50.367797667 +0000 UTC m=+0.061413290 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:14:50 compute-0 podman[299039]: 2025-11-22 09:14:50.389296305 +0000 UTC m=+0.082809356 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:50 compute-0 ceph-mon[75021]: pgmap v1498: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 852 KiB/s wr, 88 op/s
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.994 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.995 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.995 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.996 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.996 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.997 253665 INFO nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Terminating instance
Nov 22 09:14:50 compute-0 nova_compute[253661]: 2025-11-22 09:14:50.998 253665 DEBUG nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:51 compute-0 kernel: tap8c2fda4f-7f (unregistering): left promiscuous mode
Nov 22 09:14:51 compute-0 NetworkManager[48920]: <info>  [1763802891.0478] device (tap8c2fda4f-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:51 compute-0 ovn_controller[152872]: 2025-11-22T09:14:51Z|00307|binding|INFO|Releasing lport 8c2fda4f-7fa8-479c-8573-592021820968 from this chassis (sb_readonly=0)
Nov 22 09:14:51 compute-0 ovn_controller[152872]: 2025-11-22T09:14:51Z|00308|binding|INFO|Setting lport 8c2fda4f-7fa8-479c-8573-592021820968 down in Southbound
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 ovn_controller[152872]: 2025-11-22T09:14:51Z|00309|binding|INFO|Removing iface tap8c2fda4f-7f ovn-installed in OVS
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.062 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:a2:dd 10.100.0.7'], port_security=['fa:16:3e:48:a2:dd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8c2fda4f-7fa8-479c-8573-592021820968) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.064 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2fda4f-7fa8-479c-8573-592021820968 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.065 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7c196c-6ee6-4f9e-a0ed-71e674c00d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 22 09:14:51 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Consumed 14.502s CPU time.
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.110 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dc60ee3b-0c1c-4a3e-87cf-e5a44145d5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 systemd-machined[215941]: Machine qemu-41-instance-00000024 terminated.
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.114 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[579fc746-9feb-4585-b44a-6a425fed0b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.150 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d09a195-ff6a-4b54-89a6-49261b3a6ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57a4c176-ba47-412e-9640-13b3eb870a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299090, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfac1866-1073-4f2d-8853-ba5ead78f1d6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299091, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299091, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.192 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.200 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.202 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.202 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.235 253665 INFO nova.virt.libvirt.driver [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance destroyed successfully.
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.235 253665 DEBUG nova.objects.instance [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.250 253665 DEBUG nova.virt.libvirt.vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.252 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.253 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.253 253665 DEBUG os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.255 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c2fda4f-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.259 253665 INFO os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f')
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.260 253665 DEBUG nova.virt.libvirt.vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.260 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.261 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.261 253665 DEBUG os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.264 253665 INFO os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')
Nov 22 09:14:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 998 KiB/s rd, 803 KiB/s wr, 83 op/s
Nov 22 09:14:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.328407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891328455, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2292, "num_deletes": 518, "total_data_size": 2935068, "memory_usage": 2982248, "flush_reason": "Manual Compaction"}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891342674, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1911362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28100, "largest_seqno": 30391, "table_properties": {"data_size": 1903314, "index_size": 4098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 23405, "raw_average_key_size": 20, "raw_value_size": 1883679, "raw_average_value_size": 1646, "num_data_blocks": 181, "num_entries": 1144, "num_filter_entries": 1144, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802730, "oldest_key_time": 1763802730, "file_creation_time": 1763802891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14322 microseconds, and 5975 cpu microseconds.
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.342724) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1911362 bytes OK
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.342747) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345205) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345217) EVENT_LOG_v1 {"time_micros": 1763802891345213, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2924170, prev total WAL file size 2924170, number of live WAL files 2.
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.346369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1866KB)], [62(8829KB)]
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891346439, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10952927, "oldest_snapshot_seqno": -1}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5491 keys, 8517169 bytes, temperature: kUnknown
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891392298, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8517169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8479703, "index_size": 22636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 137498, "raw_average_key_size": 25, "raw_value_size": 8380109, "raw_average_value_size": 1526, "num_data_blocks": 929, "num_entries": 5491, "num_filter_entries": 5491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.392625) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8517169 bytes
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.394403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.3 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 6460, records dropped: 969 output_compression: NoCompression
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.394421) EVENT_LOG_v1 {"time_micros": 1763802891394413, "job": 34, "event": "compaction_finished", "compaction_time_micros": 45967, "compaction_time_cpu_micros": 24870, "output_level": 6, "num_output_files": 1, "total_output_size": 8517169, "num_input_records": 6460, "num_output_records": 5491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891394863, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891396239, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.346215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.687 253665 INFO nova.virt.libvirt.driver [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deleting instance files /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934_del
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.688 253665 INFO nova.virt.libvirt.driver [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deletion of /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934_del complete
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.708 253665 INFO nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.709 253665 DEBUG nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.749 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.758 253665 INFO nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.758 253665 DEBUG oslo.service.loopingcall [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.759 253665 DEBUG nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.759 253665 DEBUG nova.network.neutron [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:51 compute-0 nova_compute[253661]: 2025-11-22 09:14:51.782 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:14:52
Nov 22 09:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups']
Nov 22 09:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:14:53 compute-0 ceph-mon[75021]: pgmap v1499: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 998 KiB/s rd, 803 KiB/s wr, 83 op/s
Nov 22 09:14:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 8.5 KiB/s wr, 63 op/s
Nov 22 09:14:53 compute-0 ovn_controller[152872]: 2025-11-22T09:14:53Z|00310|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:53 compute-0 ovn_controller[152872]: 2025-11-22T09:14:53Z|00311|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.849 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.850 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:53 compute-0 nova_compute[253661]: 2025-11-22 09:14:53.852 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:14:54 compute-0 ceph-mon[75021]: pgmap v1500: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 8.5 KiB/s wr, 63 op/s
Nov 22 09:14:54 compute-0 podman[299123]: 2025-11-22 09:14:54.441873591 +0000 UTC m=+0.129912114 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.486 253665 DEBUG nova.network.neutron [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.505 253665 INFO nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 2.75 seconds to deallocate network for instance.
Nov 22 09:14:54 compute-0 ovn_controller[152872]: 2025-11-22T09:14:54Z|00312|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.558 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.559 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:54 compute-0 ovn_controller[152872]: 2025-11-22T09:14:54Z|00313|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.631 253665 DEBUG oslo_concurrency.processutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:14:54 compute-0 nova_compute[253661]: 2025-11-22 09:14:54.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:14:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:14:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271775981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.070 253665 DEBUG oslo_concurrency.processutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.078 253665 DEBUG nova.compute.provider_tree [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.093 253665 DEBUG nova.scheduler.client.report [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.127 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.155 253665 INFO nova.scheduler.client.report [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance bf96e20f-af8f-4db3-977f-cee93b1d7934
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.217 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 7.2 KiB/s wr, 49 op/s
Nov 22 09:14:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1271775981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.957 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.959 253665 WARNING nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 for instance with vm_state deleted and task_state None.
Nov 22 09:14:55 compute-0 nova_compute[253661]: 2025-11-22 09:14:55.959 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-deleted-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.045 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.047 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.048 253665 INFO nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Terminating instance
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.049 253665 DEBUG nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:14:56 compute-0 kernel: tap250740a7-72 (unregistering): left promiscuous mode
Nov 22 09:14:56 compute-0 NetworkManager[48920]: <info>  [1763802896.1173] device (tap250740a7-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:14:56 compute-0 ovn_controller[152872]: 2025-11-22T09:14:56Z|00314|binding|INFO|Releasing lport 250740a7-7283-491e-b03e-1e30171a9f3f from this chassis (sb_readonly=0)
Nov 22 09:14:56 compute-0 ovn_controller[152872]: 2025-11-22T09:14:56Z|00315|binding|INFO|Setting lport 250740a7-7283-491e-b03e-1e30171a9f3f down in Southbound
Nov 22 09:14:56 compute-0 ovn_controller[152872]: 2025-11-22T09:14:56Z|00316|binding|INFO|Removing iface tap250740a7-72 ovn-installed in OVS
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.125 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.139 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:fa:90 10.100.0.13'], port_security=['fa:16:3e:0e:fa:90 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=250740a7-7283-491e-b03e-1e30171a9f3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.140 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 250740a7-7283-491e-b03e-1e30171a9f3f in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.141 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00be96b4-0190-48b3-be03-69230f0c0be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.142 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace which is not needed anymore
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 22 09:14:56 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Consumed 15.819s CPU time.
Nov 22 09:14:56 compute-0 systemd-machined[215941]: Machine qemu-40-instance-00000023 terminated.
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : haproxy version is 2.8.14-c23fe91
Nov 22 09:14:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : path to executable is /usr/sbin/haproxy
Nov 22 09:14:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [WARNING]  (295942) : Exiting Master process...
Nov 22 09:14:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [ALERT]    (295942) : Current worker (295944) exited with code 143 (Terminated)
Nov 22 09:14:56 compute-0 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [WARNING]  (295942) : All workers exited. Exiting... (0)
Nov 22 09:14:56 compute-0 systemd[1]: libpod-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88.scope: Deactivated successfully.
Nov 22 09:14:56 compute-0 podman[299197]: 2025-11-22 09:14:56.290912757 +0000 UTC m=+0.051402235 container died e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.291 253665 INFO nova.virt.libvirt.driver [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance destroyed successfully.
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.292 253665 DEBUG nova.objects.instance [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.302 253665 DEBUG nova.virt.libvirt.vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.303 253665 DEBUG nova.network.os_vif_util [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.303 253665 DEBUG nova.network.os_vif_util [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.304 253665 DEBUG os_vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.306 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap250740a7-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.333 253665 INFO os_vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72')
Nov 22 09:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c33cf72e08b1ddd2978dc2e03da03122704dc3ba1214ec5d21f8fe0f1527d854-merged.mount: Deactivated successfully.
Nov 22 09:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88-userdata-shm.mount: Deactivated successfully.
Nov 22 09:14:56 compute-0 ceph-mon[75021]: pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 7.2 KiB/s wr, 49 op/s
Nov 22 09:14:56 compute-0 podman[299197]: 2025-11-22 09:14:56.358109359 +0000 UTC m=+0.118598827 container cleanup e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:14:56 compute-0 systemd[1]: libpod-conmon-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88.scope: Deactivated successfully.
Nov 22 09:14:56 compute-0 podman[299254]: 2025-11-22 09:14:56.42896142 +0000 UTC m=+0.047885148 container remove e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97717ee3-fd77-4a7d-b25d-5a15337043b8]: (4, ('Sat Nov 22 09:14:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88)\ne4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88\nSat Nov 22 09:14:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88)\ne4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[410caa76-b15c-4820-b149-58deb2e783ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.438 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 kernel: tap5e2cd359-c0: left promiscuous mode
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c65a4b1a-ac11-43d0-83cb-5853f536019a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.471 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b13c7ceb-5302-4a67-8aea-cd2f961dc37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.473 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc00de0-50cc-4b2e-9c69-87ffa57734fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaadce06-56b5-4d0f-82d6-10f4d7c63ab2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568630, 'reachable_time': 15097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299272, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.497 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:14:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.498 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a9d3f1-ba73-4a2d-8d78-025f660afe34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:14:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e2cd359\x2dc68f\x2d4256\x2d90e8\x2d0ad40aff8a00.mount: Deactivated successfully.
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.788 253665 INFO nova.virt.libvirt.driver [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deleting instance files /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_del
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.789 253665 INFO nova.virt.libvirt.driver [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deletion of /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_del complete
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.844 253665 INFO nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG oslo.service.loopingcall [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:14:56 compute-0 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG nova.network.neutron [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:14:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 96 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.9 KiB/s wr, 47 op/s
Nov 22 09:14:57 compute-0 nova_compute[253661]: 2025-11-22 09:14:57.645 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802882.6444323, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:14:57 compute-0 nova_compute[253661]: 2025-11-22 09:14:57.646 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Stopped (Lifecycle Event)
Nov 22 09:14:57 compute-0 nova_compute[253661]: 2025-11-22 09:14:57.670 253665 DEBUG nova.compute.manager [None req-77b94f36-ee54-49df-a2e9-9d8911fcb0f9 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.278 253665 DEBUG nova.network.neutron [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.300 253665 INFO nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 1.45 seconds to deallocate network for instance.
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.338 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.339 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:14:58 compute-0 ceph-mon[75021]: pgmap v1502: 305 pgs: 305 active+clean; 96 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.9 KiB/s wr, 47 op/s
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.380 253665 DEBUG oslo_concurrency.processutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.500 253665 DEBUG nova.compute.manager [req-bb5575f0-5c09-4ec0-82b0-b02417bcdf12 req-106e575f-1327-4a8e-8b6e-5f2ae2fc5d07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-deleted-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:14:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:14:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765326782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.894 253665 DEBUG oslo_concurrency.processutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.899 253665 DEBUG nova.compute.provider_tree [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.916 253665 DEBUG nova.scheduler.client.report [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.941 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:58 compute-0 nova_compute[253661]: 2025-11-22 09:14:58.994 253665 INFO nova.scheduler.client.report [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6
Nov 22 09:14:59 compute-0 nova_compute[253661]: 2025-11-22 09:14:59.154 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:14:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Nov 22 09:14:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2765326782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:14:59 compute-0 nova_compute[253661]: 2025-11-22 09:14:59.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:00 compute-0 ceph-mon[75021]: pgmap v1503: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.545 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.546 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.619 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.755 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.755 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.767 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:15:00 compute-0 nova_compute[253661]: 2025-11-22 09:15:00.768 253665 INFO nova.compute.claims [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.139 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 22 09:15:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241279518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.610 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.619 253665 DEBUG nova.compute.provider_tree [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.637 253665 DEBUG nova.scheduler.client.report [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.689 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.690 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.768 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.768 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.817 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:15:01 compute-0 nova_compute[253661]: 2025-11-22 09:15:01.874 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.049 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.051 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.051 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating image(s)
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.078 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.103 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.132 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.137 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.225 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.226 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.227 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.227 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.318 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.324 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.373 253665 DEBUG nova.policy [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e5b221447624e728e9eb5442b5238d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fc32fb5484840b1b6654dffb70595ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:15:02 compute-0 ceph-mon[75021]: pgmap v1504: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 22 09:15:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3241279518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.7344004362831283e-06 of space, bias 1.0, pg target 0.0008203201308849384 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:15:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.887 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:02 compute-0 nova_compute[253661]: 2025-11-22 09:15:02.961 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] resizing rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.074 253665 DEBUG nova.objects.instance [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'migration_context' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.087 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.087 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Ensure instance console log exists: /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.088 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.088 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:03 compute-0 nova_compute[253661]: 2025-11-22 09:15:03.089 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 41 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 KiB/s wr, 55 op/s
Nov 22 09:15:04 compute-0 ceph-mon[75021]: pgmap v1505: 305 pgs: 305 active+clean; 41 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 KiB/s wr, 55 op/s
Nov 22 09:15:04 compute-0 nova_compute[253661]: 2025-11-22 09:15:04.761 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Successfully created port: 50e75895-e769-4e23-b607-7d52eb14fb62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:15:04 compute-0 nova_compute[253661]: 2025-11-22 09:15:04.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 61 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Nov 22 09:15:06 compute-0 nova_compute[253661]: 2025-11-22 09:15:06.233 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802891.232201, bf96e20f-af8f-4db3-977f-cee93b1d7934 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:06 compute-0 nova_compute[253661]: 2025-11-22 09:15:06.234 253665 INFO nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] VM Stopped (Lifecycle Event)
Nov 22 09:15:06 compute-0 nova_compute[253661]: 2025-11-22 09:15:06.251 253665 DEBUG nova.compute.manager [None req-a43f9ab2-c44f-44d4-9476-6be8acb5a8d2 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:06 compute-0 nova_compute[253661]: 2025-11-22 09:15:06.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:06 compute-0 ceph-mon[75021]: pgmap v1506: 305 pgs: 305 active+clean; 61 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Nov 22 09:15:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.061 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Successfully updated port: 50e75895-e769-4e23-b607-7d52eb14fb62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.076 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.077 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.077 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.206 253665 DEBUG nova.compute.manager [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.207 253665 DEBUG nova.compute.manager [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.207 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:08 compute-0 nova_compute[253661]: 2025-11-22 09:15:08.297 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:15:08 compute-0 ceph-mon[75021]: pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 22 09:15:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 22 09:15:09 compute-0 nova_compute[253661]: 2025-11-22 09:15:09.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.185 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.220 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.221 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance network_info: |[{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.222 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.222 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.225 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start _get_guest_xml network_info=[{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.231 253665 WARNING nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.237 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.237 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.247 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.248 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.248 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.249 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.249 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.251 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.251 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.255 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:10 compute-0 ceph-mon[75021]: pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 22 09:15:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513364440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.749 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.780 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:10 compute-0 nova_compute[253661]: 2025-11-22 09:15:10.786 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379157159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.248 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.250 253665 DEBUG nova.virt.libvirt.vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:01Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.251 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.252 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.253 253665 DEBUG nova.objects.instance [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.270 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <uuid>9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</uuid>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <name>instance-00000027</name>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681</nova:name>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:15:10</nova:creationTime>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:user uuid="0e5b221447624e728e9eb5442b5238d1">tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member</nova:user>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:project uuid="6fc32fb5484840b1b6654dffb70595ef">tempest-FloatingIPsAssociationNegativeTestJSON-1334234428</nova:project>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <nova:port uuid="50e75895-e769-4e23-b607-7d52eb14fb62">
Nov 22 09:15:11 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <system>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="serial">9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="uuid">9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </system>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <os>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </os>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <features>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </features>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk">
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config">
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:07:ef:1b"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <target dev="tap50e75895-e7"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/console.log" append="off"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <video>
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </video>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:15:11 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:15:11 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:15:11 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:15:11 compute-0 nova_compute[253661]: </domain>
Nov 22 09:15:11 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.272 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Preparing to wait for external event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.272 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.273 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.273 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.274 253665 DEBUG nova.virt.libvirt.vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:01Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.275 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.276 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.277 253665 DEBUG os_vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.278 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.284 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e75895-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.285 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50e75895-e7, col_values=(('external_ids', {'iface-id': '50e75895-e769-4e23-b607-7d52eb14fb62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:ef:1b', 'vm-uuid': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:11 compute-0 NetworkManager[48920]: <info>  [1763802911.2881] manager: (tap50e75895-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.290 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802896.2880502, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.290 253665 INFO nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] VM Stopped (Lifecycle Event)
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.296 253665 INFO os_vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7')
Nov 22 09:15:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.467 253665 DEBUG nova.compute.manager [None req-36c53732-f07c-4dc3-becc-e23540e3cdd4 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.487 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.488 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.488 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No VIF found with MAC fa:16:3e:07:ef:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.489 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Using config drive
Nov 22 09:15:11 compute-0 nova_compute[253661]: 2025-11-22 09:15:11.513 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1513364440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2379157159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.018 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating config drive at /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.024 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgdl6zihk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.167 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgdl6zihk" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.204 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.209 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:15:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:15:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:15:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:15:12 compute-0 ceph-mon[75021]: pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:15:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:15:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.707 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.708 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deleting local config drive /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config because it was imported into RBD.
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.773 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.774 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.790 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:12 compute-0 kernel: tap50e75895-e7: entered promiscuous mode
Nov 22 09:15:12 compute-0 NetworkManager[48920]: <info>  [1763802912.8041] manager: (tap50e75895-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Nov 22 09:15:12 compute-0 ovn_controller[152872]: 2025-11-22T09:15:12Z|00317|binding|INFO|Claiming lport 50e75895-e769-4e23-b607-7d52eb14fb62 for this chassis.
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:12 compute-0 ovn_controller[152872]: 2025-11-22T09:15:12Z|00318|binding|INFO|50e75895-e769-4e23-b607-7d52eb14fb62: Claiming fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:12 compute-0 systemd-udevd[299620]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.837 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ef:1b 10.100.0.9'], port_security=['fa:16:3e:07:ef:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc32fb5484840b1b6654dffb70595ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f093055-0f73-4edf-a345-d9278a345d48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7266d51a-8673-408f-8e3f-05b71c491331, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=50e75895-e769-4e23-b607-7d52eb14fb62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.838 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 50e75895-e769-4e23-b607-7d52eb14fb62 in datapath 35d4669f-adae-4ff8-9cc1-a890f0b28c31 bound to our chassis
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.839 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35d4669f-adae-4ff8-9cc1-a890f0b28c31
Nov 22 09:15:12 compute-0 NetworkManager[48920]: <info>  [1763802912.8499] device (tap50e75895-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:15:12 compute-0 NetworkManager[48920]: <info>  [1763802912.8511] device (tap50e75895-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:15:12 compute-0 systemd-machined[215941]: New machine qemu-44-instance-00000027.
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.855 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[116c533d-f821-4125-9e78-9bb9e38b2db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.856 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35d4669f-a1 in ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.858 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35d4669f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.858 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc29ec3-8aa7-4c66-9f53-02d1a1c035e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.859 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8382ea0f-3e2a-4b0d-84c5-0b6633d7607a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.875 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0c818a86-bdfc-4efd-aff8-5a0c319c1701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5297c63-01dc-4db2-8de3-94a7c060c584]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:12 compute-0 ovn_controller[152872]: 2025-11-22T09:15:12Z|00319|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 ovn-installed in OVS
Nov 22 09:15:12 compute-0 ovn_controller[152872]: 2025-11-22T09:15:12Z|00320|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 up in Southbound
Nov 22 09:15:12 compute-0 nova_compute[253661]: 2025-11-22 09:15:12.916 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.950 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d862f1a-0e0e-498f-b877-0d18c276da2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 NetworkManager[48920]: <info>  [1763802912.9604] manager: (tap35d4669f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[174b025c-0319-41ab-8e6b-222f3d709bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.994 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7918d848-7ce6-4c74-af46-7c4156293809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.997 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a533447f-e0cc-4fc5-9180-ce1dfe352422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 NetworkManager[48920]: <info>  [1763802913.0238] device (tap35d4669f-a0): carrier: link connected
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96339895-4771-4b0d-af35-8668fdb3267f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcab0dd0-9d07-4ab9-aaed-918c03149876]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35d4669f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:da:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576376, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299654, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.075 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be5ee53c-4f0c-46c0-a45b-20d33a884861]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:da4b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576376, 'tstamp': 576376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299655, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46ec4f4f-8022-4597-b634-5d563581fb3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35d4669f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:da:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576376, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299656, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.128 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[074112bb-9015-472f-9675-884d4550c382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.158 253665 DEBUG nova.compute.manager [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.159 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.159 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.160 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.160 253665 DEBUG nova.compute.manager [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Processing event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.203 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46791a56-3352-40b0-a60d-cb6939f2602c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35d4669f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35d4669f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:13 compute-0 NetworkManager[48920]: <info>  [1763802913.2071] manager: (tap35d4669f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 22 09:15:13 compute-0 kernel: tap35d4669f-a0: entered promiscuous mode
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.211 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35d4669f-a0, col_values=(('external_ids', {'iface-id': 'd5d5c0f3-ca4b-44d4-9294-d8da8d674dc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:13 compute-0 ovn_controller[152872]: 2025-11-22T09:15:13Z|00321|binding|INFO|Releasing lport d5d5c0f3-ca4b-44d4-9294-d8da8d674dc8 from this chassis (sb_readonly=0)
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.234 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd205b4-197c-4d42-b39a-c78979e096ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.235 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-35d4669f-adae-4ff8-9cc1-a890f0b28c31
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 35d4669f-adae-4ff8-9cc1-a890f0b28c31
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:15:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.236 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'env', 'PROCESS_TAG=haproxy-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35d4669f-adae-4ff8-9cc1-a890f0b28c31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:15:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.460 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.460091, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.462 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Started (Lifecycle Event)
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.464 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.468 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.473 253665 INFO nova.virt.libvirt.driver [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance spawned successfully.
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.474 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.480 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.484 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.496 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.497 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.498 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.498 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.499 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.500 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.504 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.505 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.4603581, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.505 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Paused (Lifecycle Event)
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.539 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.467998, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.540 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Resumed (Lifecycle Event)
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.574 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.582 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.612 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.627 253665 INFO nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 11.58 seconds to spawn the instance on the hypervisor.
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.628 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.690 253665 INFO nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 12.95 seconds to build instance.
Nov 22 09:15:13 compute-0 podman[299729]: 2025-11-22 09:15:13.710133262 +0000 UTC m=+0.098137053 container create 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:15:13 compute-0 nova_compute[253661]: 2025-11-22 09:15:13.716 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:13 compute-0 podman[299729]: 2025-11-22 09:15:13.643190646 +0000 UTC m=+0.031194457 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:15:13 compute-0 systemd[1]: Started libpod-conmon-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope.
Nov 22 09:15:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd3306a5b7915795f7c52c8141867f19aeb43af9b9e1b85d687973702260cc6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:13 compute-0 podman[299729]: 2025-11-22 09:15:13.82522749 +0000 UTC m=+0.213231311 container init 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:15:13 compute-0 podman[299729]: 2025-11-22 09:15:13.832325595 +0000 UTC m=+0.220329386 container start 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:15:13 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : New worker (299750) forked
Nov 22 09:15:13 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : Loading success.
Nov 22 09:15:14 compute-0 ceph-mon[75021]: pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:15:14 compute-0 nova_compute[253661]: 2025-11-22 09:15:14.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.261 253665 DEBUG nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.261 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.262 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.262 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.263 253665 DEBUG nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:15:15 compute-0 nova_compute[253661]: 2025-11-22 09:15:15.263 253665 WARNING nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received unexpected event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with vm_state active and task_state None.
Nov 22 09:15:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:15:16 compute-0 nova_compute[253661]: 2025-11-22 09:15:16.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:16 compute-0 sudo[299759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:16 compute-0 sudo[299759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:16 compute-0 sudo[299759]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:16 compute-0 sudo[299784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:15:16 compute-0 sudo[299784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:16 compute-0 sudo[299784]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:16 compute-0 sudo[299809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:16 compute-0 sudo[299809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:16 compute-0 sudo[299809]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:16 compute-0 sudo[299834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:15:16 compute-0 sudo[299834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:16 compute-0 ceph-mon[75021]: pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.727111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916727185, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 465, "num_deletes": 251, "total_data_size": 386966, "memory_usage": 397096, "flush_reason": "Manual Compaction"}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916732620, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 383190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30392, "largest_seqno": 30856, "table_properties": {"data_size": 380547, "index_size": 679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6385, "raw_average_key_size": 18, "raw_value_size": 375335, "raw_average_value_size": 1107, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802891, "oldest_key_time": 1763802891, "file_creation_time": 1763802916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 5570 microseconds, and 2616 cpu microseconds.
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.732683) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 383190 bytes OK
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.732711) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734283) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734300) EVENT_LOG_v1 {"time_micros": 1763802916734295, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734329) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 384180, prev total WAL file size 384180, number of live WAL files 2.
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734806) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(374KB)], [65(8317KB)]
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916734845, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8900359, "oldest_snapshot_seqno": -1}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5320 keys, 7222833 bytes, temperature: kUnknown
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916787120, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7222833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7187830, "index_size": 20621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134640, "raw_average_key_size": 25, "raw_value_size": 7092483, "raw_average_value_size": 1333, "num_data_blocks": 836, "num_entries": 5320, "num_filter_entries": 5320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.787551) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7222833 bytes
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.788956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.8 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.1 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(42.1) write-amplify(18.8) OK, records in: 5830, records dropped: 510 output_compression: NoCompression
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.788981) EVENT_LOG_v1 {"time_micros": 1763802916788968, "job": 36, "event": "compaction_finished", "compaction_time_micros": 52412, "compaction_time_cpu_micros": 17945, "output_level": 6, "num_output_files": 1, "total_output_size": 7222833, "num_input_records": 5830, "num_output_records": 5320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916789245, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916790931, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:15:16 compute-0 sudo[299834]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:15:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:15:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:17 compute-0 sudo[299878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:17 compute-0 sudo[299878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 sudo[299878]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 sudo[299903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:15:17 compute-0 sudo[299903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 sudo[299903]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 sudo[299928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:17 compute-0 sudo[299928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 sudo[299928]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 sudo[299953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:15:17 compute-0 sudo[299953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 882 KiB/s rd, 801 KiB/s wr, 42 op/s
Nov 22 09:15:17 compute-0 sudo[299953]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:17 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c763b04c-5aae-4842-8d06-dfcc8fe86cd7 does not exist
Nov 22 09:15:17 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 568958b2-50ce-4539-a8f7-8e9a0d952aad does not exist
Nov 22 09:15:17 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 766f4205-894a-4170-8880-7df53836c9c5 does not exist
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:15:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:15:17 compute-0 sudo[300009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:17 compute-0 sudo[300009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 sudo[300009]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 sudo[300034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:15:17 compute-0 sudo[300034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:17 compute-0 sudo[300034]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:15:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:15:18 compute-0 sudo[300059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:18 compute-0 sudo[300059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:18 compute-0 sudo[300059]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:18 compute-0 sudo[300084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:15:18 compute-0 sudo[300084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.502717394 +0000 UTC m=+0.049891917 container create cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.477519445 +0000 UTC m=+0.024693988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:18 compute-0 systemd[1]: Started libpod-conmon-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope.
Nov 22 09:15:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.701567552 +0000 UTC m=+0.248742085 container init cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.713196937 +0000 UTC m=+0.260371460 container start cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:15:18 compute-0 great_faraday[300164]: 167 167
Nov 22 09:15:18 compute-0 systemd[1]: libpod-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope: Deactivated successfully.
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.72671276 +0000 UTC m=+0.273887293 container attach cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:15:18 compute-0 podman[300148]: 2025-11-22 09:15:18.72877482 +0000 UTC m=+0.275949343 container died cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-03e60de34f06bb88de27e2bf1ab62e5020244f0235333c974b80c74c826fbff8-merged.mount: Deactivated successfully.
Nov 22 09:15:19 compute-0 ceph-mon[75021]: pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 882 KiB/s rd, 801 KiB/s wr, 42 op/s
Nov 22 09:15:19 compute-0 podman[300148]: 2025-11-22 09:15:19.125175974 +0000 UTC m=+0.672350507 container remove cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:15:19 compute-0 systemd[1]: libpod-conmon-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope: Deactivated successfully.
Nov 22 09:15:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:15:19 compute-0 podman[300191]: 2025-11-22 09:15:19.34272364 +0000 UTC m=+0.061736258 container create 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:15:19 compute-0 systemd[1]: Started libpod-conmon-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope.
Nov 22 09:15:19 compute-0 podman[300191]: 2025-11-22 09:15:19.315631895 +0000 UTC m=+0.034644533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:19 compute-0 podman[300191]: 2025-11-22 09:15:19.481068721 +0000 UTC m=+0.200081359 container init 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:15:19 compute-0 podman[300191]: 2025-11-22 09:15:19.491097747 +0000 UTC m=+0.210110365 container start 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:15:19 compute-0 podman[300191]: 2025-11-22 09:15:19.514998204 +0000 UTC m=+0.234010842 container attach 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:15:19 compute-0 nova_compute[253661]: 2025-11-22 09:15:19.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.431 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.431 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.519 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:15:20 compute-0 compassionate_blackwell[300207]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:15:20 compute-0 compassionate_blackwell[300207]: --> relative data size: 1.0
Nov 22 09:15:20 compute-0 compassionate_blackwell[300207]: --> All data devices are unavailable
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:20 compute-0 NetworkManager[48920]: <info>  [1763802920.6943] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Nov 22 09:15:20 compute-0 NetworkManager[48920]: <info>  [1763802920.6957] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Nov 22 09:15:20 compute-0 podman[300191]: 2025-11-22 09:15:20.710107477 +0000 UTC m=+1.429120105 container died 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:15:20 compute-0 systemd[1]: libpod-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Deactivated successfully.
Nov 22 09:15:20 compute-0 systemd[1]: libpod-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Consumed 1.116s CPU time.
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.714 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.716 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.743 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.744 253665 INFO nova.compute.claims [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:20 compute-0 ovn_controller[152872]: 2025-11-22T09:15:20Z|00322|binding|INFO|Releasing lport d5d5c0f3-ca4b-44d4-9294-d8da8d674dc8 from this chassis (sb_readonly=0)
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:20 compute-0 nova_compute[253661]: 2025-11-22 09:15:20.916 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97-merged.mount: Deactivated successfully.
Nov 22 09:15:21 compute-0 ceph-mon[75021]: pgmap v1513: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:15:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:21 compute-0 podman[300191]: 2025-11-22 09:15:21.33961025 +0000 UTC m=+2.058622868 container remove 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:15:21 compute-0 sudo[300084]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920842919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.412 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:21 compute-0 systemd[1]: libpod-conmon-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Deactivated successfully.
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.422 253665 DEBUG nova.compute.provider_tree [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.436 253665 DEBUG nova.scheduler.client.report [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:15:21 compute-0 sudo[300291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:21 compute-0 sudo[300291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:21 compute-0 sudo[300291]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.457 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.458 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:15:21 compute-0 podman[300244]: 2025-11-22 09:15:21.483471236 +0000 UTC m=+0.720868639 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 09:15:21 compute-0 podman[300237]: 2025-11-22 09:15:21.500881763 +0000 UTC m=+0.740879610 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:15:21 compute-0 sudo[300330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:15:21 compute-0 sudo[300330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.529 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.531 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:15:21 compute-0 sudo[300330]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.551 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.571 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:15:21 compute-0 sudo[300358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:21 compute-0 sudo[300358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:21 compute-0 sudo[300358]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:21 compute-0 sudo[300383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:15:21 compute-0 sudo[300383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.666 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.667 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.668 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating image(s)
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.695 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.727 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.753 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.758 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.829 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.830 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.830 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.831 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.881 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.895 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 45051f55-4273-48ff-b5be-72501a74d560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:21 compute-0 nova_compute[253661]: 2025-11-22 09:15:21.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.00524488 +0000 UTC m=+0.028971843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.129 253665 DEBUG nova.policy [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fefecdd1a6a94e3ea3896308da03d91b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.174011328 +0000 UTC m=+0.197738251 container create 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:15:22 compute-0 ceph-mon[75021]: pgmap v1514: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:15:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2920842919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:22 compute-0 systemd[1]: Started libpod-conmon-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope.
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.266 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.312411 +0000 UTC m=+0.336137933 container init 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.327183592 +0000 UTC m=+0.350910505 container start 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:15:22 compute-0 zen_jackson[300552]: 167 167
Nov 22 09:15:22 compute-0 systemd[1]: libpod-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope: Deactivated successfully.
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.391884403 +0000 UTC m=+0.415611346 container attach 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.392677402 +0000 UTC m=+0.416404315 container died 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9fa2c8a6744e1671ada74f5be31ecd67ae5a552762d8f5cdbb7406eda9f63a2-merged.mount: Deactivated successfully.
Nov 22 09:15:22 compute-0 podman[300521]: 2025-11-22 09:15:22.64200364 +0000 UTC m=+0.665730553 container remove 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:22 compute-0 systemd[1]: libpod-conmon-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope: Deactivated successfully.
Nov 22 09:15:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118184208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.836 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.904 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:15:22 compute-0 nova_compute[253661]: 2025-11-22 09:15:22.905 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:15:22 compute-0 podman[300598]: 2025-11-22 09:15:22.845723727 +0000 UTC m=+0.026797089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.073 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4022MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:23 compute-0 podman[300598]: 2025-11-22 09:15:23.088664388 +0000 UTC m=+0.269737740 container create c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 45051f55-4273-48ff-b5be-72501a74d560 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:15:23 compute-0 systemd[1]: Started libpod-conmon-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope.
Nov 22 09:15:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.216 253665 DEBUG nova.compute.manager [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.216 253665 DEBUG nova.compute.manager [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.245 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 22 09:15:23 compute-0 podman[300598]: 2025-11-22 09:15:23.302153776 +0000 UTC m=+0.483227158 container init c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:15:23 compute-0 podman[300598]: 2025-11-22 09:15:23.311933946 +0000 UTC m=+0.493007298 container start c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.344 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 45051f55-4273-48ff-b5be-72501a74d560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:23 compute-0 podman[300598]: 2025-11-22 09:15:23.356103981 +0000 UTC m=+0.537177363 container attach c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 09:15:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2118184208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.537 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] resizing rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:15:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638808918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.822 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.830 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.848 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.952 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:15:23 compute-0 nova_compute[253661]: 2025-11-22 09:15:23.953 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:24 compute-0 crazy_yonath[300615]: {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     "0": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "devices": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "/dev/loop3"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             ],
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_name": "ceph_lv0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_size": "21470642176",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "name": "ceph_lv0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "tags": {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_name": "ceph",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.crush_device_class": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.encrypted": "0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_id": "0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.vdo": "0"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             },
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "vg_name": "ceph_vg0"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         }
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     ],
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     "1": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "devices": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "/dev/loop4"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             ],
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_name": "ceph_lv1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_size": "21470642176",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "name": "ceph_lv1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "tags": {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_name": "ceph",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.crush_device_class": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.encrypted": "0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_id": "1",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.vdo": "0"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             },
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "vg_name": "ceph_vg1"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         }
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     ],
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     "2": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "devices": [
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "/dev/loop5"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             ],
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_name": "ceph_lv2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_size": "21470642176",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "name": "ceph_lv2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "tags": {
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.cluster_name": "ceph",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.crush_device_class": "",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.encrypted": "0",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osd_id": "2",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:                 "ceph.vdo": "0"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             },
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "type": "block",
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:             "vg_name": "ceph_vg2"
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:         }
Nov 22 09:15:24 compute-0 crazy_yonath[300615]:     ]
Nov 22 09:15:24 compute-0 crazy_yonath[300615]: }
Nov 22 09:15:24 compute-0 systemd[1]: libpod-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope: Deactivated successfully.
Nov 22 09:15:24 compute-0 podman[300598]: 2025-11-22 09:15:24.151519711 +0000 UTC m=+1.332593103 container died c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.295 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.298 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.400 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9-merged.mount: Deactivated successfully.
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.547 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.549 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.558 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.558 253665 INFO nova.compute.claims [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:15:24 compute-0 ceph-mon[75021]: pgmap v1515: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 22 09:15:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3638808918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.662 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Successfully created port: 1da58540-88e4-4125-96c0-62be7cec281d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.678 253665 DEBUG nova.objects.instance [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'migration_context' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.690 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.691 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Ensure instance console log exists: /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.692 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.692 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:24 compute-0 podman[300598]: 2025-11-22 09:15:24.69301194 +0000 UTC m=+1.874085302 container remove c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.693 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:24 compute-0 systemd[1]: libpod-conmon-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope: Deactivated successfully.
Nov 22 09:15:24 compute-0 sudo[300383]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.770 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:24 compute-0 sudo[300733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:24 compute-0 sudo[300733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:24 compute-0 sudo[300733]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:24 compute-0 sudo[300778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:15:24 compute-0 sudo[300778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:24 compute-0 sudo[300778]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:24 compute-0 podman[300732]: 2025-11-22 09:15:24.876338086 +0000 UTC m=+0.121149128 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:24 compute-0 sudo[300809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:24 compute-0 sudo[300809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:24 compute-0 sudo[300809]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.945 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.945 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.946 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:15:24 compute-0 sudo[300844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:15:24 compute-0 sudo[300844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:24 compute-0 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.220 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.221 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.235 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038955148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.273 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.280 253665 DEBUG nova.compute.provider_tree [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:15:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 126 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.292 253665 DEBUG nova.scheduler.client.report [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.309090152 +0000 UTC m=+0.025191220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.423 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.424 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.548 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.548 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.549786438 +0000 UTC m=+0.265887476 container create aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.571 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:15:25 compute-0 systemd[1]: Started libpod-conmon-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope.
Nov 22 09:15:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.665 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:15:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1038955148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:25 compute-0 nova_compute[253661]: 2025-11-22 09:15:25.762 253665 DEBUG nova.policy [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db8ccc99aef946c58a2604bc21e0ef23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad111e77e47541688eda72c9090309e9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.778283514 +0000 UTC m=+0.494384562 container init aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.787922641 +0000 UTC m=+0.504023699 container start aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:15:25 compute-0 xenodochial_wiles[300936]: 167 167
Nov 22 09:15:25 compute-0 systemd[1]: libpod-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope: Deactivated successfully.
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.910853623 +0000 UTC m=+0.626954871 container attach aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:15:25 compute-0 podman[300920]: 2025-11-22 09:15:25.911897568 +0000 UTC m=+0.627998606 container died aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-533109b4ac836828aafc0a360d806ded20bfe72878438a9c9a4f0d1e488e86b9-merged.mount: Deactivated successfully.
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.479 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.482 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.483 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating image(s)
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.517 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.556 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.589 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.595 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.683 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.685 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.685 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.686 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.715 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:26 compute-0 nova_compute[253661]: 2025-11-22 09:15:26.721 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:27 compute-0 podman[300920]: 2025-11-22 09:15:27.237006386 +0000 UTC m=+1.953107434 container remove aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:15:27 compute-0 ceph-mon[75021]: pgmap v1516: 305 pgs: 305 active+clean; 126 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Nov 22 09:15:27 compute-0 systemd[1]: libpod-conmon-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope: Deactivated successfully.
Nov 22 09:15:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 134 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 22 09:15:27 compute-0 podman[301053]: 2025-11-22 09:15:27.407376954 +0000 UTC m=+0.037825531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:15:27 compute-0 podman[301053]: 2025-11-22 09:15:27.700702543 +0000 UTC m=+0.331151100 container create a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:15:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:28 compute-0 systemd[1]: Started libpod-conmon-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope.
Nov 22 09:15:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:28 compute-0 nova_compute[253661]: 2025-11-22 09:15:28.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:15:28 compute-0 podman[301053]: 2025-11-22 09:15:28.504387276 +0000 UTC m=+1.134835883 container init a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:15:28 compute-0 ceph-mon[75021]: pgmap v1517: 305 pgs: 305 active+clean; 134 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 22 09:15:28 compute-0 podman[301053]: 2025-11-22 09:15:28.520629055 +0000 UTC m=+1.151077582 container start a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:15:28 compute-0 nova_compute[253661]: 2025-11-22 09:15:28.860 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Successfully created port: 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:15:28 compute-0 podman[301053]: 2025-11-22 09:15:28.899197369 +0000 UTC m=+1.529645936 container attach a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.023 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Successfully updated port: 1da58540-88e4-4125-96c0-62be7cec281d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.126 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.127 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.128 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:15:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 96 op/s
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.355 253665 DEBUG nova.compute.manager [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-changed-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.356 253665 DEBUG nova.compute.manager [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing instance network info cache due to event network-changed-1da58540-88e4-4125-96c0-62be7cec281d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.356 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]: {
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_id": 1,
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "type": "bluestore"
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     },
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_id": 0,
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "type": "bluestore"
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     },
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_id": 2,
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:         "type": "bluestore"
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]:     }
Nov 22 09:15:29 compute-0 quizzical_rosalind[301069]: }
Nov 22 09:15:29 compute-0 systemd[1]: libpod-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Deactivated successfully.
Nov 22 09:15:29 compute-0 systemd[1]: libpod-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Consumed 1.051s CPU time.
Nov 22 09:15:29 compute-0 podman[301053]: 2025-11-22 09:15:29.582288698 +0000 UTC m=+2.212737255 container died a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:15:29 compute-0 nova_compute[253661]: 2025-11-22 09:15:29.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.126 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd-merged.mount: Deactivated successfully.
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.455 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Successfully updated port: 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.468 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.469 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquired lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.469 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:15:30 compute-0 nova_compute[253661]: 2025-11-22 09:15:30.625 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:15:30 compute-0 podman[301053]: 2025-11-22 09:15:30.961667862 +0000 UTC m=+3.592116399 container remove a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:15:30 compute-0 systemd[1]: libpod-conmon-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Deactivated successfully.
Nov 22 09:15:30 compute-0 sudo[300844]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:30 compute-0 ceph-mon[75021]: pgmap v1518: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 96 op/s
Nov 22 09:15:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:15:31 compute-0 nova_compute[253661]: 2025-11-22 09:15:31.116 253665 DEBUG nova.compute.manager [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-changed-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:31 compute-0 nova_compute[253661]: 2025-11-22 09:15:31.117 253665 DEBUG nova.compute.manager [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Refreshing instance network info cache due to event network-changed-14eb6b64-11d1-4c6f-9c3c-e24463c899c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:31 compute-0 nova_compute[253661]: 2025-11-22 09:15:31.117 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:15:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Nov 22 09:15:31 compute-0 nova_compute[253661]: 2025-11-22 09:15:31.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 161e868c-971b-462c-8688-625899dc2d0f does not exist
Nov 22 09:15:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c89e39f5-e36b-4d06-a87c-ab5d4da87495 does not exist
Nov 22 09:15:31 compute-0 sudo[301121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:15:31 compute-0 sudo[301121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:31 compute-0 sudo[301121]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:31 compute-0 sudo[301146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:15:31 compute-0 sudo[301146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:15:31 compute-0 sudo[301146]: pam_unix(sudo:session): session closed for user root
Nov 22 09:15:32 compute-0 nova_compute[253661]: 2025-11-22 09:15:32.086 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:32 compute-0 nova_compute[253661]: 2025-11-22 09:15:32.101 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Releasing lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:32 compute-0 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance network_info: |[{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:15:32 compute-0 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:32 compute-0 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Refreshing network info cache for port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:32 compute-0 ceph-mon[75021]: pgmap v1519: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Nov 22 09:15:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.198 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.277 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 151 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.4 MiB/s wr, 68 op/s
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.306 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.307 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance network_info: |[{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.308 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.309 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.313 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start _get_guest_xml network_info=[{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.321 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] resizing rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.391 253665 WARNING nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.400 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.400 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.403 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.404 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.408 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.408 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.411 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.593 253665 DEBUG nova.objects.instance [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'migration_context' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.608 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.608 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Ensure instance console log exists: /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.609 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.609 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.610 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.614 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start _get_guest_xml network_info=[{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.620 253665 WARNING nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.625 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.626 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.631 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.634 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.634 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.637 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951426530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.874 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.895 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:33 compute-0 nova_compute[253661]: 2025-11-22 09:15:33.900 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296681367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:34 compute-0 ovn_controller[152872]: 2025-11-22T09:15:34Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 09:15:34 compute-0 ovn_controller[152872]: 2025-11-22T09:15:34Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 09:15:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1951426530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2296681367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.086 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.108 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.112 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.183 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updated VIF entry in instance network info cache for port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.184 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.205 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479638859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.374 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.376 253665 DEBUG nova.virt.libvirt.vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.376 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.377 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.378 253665 DEBUG nova.objects.instance [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.392 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <uuid>45051f55-4273-48ff-b5be-72501a74d560</uuid>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <name>instance-00000028</name>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestManualDisk-server-1927732921</nova:name>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:15:33</nova:creationTime>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:user uuid="fefecdd1a6a94e3ea3896308da03d91b">tempest-ServersTestManualDisk-357496739-project-member</nova:user>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:project uuid="dc07b24fb9ba4101a34be65493a83a22">tempest-ServersTestManualDisk-357496739</nova:project>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:port uuid="1da58540-88e4-4125-96c0-62be7cec281d">
Nov 22 09:15:34 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <system>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="serial">45051f55-4273-48ff-b5be-72501a74d560</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="uuid">45051f55-4273-48ff-b5be-72501a74d560</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </system>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <os>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </os>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <features>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </features>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/45051f55-4273-48ff-b5be-72501a74d560_disk">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/45051f55-4273-48ff-b5be-72501a74d560_disk.config">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:23:3e:da"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="tap1da58540-88"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/console.log" append="off"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <video>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </video>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:15:34 compute-0 nova_compute[253661]: </domain>
Nov 22 09:15:34 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.393 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Preparing to wait for external event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.395 253665 DEBUG nova.virt.libvirt.vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.395 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.396 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.396 253665 DEBUG os_vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.397 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.398 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da58540-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1da58540-88, col_values=(('external_ids', {'iface-id': '1da58540-88e4-4125-96c0-62be7cec281d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:3e:da', 'vm-uuid': '45051f55-4273-48ff-b5be-72501a74d560'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 NetworkManager[48920]: <info>  [1763802934.4054] manager: (tap1da58540-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.407 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.414 253665 INFO os_vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88')
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.458 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.459 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.459 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No VIF found with MAC fa:16:3e:23:3e:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.460 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Using config drive
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.482 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:15:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845922941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.603 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.605 253665 DEBUG nova.virt.libvirt.vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:25Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.605 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.606 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.607 253665 DEBUG nova.objects.instance [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.619 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <uuid>aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</uuid>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <name>instance-00000029</name>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesOneServerTestJSON-server-121050772</nova:name>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:15:33</nova:creationTime>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:user uuid="db8ccc99aef946c58a2604bc21e0ef23">tempest-ImagesOneServerTestJSON-1578797770-project-member</nova:user>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:project uuid="ad111e77e47541688eda72c9090309e9">tempest-ImagesOneServerTestJSON-1578797770</nova:project>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <nova:port uuid="14eb6b64-11d1-4c6f-9c3c-e24463c899c9">
Nov 22 09:15:34 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <system>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="serial">aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="uuid">aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </system>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <os>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </os>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <features>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </features>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:15:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f8:a3:a6"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <target dev="tap14eb6b64-11"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/console.log" append="off"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <video>
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </video>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:15:34 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:15:34 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:15:34 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:15:34 compute-0 nova_compute[253661]: </domain>
Nov 22 09:15:34 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.620 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Preparing to wait for external event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.622 253665 DEBUG nova.virt.libvirt.vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:25Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.622 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.623 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.625 253665 DEBUG os_vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.626 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.627 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.630 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14eb6b64-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.630 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14eb6b64-11, col_values=(('external_ids', {'iface-id': '14eb6b64-11d1-4c6f-9c3c-e24463c899c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:a3:a6', 'vm-uuid': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 NetworkManager[48920]: <info>  [1763802934.6329] manager: (tap14eb6b64-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.643 253665 INFO os_vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11')
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.683 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.684 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.684 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No VIF found with MAC fa:16:3e:f8:a3:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.685 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Using config drive
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.708 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:34 compute-0 nova_compute[253661]: 2025-11-22 09:15:34.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 ceph-mon[75021]: pgmap v1520: 305 pgs: 305 active+clean; 151 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.4 MiB/s wr, 68 op/s
Nov 22 09:15:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/479638859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2845922941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.100 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating config drive at /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.109 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45xa9b7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.138 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updated VIF entry in instance network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.139 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.145 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating config drive at /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.151 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wk_gxsh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.189 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.248 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45xa9b7v" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.273 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.279 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 194 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.2 MiB/s wr, 102 op/s
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.321 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wk_gxsh" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.346 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.350 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config 45051f55-4273-48ff-b5be-72501a74d560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.482 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.483 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deleting local config drive /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config because it was imported into RBD.
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.527 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config 45051f55-4273-48ff-b5be-72501a74d560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.527 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deleting local config drive /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config because it was imported into RBD.
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.5565] manager: (tap14eb6b64-11): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Nov 22 09:15:35 compute-0 kernel: tap14eb6b64-11: entered promiscuous mode
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00323|binding|INFO|Claiming lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for this chassis.
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00324|binding|INFO|14eb6b64-11d1-4c6f-9c3c-e24463c899c9: Claiming fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00325|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 ovn-installed in OVS
Nov 22 09:15:35 compute-0 systemd-udevd[301500]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.6084] device (tap14eb6b64-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.6094] device (tap14eb6b64-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.6441] manager: (tap1da58540-88): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Nov 22 09:15:35 compute-0 kernel: tap1da58540-88: entered promiscuous mode
Nov 22 09:15:35 compute-0 systemd-udevd[301506]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00326|if_status|INFO|Not updating pb chassis for 1da58540-88e4-4125-96c0-62be7cec281d now as sb is readonly
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 systemd-machined[215941]: New machine qemu-45-instance-00000029.
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.6568] device (tap1da58540-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.6598] device (tap1da58540-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 systemd-machined[215941]: New machine qemu-46-instance-00000028.
Nov 22 09:15:35 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00327|binding|INFO|Claiming lport 1da58540-88e4-4125-96c0-62be7cec281d for this chassis.
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00328|binding|INFO|1da58540-88e4-4125-96c0-62be7cec281d: Claiming fa:16:3e:23:3e:da 10.100.0.5
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00329|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 up in Southbound
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00330|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d ovn-installed in OVS
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.781 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a3:a6 10.100.0.14'], port_security=['fa:16:3e:f8:a3:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad111e77e47541688eda72c9090309e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '848a987a-5baf-4ba8-9981-79089e68d473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f043f8a-2814-434a-a39b-7e1b32dc2849, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14eb6b64-11d1-4c6f-9c3c-e24463c899c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:15:35 compute-0 nova_compute[253661]: 2025-11-22 09:15:35.783 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.785 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 in datapath 4ca459bc-d9ea-444a-9677-3a7c12339ffd bound to our chassis
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.789 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca459bc-d9ea-444a-9677-3a7c12339ffd
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.804 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a756e13-b377-414e-8c9c-3f653dcbcffe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.805 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca459bc-d1 in ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.808 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca459bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.808 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc88ddce-b198-477f-ac73-44c5cf8661b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.810 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c0a70-68e1-45fd-9f5f-593309c15011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.823 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0867097a-9b62-4ab3-acde-35c711b20576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.847 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:3e:da 10.100.0.5'], port_security=['fa:16:3e:23:3e:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '45051f55-4273-48ff-b5be-72501a74d560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82819533-0bf5-47c8-9437-4b645122166d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36ec89d9-135e-42eb-84d1-00a3805c21a1, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1da58540-88e4-4125-96c0-62be7cec281d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:15:35 compute-0 ovn_controller[152872]: 2025-11-22T09:15:35Z|00331|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d up in Southbound
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.848 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba37940-7268-48df-bd04-0373832f84e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.884 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[29fba930-861a-49bd-8eca-75ee5af6673e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.891 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e94831b3-8ffc-4bd5-8fd7-ab623d40d197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.8926] manager: (tap4ca459bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.935 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[87e2eba3-87a1-4d53-9807-26668229b889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.939 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d843547b-e236-4b48-903c-37e18fc21857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 NetworkManager[48920]: <info>  [1763802935.9676] device (tap4ca459bc-d0): carrier: link connected
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.974 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49ddf4-97eb-456e-9431-0b4242397a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.995 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7552866-ecbe-46ff-8554-19f9a8116c91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca459bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:89:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578670, 'reachable_time': 16065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301555, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6487cd5-2397-4e70-af3e-02783bb471d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:893c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578670, 'tstamp': 578670}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301556, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.036 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b461d02e-a5ad-4a02-b579-4ea43cba2993]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca459bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:89:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578670, 'reachable_time': 16065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301557, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.079 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[09fd131a-2087-41e8-9416-8dc6bd00eab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e19b9bea-3027-4d07-bab8-7a8de4473ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.165 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca459bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca459bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:36 compute-0 kernel: tap4ca459bc-d0: entered promiscuous mode
Nov 22 09:15:36 compute-0 NetworkManager[48920]: <info>  [1763802936.1708] manager: (tap4ca459bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.183 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca459bc-d0, col_values=(('external_ids', {'iface-id': '113a1272-74c8-4666-96b6-8dbb3f235854'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:36 compute-0 ovn_controller[152872]: 2025-11-22T09:15:36Z|00332|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.194 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee41a379-0f83-4452-b8f8-6f27eedd2974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.196 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-4ca459bc-d9ea-444a-9677-3a7c12339ffd
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 4ca459bc-d9ea-444a-9677-3a7c12339ffd
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.198 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'env', 'PROCESS_TAG=haproxy-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca459bc-d9ea-444a-9677-3a7c12339ffd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.365 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.3636937, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.365 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Started (Lifecycle Event)
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.385 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.390 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.363955, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.391 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Paused (Lifecycle Event)
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.405 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.409 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.427 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.6145773, 45051f55-4273-48ff-b5be-72501a74d560 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.617 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Started (Lifecycle Event)
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.635 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.640 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.6164296, 45051f55-4273-48ff-b5be-72501a74d560 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.641 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Paused (Lifecycle Event)
Nov 22 09:15:36 compute-0 podman[301674]: 2025-11-22 09:15:36.677468256 +0000 UTC m=+0.084877508 container create 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:15:36 compute-0 systemd[1]: Started libpod-conmon-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope.
Nov 22 09:15:36 compute-0 podman[301674]: 2025-11-22 09:15:36.64229334 +0000 UTC m=+0.049702692 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:15:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d134d16ff507fa45a351af632330afd5dc37dd1d818793e1cf8013d20243ecc4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:36 compute-0 podman[301674]: 2025-11-22 09:15:36.77367518 +0000 UTC m=+0.181084452 container init 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:15:36 compute-0 podman[301674]: 2025-11-22 09:15:36.786592528 +0000 UTC m=+0.194001780 container start 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:15:36 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : New worker (301695) forked
Nov 22 09:15:36 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : Loading success.
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.842 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.849 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.851 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1da58540-88e4-4125-96c0-62be7cec281d in datapath 1a784673-76a0-4c6e-a5bb-2fe1d4413dea unbound from our chassis
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.854 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a784673-76a0-4c6e-a5bb-2fe1d4413dea
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.867 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb05832-ae04-43f6-865e-fa5402b0d416]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.868 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1a784673-71 in ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.871 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1a784673-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac9e6a8-f3eb-4363-b304-b8e57059262b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b814e0f2-e351-4b55-ab81-408141a8e1eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.890 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4abf3d60-dc5b-4440-8c11-14f3d70067b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.908 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1c86b1-e259-403b-a082-3383ecc2940b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 nova_compute[253661]: 2025-11-22 09:15:36.927 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.943 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa12588-df7b-4d9c-b581-9c6f8aa781a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 systemd-udevd[301541]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.952 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abec80c6-32a2-4599-9bb2-2bb5840f25bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:36 compute-0 NetworkManager[48920]: <info>  [1763802936.9545] manager: (tap1a784673-70): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 22 09:15:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.996 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a685692a-d53d-4d44-960d-306f65d00f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.002 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[35ae62a9-029c-48af-9088-3171d1cdeb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 NetworkManager[48920]: <info>  [1763802937.0352] device (tap1a784673-70): carrier: link connected
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.043 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[28889c55-94fb-4346-aed7-3cd0e30ea693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.065 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5286fc04-c3d7-4913-b3ac-c122e76ea7c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a784673-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:03:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578777, 'reachable_time': 41542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301714, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.081 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2b3953-72df-4446-a4df-df34b785257c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:354'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578777, 'tstamp': 578777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301715, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1613c536-fd1c-4894-b5ea-00fc246e9cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a784673-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:03:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578777, 'reachable_time': 41542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301716, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ceph-mon[75021]: pgmap v1521: 305 pgs: 305 active+clean; 194 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.2 MiB/s wr, 102 op/s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27ec54c3-dfc2-4550-ba6d-4e63f6bf282c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[96dae545-10b6-4dce-a181-15006ffe652a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.222 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a784673-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.222 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.223 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a784673-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.225 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:37 compute-0 kernel: tap1a784673-70: entered promiscuous mode
Nov 22 09:15:37 compute-0 NetworkManager[48920]: <info>  [1763802937.2262] manager: (tap1a784673-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.228 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.228 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a784673-70, col_values=(('external_ids', {'iface-id': '779c5ded-a0c7-4d1b-a9a9-ea6d3ab61012'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:37 compute-0 ovn_controller[152872]: 2025-11-22T09:15:37Z|00333|binding|INFO|Releasing lport 779c5ded-a0c7-4d1b-a9a9-ea6d3ab61012 from this chassis (sb_readonly=0)
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.255 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.256 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.257 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21bbfc6b-22c9-406e-80c9-9dce089ff3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.258 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-1a784673-76a0-4c6e-a5bb-2fe1d4413dea
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 1a784673-76a0-4c6e-a5bb-2fe1d4413dea
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:15:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.259 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'env', 'PROCESS_TAG=haproxy-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:15:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.311 253665 DEBUG nova.compute.manager [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.311 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG nova.compute.manager [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Processing event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.313 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.318 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802937.317737, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.318 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Resumed (Lifecycle Event)
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.321 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.326 253665 INFO nova.virt.libvirt.driver [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance spawned successfully.
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.327 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.374 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.613 253665 INFO nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 11.13 seconds to spawn the instance on the hypervisor.
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.615 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:37 compute-0 podman[301746]: 2025-11-22 09:15:37.681942263 +0000 UTC m=+0.053358682 container create a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:15:37 compute-0 systemd[1]: Started libpod-conmon-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope.
Nov 22 09:15:37 compute-0 podman[301746]: 2025-11-22 09:15:37.656487898 +0000 UTC m=+0.027904337 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:15:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c7edb50aa40522fe8ae29b9f05311d4a6ea55593e5b288aee9bfe5a69618c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.780 253665 INFO nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 13.26 seconds to build instance.
Nov 22 09:15:37 compute-0 podman[301746]: 2025-11-22 09:15:37.794851179 +0000 UTC m=+0.166267618 container init a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:15:37 compute-0 podman[301746]: 2025-11-22 09:15:37.804563618 +0000 UTC m=+0.175980037 container start a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:15:37 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : New worker (301767) forked
Nov 22 09:15:37 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : Loading success.
Nov 22 09:15:37 compute-0 nova_compute[253661]: 2025-11-22 09:15:37.837 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:39 compute-0 ceph-mon[75021]: pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Nov 22 09:15:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:15:39 compute-0 nova_compute[253661]: 2025-11-22 09:15:39.635 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:39 compute-0 nova_compute[253661]: 2025-11-22 09:15:39.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 WARNING nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received unexpected event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with vm_state active and task_state None.
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Processing event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 WARNING nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received unexpected event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with vm_state building and task_state spawning.
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.098 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.103 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802941.1022315, 45051f55-4273-48ff-b5be-72501a74d560 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.104 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Resumed (Lifecycle Event)
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.106 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.111 253665 INFO nova.virt.libvirt.driver [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance spawned successfully.
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.112 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.130 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:41 compute-0 ceph-mon[75021]: pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.138 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.141 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.142 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.142 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.143 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.143 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.144 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.168 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:15:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 179 op/s
Nov 22 09:15:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.468 253665 INFO nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 19.80 seconds to spawn the instance on the hypervisor.
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.469 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.524 253665 INFO nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 20.84 seconds to build instance.
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.550 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.865 253665 DEBUG nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:15:41 compute-0 nova_compute[253661]: 2025-11-22 09:15:41.903 253665 INFO nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] instance snapshotting
Nov 22 09:15:42 compute-0 nova_compute[253661]: 2025-11-22 09:15:42.141 253665 INFO nova.virt.libvirt.driver [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Beginning live snapshot process
Nov 22 09:15:42 compute-0 nova_compute[253661]: 2025-11-22 09:15:42.747 253665 DEBUG nova.virt.libvirt.imagebackend [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:15:42 compute-0 nova_compute[253661]: 2025-11-22 09:15:42.918 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(edc3549a16324003bc4517f29f0dcf23) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:15:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 22 09:15:43 compute-0 ceph-mon[75021]: pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 179 op/s
Nov 22 09:15:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 22 09:15:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.237 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] cloning vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk@edc3549a16324003bc4517f29f0dcf23 to images/dfd5ab81-737b-4b61-b64f-2eae89761a6b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:15:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 219 op/s
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.338 253665 DEBUG nova.compute.manager [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG nova.compute.manager [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:43 compute-0 nova_compute[253661]: 2025-11-22 09:15:43.434 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] flattening images/dfd5ab81-737b-4b61-b64f-2eae89761a6b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:15:44 compute-0 ceph-mon[75021]: osdmap e220: 3 total, 3 up, 3 in
Nov 22 09:15:44 compute-0 nova_compute[253661]: 2025-11-22 09:15:44.265 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] removing snapshot(edc3549a16324003bc4517f29f0dcf23) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:15:44 compute-0 nova_compute[253661]: 2025-11-22 09:15:44.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:44 compute-0 nova_compute[253661]: 2025-11-22 09:15:44.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:45 compute-0 ceph-mon[75021]: pgmap v1526: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 219 op/s
Nov 22 09:15:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 22 09:15:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 22 09:15:45 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 22 09:15:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 104 KiB/s wr, 281 op/s
Nov 22 09:15:45 compute-0 nova_compute[253661]: 2025-11-22 09:15:45.315 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(snap) on rbd image(dfd5ab81-737b-4b61-b64f-2eae89761a6b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:15:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 22 09:15:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 22 09:15:46 compute-0 ceph-mon[75021]: osdmap e221: 3 total, 3 up, 3 in
Nov 22 09:15:46 compute-0 ceph-mon[75021]: pgmap v1528: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 104 KiB/s wr, 281 op/s
Nov 22 09:15:46 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 22 09:15:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:46 compute-0 nova_compute[253661]: 2025-11-22 09:15:46.697 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:46 compute-0 nova_compute[253661]: 2025-11-22 09:15:46.698 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:15:46 compute-0 nova_compute[253661]: 2025-11-22 09:15:46.713 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:15:47 compute-0 ceph-mon[75021]: osdmap e222: 3 total, 3 up, 3 in
Nov 22 09:15:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 238 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 261 op/s
Nov 22 09:15:47 compute-0 nova_compute[253661]: 2025-11-22 09:15:47.801 253665 INFO nova.virt.libvirt.driver [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Snapshot image upload complete
Nov 22 09:15:47 compute-0 nova_compute[253661]: 2025-11-22 09:15:47.802 253665 INFO nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 5.90 seconds to snapshot the instance on the hypervisor.
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.006 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.006 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:48 compute-0 ceph-mon[75021]: pgmap v1530: 305 pgs: 305 active+clean; 238 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 261 op/s
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.375 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:48.390 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:15:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:48.396 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.680 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.681 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.698 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.699 253665 INFO nova.compute.claims [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:15:48 compute-0 nova_compute[253661]: 2025-11-22 09:15:48.981 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 3.5 MiB/s wr, 266 op/s
Nov 22 09:15:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:15:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039658547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:49 compute-0 nova_compute[253661]: 2025-11-22 09:15:49.508 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:49 compute-0 nova_compute[253661]: 2025-11-22 09:15:49.515 253665 DEBUG nova.compute.provider_tree [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:15:49 compute-0 nova_compute[253661]: 2025-11-22 09:15:49.531 253665 DEBUG nova.scheduler.client.report [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:15:49 compute-0 nova_compute[253661]: 2025-11-22 09:15:49.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:49 compute-0 nova_compute[253661]: 2025-11-22 09:15:49.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:50 compute-0 ceph-mon[75021]: pgmap v1531: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 3.5 MiB/s wr, 266 op/s
Nov 22 09:15:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3039658547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:15:50 compute-0 ovn_controller[152872]: 2025-11-22T09:15:50Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 09:15:50 compute-0 ovn_controller[152872]: 2025-11-22T09:15:50Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 09:15:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Nov 22 09:15:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 22 09:15:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 22 09:15:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 22 09:15:51 compute-0 nova_compute[253661]: 2025-11-22 09:15:51.918 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:51 compute-0 nova_compute[253661]: 2025-11-22 09:15:51.920 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:15:52
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:15:52 compute-0 ceph-mon[75021]: pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Nov 22 09:15:52 compute-0 ceph-mon[75021]: osdmap e223: 3 total, 3 up, 3 in
Nov 22 09:15:52 compute-0 podman[301940]: 2025-11-22 09:15:52.392629196 +0000 UTC m=+0.069627053 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:15:52 compute-0 podman[301941]: 2025-11-22 09:15:52.406458155 +0000 UTC m=+0.071935639 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:15:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 272 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 102 op/s
Nov 22 09:15:54 compute-0 ceph-mon[75021]: pgmap v1534: 305 pgs: 305 active+clean; 272 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 102 op/s
Nov 22 09:15:54 compute-0 nova_compute[253661]: 2025-11-22 09:15:54.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:15:54 compute-0 nova_compute[253661]: 2025-11-22 09:15:54.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:15:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:15:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 297 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 183 op/s
Nov 22 09:15:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:55.398 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:55 compute-0 podman[301978]: 2025-11-22 09:15:55.430442499 +0000 UTC m=+0.111730698 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:15:55 compute-0 ovn_controller[152872]: 2025-11-22T09:15:55Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:3e:da 10.100.0.5
Nov 22 09:15:55 compute-0 ovn_controller[152872]: 2025-11-22T09:15:55Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:3e:da 10.100.0.5
Nov 22 09:15:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:15:56 compute-0 ceph-mon[75021]: pgmap v1535: 305 pgs: 305 active+clean; 297 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 183 op/s
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.772 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.773 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.794 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.820 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.934 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.935 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.935 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating image(s)
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.958 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:56 compute-0 nova_compute[253661]: 2025-11-22 09:15:56.980 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.002 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.006 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.035 253665 DEBUG nova.policy [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b394acfc2f44ed180b65249224f2788', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.070 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.092 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.096 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b45c203c-7ae1-436b-86d3-bfc0146dd536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:15:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 303 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 134 op/s
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.456 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b45c203c-7ae1-436b-86d3-bfc0146dd536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.514 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] resizing rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG nova.compute.manager [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-changed-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG nova.compute.manager [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing instance network info cache due to event network-changed-1da58540-88e4-4125-96c0-62be7cec281d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.560 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.560 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.645 253665 DEBUG nova.objects.instance [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'migration_context' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.657 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.657 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Ensure instance console log exists: /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:57 compute-0 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.347 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.347 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.350 253665 INFO nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Terminating instance
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.351 253665 DEBUG nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:15:58 compute-0 kernel: tap50e75895-e7 (unregistering): left promiscuous mode
Nov 22 09:15:58 compute-0 NetworkManager[48920]: <info>  [1763802958.4609] device (tap50e75895-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:15:58 compute-0 ceph-mon[75021]: pgmap v1536: 305 pgs: 305 active+clean; 303 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 134 op/s
Nov 22 09:15:58 compute-0 ovn_controller[152872]: 2025-11-22T09:15:58Z|00334|binding|INFO|Releasing lport 50e75895-e769-4e23-b607-7d52eb14fb62 from this chassis (sb_readonly=0)
Nov 22 09:15:58 compute-0 ovn_controller[152872]: 2025-11-22T09:15:58Z|00335|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 down in Southbound
Nov 22 09:15:58 compute-0 ovn_controller[152872]: 2025-11-22T09:15:58Z|00336|binding|INFO|Removing iface tap50e75895-e7 ovn-installed in OVS
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.523 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ef:1b 10.100.0.9'], port_security=['fa:16:3e:07:ef:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc32fb5484840b1b6654dffb70595ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f093055-0f73-4edf-a345-d9278a345d48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7266d51a-8673-408f-8e3f-05b71c491331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=50e75895-e769-4e23-b607-7d52eb14fb62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.524 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 50e75895-e769-4e23-b607-7d52eb14fb62 in datapath 35d4669f-adae-4ff8-9cc1-a890f0b28c31 unbound from our chassis
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.526 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35d4669f-adae-4ff8-9cc1-a890f0b28c31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d915c0-d7e4-4ccf-a66e-c6b10dde11b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.528 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 namespace which is not needed anymore
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 22 09:15:58 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Consumed 15.278s CPU time.
Nov 22 09:15:58 compute-0 systemd-machined[215941]: Machine qemu-44-instance-00000027 terminated.
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : haproxy version is 2.8.14-c23fe91
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : path to executable is /usr/sbin/haproxy
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : Exiting Master process...
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : Exiting Master process...
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [ALERT]    (299748) : Current worker (299750) exited with code 143 (Terminated)
Nov 22 09:15:58 compute-0 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : All workers exited. Exiting... (0)
Nov 22 09:15:58 compute-0 systemd[1]: libpod-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope: Deactivated successfully.
Nov 22 09:15:58 compute-0 podman[302193]: 2025-11-22 09:15:58.66787515 +0000 UTC m=+0.048365390 container died 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd3306a5b7915795f7c52c8141867f19aeb43af9b9e1b85d687973702260cc6-merged.mount: Deactivated successfully.
Nov 22 09:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb-userdata-shm.mount: Deactivated successfully.
Nov 22 09:15:58 compute-0 podman[302193]: 2025-11-22 09:15:58.707223658 +0000 UTC m=+0.087713878 container cleanup 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:15:58 compute-0 systemd[1]: libpod-conmon-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope: Deactivated successfully.
Nov 22 09:15:58 compute-0 podman[302223]: 2025-11-22 09:15:58.775898326 +0000 UTC m=+0.046342780 container remove 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9c93b00d-fb3c-4fc6-a655-68892ddf049b]: (4, ('Sat Nov 22 09:15:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 (2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb)\n2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb\nSat Nov 22 09:15:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 (2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb)\n2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47e1b9ba-1942-4fb3-9f58-1c0165328b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.786 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35d4669f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 kernel: tap35d4669f-a0: left promiscuous mode
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.792 253665 INFO nova.virt.libvirt.driver [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance destroyed successfully.
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.793 253665 DEBUG nova.objects.instance [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'resources' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.810 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[090716c2-dbc4-4360-9d0a-c665bc748863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.813 253665 DEBUG nova.virt.libvirt.vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_mode
l='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:15:13Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.813 253665 DEBUG nova.network.os_vif_util [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.814 253665 DEBUG nova.network.os_vif_util [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.815 253665 DEBUG os_vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e75895-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36e8068a-83f7-4926-954d-71aaa8a2af6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b379a088-7b44-4ef4-ac31-a57c94a4c8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.825 253665 INFO os_vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7')
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.844 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[634ad8d2-6921-437a-b928-7756da5899d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576368, 'reachable_time': 31150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302253, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.847 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:15:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.847 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec46165-9dd5-4a43-8d07-1d2deed9bc0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:15:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d35d4669f\x2dadae\x2d4ff8\x2d9cc1\x2da890f0b28c31.mount: Deactivated successfully.
Nov 22 09:15:58 compute-0 nova_compute[253661]: 2025-11-22 09:15:58.890 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Successfully created port: 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:15:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 783 KiB/s rd, 6.7 MiB/s wr, 191 op/s
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.314 253665 INFO nova.virt.libvirt.driver [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deleting instance files /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_del
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.315 253665 INFO nova.virt.libvirt.driver [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deletion of /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_del complete
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.372 253665 INFO nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 1.02 seconds to destroy the instance on the hypervisor.
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG oslo.service.loopingcall [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG nova.network.neutron [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 22 09:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 22 09:15:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.994 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updated VIF entry in instance network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:15:59 compute-0 nova_compute[253661]: 2025-11-22 09:15:59.995 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.025 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.090 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.091 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.092 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.092 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.093 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.093 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.342 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Successfully updated port: 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.357 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.358 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.358 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:16:00 compute-0 ceph-mon[75021]: pgmap v1537: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 783 KiB/s rd, 6.7 MiB/s wr, 191 op/s
Nov 22 09:16:00 compute-0 ceph-mon[75021]: osdmap e224: 3 total, 3 up, 3 in
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.551 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.588 253665 DEBUG nova.network.neutron [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.634 253665 INFO nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 1.26 seconds to deallocate network for instance.
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.652 253665 DEBUG nova.compute.manager [req-a497c4e8-7132-4b87-aed0-2caf5753a489 req-b09be5a1-68aa-4ece-985e-e4510c058e11 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-deleted-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.682 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.682 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.725 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.750 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.751 253665 DEBUG nova.compute.provider_tree [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.774 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.779 253665 DEBUG nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.800 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.826 253665 INFO nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] instance snapshotting
Nov 22 09:16:00 compute-0 nova_compute[253661]: 2025-11-22 09:16:00.898 253665 DEBUG oslo_concurrency.processutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.079 253665 INFO nova.virt.libvirt.driver [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Beginning live snapshot process
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.233 253665 DEBUG nova.virt.libvirt.imagebackend [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:16:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 787 KiB/s rd, 6.7 MiB/s wr, 192 op/s
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.306 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.323 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.324 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance network_info: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.330 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start _get_guest_xml network_info=[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.335 253665 WARNING nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:16:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.342 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.343 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.348 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.349 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.350 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:16:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687046343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.350 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.358 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.402 253665 DEBUG oslo_concurrency.processutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.411 253665 DEBUG nova.compute.provider_tree [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.422 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(7613fd608e534b8ca1bfd4aed4e348b2) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.459 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.487 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.515 253665 INFO nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Deleted allocations for instance 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1
Nov 22 09:16:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 22 09:16:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/687046343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 22 09:16:01 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.588 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] cloning vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk@7613fd608e534b8ca1bfd4aed4e348b2 to images/cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.629 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.713 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] flattening images/cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:16:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:16:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692037821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.875 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.904 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:01 compute-0 nova_compute[253661]: 2025-11-22 09:16:01.909 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.364 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 WARNING nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received unexpected event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with vm_state deleted and task_state None.
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:16:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297915625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.439 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.441 253665 DEBUG nova.virt.libvirt.vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.441 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.442 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.443 253665 DEBUG nova.objects.instance [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <uuid>b45c203c-7ae1-436b-86d3-bfc0146dd536</uuid>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <name>instance-0000002a</name>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1942299149</nova:name>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:16:01</nova:creationTime>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:user uuid="7b394acfc2f44ed180b65249224f2788">tempest-AttachInterfacesUnderV243Test-776663851-project-member</nova:user>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:project uuid="2f74a0d8c2374c07a9c9cd48b42318c3">tempest-AttachInterfacesUnderV243Test-776663851</nova:project>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <nova:port uuid="91a0d7d2-517a-4636-a7fd-86f4d72aed04">
Nov 22 09:16:02 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <system>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="serial">b45c203c-7ae1-436b-86d3-bfc0146dd536</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="uuid">b45c203c-7ae1-436b-86d3-bfc0146dd536</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </system>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <os>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </os>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <features>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </features>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b45c203c-7ae1-436b-86d3-bfc0146dd536_disk">
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config">
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:16:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:d8:22:b3"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <target dev="tap91a0d7d2-51"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/console.log" append="off"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <video>
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </video>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:16:02 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:16:02 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:16:02 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:16:02 compute-0 nova_compute[253661]: </domain>
Nov 22 09:16:02 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Preparing to wait for external event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.456 253665 DEBUG nova.virt.libvirt.vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.456 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG os_vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91a0d7d2-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91a0d7d2-51, col_values=(('external_ids', {'iface-id': '91a0d7d2-517a-4636-a7fd-86f4d72aed04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:22:b3', 'vm-uuid': 'b45c203c-7ae1-436b-86d3-bfc0146dd536'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:02 compute-0 NetworkManager[48920]: <info>  [1763802962.4641] manager: (tap91a0d7d2-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.470 253665 INFO os_vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51')
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.597 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.598 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.598 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No VIF found with MAC fa:16:3e:d8:22:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.599 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Using config drive
Nov 22 09:16:02 compute-0 ceph-mon[75021]: pgmap v1539: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 787 KiB/s rd, 6.7 MiB/s wr, 192 op/s
Nov 22 09:16:02 compute-0 ceph-mon[75021]: osdmap e225: 3 total, 3 up, 3 in
Nov 22 09:16:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3692037821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/297915625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025243603283518915 of space, bias 1.0, pg target 0.7573080985055675 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:16:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:16:02 compute-0 nova_compute[253661]: 2025-11-22 09:16:02.679 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.151 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] removing snapshot(7613fd608e534b8ca1bfd4aed4e348b2) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.286 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating config drive at /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.300 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztn3fn_9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 304 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 508 KiB/s rd, 4.9 MiB/s wr, 165 op/s
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.475 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztn3fn_9" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.511 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.517 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 22 09:16:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 22 09:16:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.740 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(snap) on rbd image(cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.885 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.885 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.888 253665 INFO nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Terminating instance
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.889 253665 DEBUG nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.938 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.939 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.947 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.948 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deleting local config drive /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config because it was imported into RBD.
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.959 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:03 compute-0 kernel: tap1da58540-88 (unregistering): left promiscuous mode
Nov 22 09:16:03 compute-0 NetworkManager[48920]: <info>  [1763802963.9860] device (tap1da58540-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:03 compute-0 ovn_controller[152872]: 2025-11-22T09:16:03Z|00337|binding|INFO|Releasing lport 1da58540-88e4-4125-96c0-62be7cec281d from this chassis (sb_readonly=0)
Nov 22 09:16:03 compute-0 ovn_controller[152872]: 2025-11-22T09:16:03Z|00338|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d down in Southbound
Nov 22 09:16:03 compute-0 ovn_controller[152872]: 2025-11-22T09:16:03Z|00339|binding|INFO|Removing iface tap1da58540-88 ovn-installed in OVS
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:03 compute-0 nova_compute[253661]: 2025-11-22 09:16:03.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:03.999 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:3e:da 10.100.0.5'], port_security=['fa:16:3e:23:3e:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '45051f55-4273-48ff-b5be-72501a74d560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82819533-0bf5-47c8-9437-4b645122166d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36ec89d9-135e-42eb-84d1-00a3805c21a1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1da58540-88e4-4125-96c0-62be7cec281d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.001 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1da58540-88e4-4125-96c0-62be7cec281d in datapath 1a784673-76a0-4c6e-a5bb-2fe1d4413dea unbound from our chassis
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.003 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a784673-76a0-4c6e-a5bb-2fe1d4413dea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d144f61-df18-45f8-baa7-807d2ec36fb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.005 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea namespace which is not needed anymore
Nov 22 09:16:04 compute-0 kernel: tap91a0d7d2-51: entered promiscuous mode
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.0228] manager: (tap91a0d7d2-51): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Nov 22 09:16:04 compute-0 systemd-udevd[302574]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:16:04 compute-0 ovn_controller[152872]: 2025-11-22T09:16:04Z|00340|binding|INFO|Claiming lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 for this chassis.
Nov 22 09:16:04 compute-0 ovn_controller[152872]: 2025-11-22T09:16:04Z|00341|binding|INFO|91a0d7d2-517a-4636-a7fd-86f4d72aed04: Claiming fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.079 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:22:b3 10.100.0.5'], port_security=['fa:16:3e:d8:22:b3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b45c203c-7ae1-436b-86d3-bfc0146dd536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7459b4dc-5141-4001-a5b6-0d7256031901', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=91a0d7d2-517a-4636-a7fd-86f4d72aed04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.0827] device (tap91a0d7d2-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.0836] device (tap91a0d7d2-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:16:04 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 22 09:16:04 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 14.615s CPU time.
Nov 22 09:16:04 compute-0 ovn_controller[152872]: 2025-11-22T09:16:04Z|00342|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 ovn-installed in OVS
Nov 22 09:16:04 compute-0 ovn_controller[152872]: 2025-11-22T09:16:04Z|00343|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 up in Southbound
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 systemd-machined[215941]: Machine qemu-46-instance-00000028 terminated.
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 systemd-machined[215941]: New machine qemu-47-instance-0000002a.
Nov 22 09:16:04 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.1225] manager: (tap1da58540-88): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.123 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.140 253665 INFO nova.virt.libvirt.driver [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance destroyed successfully.
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.142 253665 DEBUG nova.objects.instance [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'resources' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.156 253665 DEBUG nova.virt.libvirt.vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:15:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.156 253665 DEBUG nova.network.os_vif_util [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.157 253665 DEBUG nova.network.os_vif_util [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.157 253665 DEBUG os_vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.159 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da58540-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.177 253665 INFO os_vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88')
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : haproxy version is 2.8.14-c23fe91
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : path to executable is /usr/sbin/haproxy
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : Exiting Master process...
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : Exiting Master process...
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [ALERT]    (301765) : Current worker (301767) exited with code 143 (Terminated)
Nov 22 09:16:04 compute-0 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : All workers exited. Exiting... (0)
Nov 22 09:16:04 compute-0 systemd[1]: libpod-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope: Deactivated successfully.
Nov 22 09:16:04 compute-0 podman[302606]: 2025-11-22 09:16:04.200869601 +0000 UTC m=+0.058357635 container died a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0-userdata-shm.mount: Deactivated successfully.
Nov 22 09:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-024c7edb50aa40522fe8ae29b9f05311d4a6ea55593e5b288aee9bfe5a69618c-merged.mount: Deactivated successfully.
Nov 22 09:16:04 compute-0 podman[302606]: 2025-11-22 09:16:04.256965239 +0000 UTC m=+0.114453263 container cleanup a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:16:04 compute-0 systemd[1]: libpod-conmon-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope: Deactivated successfully.
Nov 22 09:16:04 compute-0 podman[302665]: 2025-11-22 09:16:04.34365708 +0000 UTC m=+0.052876891 container remove a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd025800-7c3c-4faf-a80c-ba5c1aebba40]: (4, ('Sat Nov 22 09:16:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea (a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0)\na60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0\nSat Nov 22 09:16:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea (a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0)\na60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeba918-08d0-424d-bbc4-35fac193b5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.357 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a784673-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 kernel: tap1a784673-70: left promiscuous mode
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[644062dc-e4d4-4018-8044-8ab0afae5e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[088fc0f1-8b75-49ea-8019-c6be8635bde2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a78bcec9-4ef9-4e0a-97c1-5d6f88b2615f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f59f3535-4fd1-42cf-88dc-65b4c5a6fc66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578767, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302678, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d1a784673\x2d76a0\x2d4c6e\x2da5bb\x2d2fe1d4413dea.mount: Deactivated successfully.
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.428 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.428 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3b9029-4e6e-4de8-84f6-b2f4d74b0bf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.430 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c unbound from our chassis
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.431 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.442 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 WARNING nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received unexpected event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with vm_state active and task_state deleting.
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Processing event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[338cf403-6cf3-45b1-bf6c-4a87cf792bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.449 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8ceec0c-21 in ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.451 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8ceec0c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.451 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ebb846-d1e0-4160-b010-df762ce456ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a544db52-661a-4124-b320-16e83d459d66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.467 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[33879aac-9468-455b-b224-70c52387438a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dda00fa5-7af1-4060-a1d2-85e98754b5d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.539 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1b224940-a11b-48de-b79b-fdfd7317acd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.5539] manager: (tapa8ceec0c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.552 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9eef6d04-57b6-4570-9dc1-3e8252799713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.598 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ddac8033-e8ad-422e-ab48-79437d26c9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.601 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43c5dfd8-6370-4a7a-883e-bd202520d78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.6297] device (tapa8ceec0c-20): carrier: link connected
Nov 22 09:16:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.638 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[49cd6113-9f9c-4558-94e8-16f81b264c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 22 09:16:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 22 09:16:04 compute-0 ceph-mon[75021]: pgmap v1541: 305 pgs: 305 active+clean; 304 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 508 KiB/s rd, 4.9 MiB/s wr, 165 op/s
Nov 22 09:16:04 compute-0 ceph-mon[75021]: osdmap e226: 3 total, 3 up, 3 in
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.664 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6afbb932-67dd-43ce-a8eb-e6c690df3ad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ceec0c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:0f:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581536, 'reachable_time': 31837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302706, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.691 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1b55cd-7594-49f8-9717-219600c38a6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:fe7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581536, 'tstamp': 581536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302707, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b52179f5-0b95-4c04-992a-19b5fc08a25d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ceec0c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:0f:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581536, 'reachable_time': 31837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302708, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa35c8-5db7-4079-ace4-117da5e47486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.841 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[666bc81e-1220-4bce-8cb1-60cd2bb1992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.843 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ceec0c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.844 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.844 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8ceec0c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:04 compute-0 NetworkManager[48920]: <info>  [1763802964.8479] manager: (tapa8ceec0c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 22 09:16:04 compute-0 kernel: tapa8ceec0c-20: entered promiscuous mode
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.852 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8ceec0c-20, col_values=(('external_ids', {'iface-id': '73744eaa-7d97-4c21-9fb3-4378f10417f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 ovn_controller[152872]: 2025-11-22T09:16:04Z|00344|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.871 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9b377-c312-4490-9a9d-929c3cd685f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.873 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:16:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.874 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'env', 'PROCESS_TAG=haproxy-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.928 253665 INFO nova.virt.libvirt.driver [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deleting instance files /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560_del
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.929 253665 INFO nova.virt.libvirt.driver [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deletion of /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560_del complete
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.981 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.9809964, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.982 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Started (Lifecycle Event)
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.986 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.990 253665 INFO nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 1.10 seconds to destroy the instance on the hypervisor.
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.990 253665 DEBUG oslo.service.loopingcall [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.991 253665 DEBUG nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.991 253665 DEBUG nova.network.neutron [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:16:04 compute-0 nova_compute[253661]: 2025-11-22 09:16:04.995 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.001 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.002 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.005 253665 INFO nova.virt.libvirt.driver [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance spawned successfully.
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.005 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.018 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.018 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.9812398, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.019 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Paused (Lifecycle Event)
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.026 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.027 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.027 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.035 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.038 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.988516, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.038 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Resumed (Lifecycle Event)
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.061 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.065 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.082 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.091 253665 INFO nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 8.16 seconds to spawn the instance on the hypervisor.
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.092 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.196 253665 INFO nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 16.55 seconds to build instance.
Nov 22 09:16:05 compute-0 nova_compute[253661]: 2025-11-22 09:16:05.212 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 307 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 201 op/s
Nov 22 09:16:05 compute-0 podman[302781]: 2025-11-22 09:16:05.306482965 +0000 UTC m=+0.087330248 container create 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 09:16:05 compute-0 podman[302781]: 2025-11-22 09:16:05.253274057 +0000 UTC m=+0.034121360 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:16:05 compute-0 systemd[1]: Started libpod-conmon-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope.
Nov 22 09:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014918619f881c93f9b6124aa0acf97a639f1327a1c3280fd54ede139d565034/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:05 compute-0 podman[302781]: 2025-11-22 09:16:05.425121091 +0000 UTC m=+0.205968404 container init 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:16:05 compute-0 podman[302781]: 2025-11-22 09:16:05.431959148 +0000 UTC m=+0.212806431 container start 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:16:05 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : New worker (302804) forked
Nov 22 09:16:05 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : Loading success.
Nov 22 09:16:05 compute-0 ceph-mon[75021]: osdmap e227: 3 total, 3 up, 3 in
Nov 22 09:16:06 compute-0 ovn_controller[152872]: 2025-11-22T09:16:06Z|00345|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:06 compute-0 ovn_controller[152872]: 2025-11-22T09:16:06Z|00346|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.206 253665 INFO nova.virt.libvirt.driver [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Snapshot image upload complete
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.209 253665 INFO nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 5.38 seconds to snapshot the instance on the hypervisor.
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.236 253665 DEBUG nova.network.neutron [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:06 compute-0 ovn_controller[152872]: 2025-11-22T09:16:06Z|00347|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:06 compute-0 ovn_controller[152872]: 2025-11-22T09:16:06Z|00348|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.265 253665 INFO nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 1.27 seconds to deallocate network for instance.
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.318 253665 DEBUG nova.compute.manager [req-1e333cd7-c7f8-4f59-a2ca-ca8774cba4d1 req-1fe13229-9cd3-4864-b910-51a593e1817c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-deleted-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.329 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.330 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.423 253665 DEBUG oslo_concurrency.processutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 22 09:16:06 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.732 253665 DEBUG nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.734 253665 WARNING nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received unexpected event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with vm_state active and task_state None.
Nov 22 09:16:06 compute-0 ceph-mon[75021]: pgmap v1544: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 307 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 201 op/s
Nov 22 09:16:06 compute-0 ceph-mon[75021]: osdmap e228: 3 total, 3 up, 3 in
Nov 22 09:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160454603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.900 253665 DEBUG oslo_concurrency.processutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.909 253665 DEBUG nova.compute.provider_tree [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.923 253665 DEBUG nova.scheduler.client.report [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.946 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:06 compute-0 nova_compute[253661]: 2025-11-22 09:16:06.975 253665 INFO nova.scheduler.client.report [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Deleted allocations for instance 45051f55-4273-48ff-b5be-72501a74d560
Nov 22 09:16:07 compute-0 nova_compute[253661]: 2025-11-22 09:16:07.059 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 311 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 9.1 MiB/s wr, 328 op/s
Nov 22 09:16:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4160454603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:08 compute-0 nova_compute[253661]: 2025-11-22 09:16:08.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:08 compute-0 NetworkManager[48920]: <info>  [1763802968.1262] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 22 09:16:08 compute-0 NetworkManager[48920]: <info>  [1763802968.1282] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 22 09:16:08 compute-0 nova_compute[253661]: 2025-11-22 09:16:08.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:08 compute-0 ovn_controller[152872]: 2025-11-22T09:16:08Z|00349|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:08 compute-0 ovn_controller[152872]: 2025-11-22T09:16:08Z|00350|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 09:16:08 compute-0 nova_compute[253661]: 2025-11-22 09:16:08.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 22 09:16:08 compute-0 ceph-mon[75021]: pgmap v1546: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 311 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 9.1 MiB/s wr, 328 op/s
Nov 22 09:16:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 22 09:16:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG nova.compute.manager [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG nova.compute.manager [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.002 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.002 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.0 MiB/s wr, 408 op/s
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.819 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.819 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.821 253665 INFO nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Terminating instance
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.822 253665 DEBUG nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:16:09 compute-0 ceph-mon[75021]: osdmap e229: 3 total, 3 up, 3 in
Nov 22 09:16:09 compute-0 kernel: tap14eb6b64-11 (unregistering): left promiscuous mode
Nov 22 09:16:09 compute-0 NetworkManager[48920]: <info>  [1763802969.8980] device (tap14eb6b64-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:09 compute-0 ovn_controller[152872]: 2025-11-22T09:16:09Z|00351|binding|INFO|Releasing lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 from this chassis (sb_readonly=0)
Nov 22 09:16:09 compute-0 ovn_controller[152872]: 2025-11-22T09:16:09Z|00352|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 down in Southbound
Nov 22 09:16:09 compute-0 ovn_controller[152872]: 2025-11-22T09:16:09Z|00353|binding|INFO|Removing iface tap14eb6b64-11 ovn-installed in OVS
Nov 22 09:16:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.913 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a3:a6 10.100.0.14'], port_security=['fa:16:3e:f8:a3:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad111e77e47541688eda72c9090309e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '848a987a-5baf-4ba8-9981-79089e68d473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f043f8a-2814-434a-a39b-7e1b32dc2849, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14eb6b64-11d1-4c6f-9c3c-e24463c899c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.915 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 in datapath 4ca459bc-d9ea-444a-9677-3a7c12339ffd unbound from our chassis
Nov 22 09:16:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.916 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca459bc-d9ea-444a-9677-3a7c12339ffd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:16:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.918 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe881a87-fc26-4c0d-87ae-c91a8d516286]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.918 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd namespace which is not needed anymore
Nov 22 09:16:09 compute-0 nova_compute[253661]: 2025-11-22 09:16:09.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:09 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 22 09:16:09 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 14.502s CPU time.
Nov 22 09:16:09 compute-0 systemd-machined[215941]: Machine qemu-45-instance-00000029 terminated.
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.069 253665 INFO nova.virt.libvirt.driver [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance destroyed successfully.
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.069 253665 DEBUG nova.objects.instance [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'resources' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : haproxy version is 2.8.14-c23fe91
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : path to executable is /usr/sbin/haproxy
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : Exiting Master process...
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : Exiting Master process...
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [ALERT]    (301693) : Current worker (301695) exited with code 143 (Terminated)
Nov 22 09:16:10 compute-0 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : All workers exited. Exiting... (0)
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.082 253665 DEBUG nova.virt.libvirt.vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:06Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.082 253665 DEBUG nova.network.os_vif_util [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.083 253665 DEBUG nova.network.os_vif_util [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.083 253665 DEBUG os_vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:10 compute-0 systemd[1]: libpod-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope: Deactivated successfully.
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.088 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14eb6b64-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:10 compute-0 podman[302858]: 2025-11-22 09:16:10.088728153 +0000 UTC m=+0.060701693 container died 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.101 253665 INFO os_vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11')
Nov 22 09:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0-userdata-shm.mount: Deactivated successfully.
Nov 22 09:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d134d16ff507fa45a351af632330afd5dc37dd1d818793e1cf8013d20243ecc4-merged.mount: Deactivated successfully.
Nov 22 09:16:10 compute-0 podman[302858]: 2025-11-22 09:16:10.127197179 +0000 UTC m=+0.099170719 container cleanup 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:10 compute-0 systemd[1]: libpod-conmon-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope: Deactivated successfully.
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.191 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.193 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.193 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:10 compute-0 podman[302915]: 2025-11-22 09:16:10.211124532 +0000 UTC m=+0.058232493 container remove 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46c24c35-578c-4d24-b8a5-66618269469e]: (4, ('Sat Nov 22 09:16:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd (39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0)\n39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0\nSat Nov 22 09:16:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd (39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0)\n39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[435513fd-e823-4b54-87fa-19b2d0db1d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.221 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca459bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 kernel: tap4ca459bc-d0: left promiscuous mode
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.228 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3f5b89-8b0b-45ce-9740-86ff7fb6269f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.249 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73a50dc2-7504-4437-a0e6-6859584a1c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.250 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1eefbf7c-35da-4cbf-8897-2dcba0c4da5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad603bab-58a6-4e93-a5f0-176bfeeed60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578661, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302931, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d4ca459bc\x2dd9ea\x2d444a\x2d9677\x2d3a7c12339ffd.mount: Deactivated successfully.
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.284 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:16:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.285 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2df8d5-6bd0-4463-9e42-1c18e9bccb45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.560 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.561 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.572 253665 INFO nova.virt.libvirt.driver [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deleting instance files /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_del
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.572 253665 INFO nova.virt.libvirt.driver [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deletion of /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_del complete
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.592 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.621 253665 INFO nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG oslo.service.loopingcall [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:16:10 compute-0 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG nova.network.neutron [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:16:10 compute-0 ceph-mon[75021]: pgmap v1548: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.0 MiB/s wr, 408 op/s
Nov 22 09:16:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Nov 22 09:16:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 22 09:16:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 22 09:16:11 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 22 09:16:11 compute-0 ovn_controller[152872]: 2025-11-22T09:16:11Z|00354|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.832 253665 DEBUG nova.network.neutron [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.851 253665 INFO nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 1.23 seconds to deallocate network for instance.
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.897 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.898 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.916 253665 DEBUG nova.compute.manager [req-c3fd46cb-92bf-46a1-8813-e7dfc735ca89 req-a76ed072-26a7-43f3-a54d-1dd06401414d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-deleted-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:11 compute-0 nova_compute[253661]: 2025-11-22 09:16:11.993 253665 DEBUG oslo_concurrency.processutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.275 253665 DEBUG nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.275 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 WARNING nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received unexpected event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with vm_state deleted and task_state None.
Nov 22 09:16:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:16:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:16:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:16:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:16:12 compute-0 ceph-mon[75021]: pgmap v1549: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Nov 22 09:16:12 compute-0 ceph-mon[75021]: osdmap e230: 3 total, 3 up, 3 in
Nov 22 09:16:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:16:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:16:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059825303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.483 253665 DEBUG oslo_concurrency.processutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.491 253665 DEBUG nova.compute.provider_tree [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.510 253665 DEBUG nova.scheduler.client.report [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.533 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.576 253665 INFO nova.scheduler.client.report [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Deleted allocations for instance aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad
Nov 22 09:16:12 compute-0 nova_compute[253661]: 2025-11-22 09:16:12.639 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 208 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.0 MiB/s wr, 287 op/s
Nov 22 09:16:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3059825303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:13 compute-0 nova_compute[253661]: 2025-11-22 09:16:13.791 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802958.790629, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:13 compute-0 nova_compute[253661]: 2025-11-22 09:16:13.792 253665 INFO nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Stopped (Lifecycle Event)
Nov 22 09:16:13 compute-0 nova_compute[253661]: 2025-11-22 09:16:13.810 253665 DEBUG nova.compute.manager [None req-33f9528d-8391-4f16-a5f9-dbfbc617d596 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:14 compute-0 ceph-mon[75021]: pgmap v1551: 305 pgs: 305 active+clean; 208 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.0 MiB/s wr, 287 op/s
Nov 22 09:16:14 compute-0 nova_compute[253661]: 2025-11-22 09:16:14.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:15 compute-0 nova_compute[253661]: 2025-11-22 09:16:15.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 200 op/s
Nov 22 09:16:15 compute-0 ovn_controller[152872]: 2025-11-22T09:16:15Z|00355|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:15 compute-0 nova_compute[253661]: 2025-11-22 09:16:15.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 22 09:16:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 22 09:16:16 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 22 09:16:16 compute-0 ceph-mon[75021]: pgmap v1552: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 200 op/s
Nov 22 09:16:16 compute-0 ceph-mon[75021]: osdmap e231: 3 total, 3 up, 3 in
Nov 22 09:16:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.4 KiB/s wr, 69 op/s
Nov 22 09:16:17 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 09:16:18 compute-0 nova_compute[253661]: 2025-11-22 09:16:18.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:18 compute-0 ceph-mon[75021]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.4 KiB/s wr, 69 op/s
Nov 22 09:16:19 compute-0 nova_compute[253661]: 2025-11-22 09:16:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:19 compute-0 nova_compute[253661]: 2025-11-22 09:16:19.139 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802964.1378357, 45051f55-4273-48ff-b5be-72501a74d560 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:19 compute-0 nova_compute[253661]: 2025-11-22 09:16:19.140 253665 INFO nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Stopped (Lifecycle Event)
Nov 22 09:16:19 compute-0 nova_compute[253661]: 2025-11-22 09:16:19.160 253665 DEBUG nova.compute.manager [None req-a2a0b78d-cf98-4ee4-995d-a4bc30c84f06 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 2.4 MiB/s wr, 125 op/s
Nov 22 09:16:19 compute-0 nova_compute[253661]: 2025-11-22 09:16:19.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:20 compute-0 ovn_controller[152872]: 2025-11-22T09:16:20Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 09:16:20 compute-0 ovn_controller[152872]: 2025-11-22T09:16:20Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 09:16:20 compute-0 nova_compute[253661]: 2025-11-22 09:16:20.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:20 compute-0 ceph-mon[75021]: pgmap v1555: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 2.4 MiB/s wr, 125 op/s
Nov 22 09:16:21 compute-0 nova_compute[253661]: 2025-11-22 09:16:21.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Nov 22 09:16:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:21 compute-0 nova_compute[253661]: 2025-11-22 09:16:21.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:22 compute-0 nova_compute[253661]: 2025-11-22 09:16:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:22 compute-0 ceph-mon[75021]: pgmap v1556: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:22 compute-0 ovn_controller[152872]: 2025-11-22T09:16:22Z|00356|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:22 compute-0 nova_compute[253661]: 2025-11-22 09:16:22.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:16:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 112 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 22 09:16:23 compute-0 podman[302955]: 2025-11-22 09:16:23.374506483 +0000 UTC m=+0.063029820 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:16:23 compute-0 podman[302956]: 2025-11-22 09:16:23.379425884 +0000 UTC m=+0.064703141 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.466 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:23 compute-0 nova_compute[253661]: 2025-11-22 09:16:23.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:24 compute-0 ceph-mon[75021]: pgmap v1557: 305 pgs: 305 active+clean; 112 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 22 09:16:24 compute-0 nova_compute[253661]: 2025-11-22 09:16:24.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.065 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802970.0640647, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.065 253665 INFO nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Stopped (Lifecycle Event)
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.084 253665 DEBUG nova.compute.manager [None req-e1f0e237-9bc3-4b8f-b459-c766f62b8578 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.094 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.871 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.885 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.886 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:16:25 compute-0 nova_compute[253661]: 2025-11-22 09:16:25.910 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066135902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.348 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.428 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.429 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:16:26 compute-0 podman[303015]: 2025-11-22 09:16:26.444344204 +0000 UTC m=+0.139417838 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.585 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4010MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:26 compute-0 ceph-mon[75021]: pgmap v1558: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Nov 22 09:16:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2066135902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.697 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance b45c203c-7ae1-436b-86d3-bfc0146dd536 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.697 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.698 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:16:26 compute-0 nova_compute[253661]: 2025-11-22 09:16:26.742 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121506405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.224 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.233 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.263 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.281 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 433 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.624 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.625 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:16:27 compute-0 nova_compute[253661]: 2025-11-22 09:16:27.626 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:16:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 22 09:16:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1121506405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 22 09:16:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 22 09:16:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:28 compute-0 ceph-mon[75021]: pgmap v1559: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 433 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 09:16:28 compute-0 ceph-mon[75021]: osdmap e232: 3 total, 3 up, 3 in
Nov 22 09:16:29 compute-0 nova_compute[253661]: 2025-11-22 09:16:29.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 09:16:29 compute-0 nova_compute[253661]: 2025-11-22 09:16:29.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:30 compute-0 nova_compute[253661]: 2025-11-22 09:16:30.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:30 compute-0 ceph-mon[75021]: pgmap v1561: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 09:16:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 09:16:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.562 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.563 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.579 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.658 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.659 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.669 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.670 253665 INFO nova.compute.claims [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:16:31 compute-0 sudo[303068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:31 compute-0 sudo[303068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:31 compute-0 sudo[303068]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:31 compute-0 nova_compute[253661]: 2025-11-22 09:16:31.809 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:31 compute-0 sudo[303093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:16:31 compute-0 sudo[303093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:31 compute-0 sudo[303093]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:31 compute-0 sudo[303119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:31 compute-0 sudo[303119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:31 compute-0 sudo[303119]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:32 compute-0 sudo[303163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:16:32 compute-0 sudo[303163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236845316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.257 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.266 253665 DEBUG nova.compute.provider_tree [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.280 253665 DEBUG nova.scheduler.client.report [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.310 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.311 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.382 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.382 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.408 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.429 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.515 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.516 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.517 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating image(s)
Nov 22 09:16:32 compute-0 sudo[303163]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.544 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.568 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 58d7eab2-c467-4842-8976-f07ff5ee6e00 does not exist
Nov 22 09:16:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ad118c96-d7a0-4800-968f-2a7dd491eebf does not exist
Nov 22 09:16:32 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev da6849e0-b24e-4dc6-8a89-92a6201342d7 does not exist
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:16:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.598 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.603 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:32 compute-0 sudo[303274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:32 compute-0 sudo[303274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:32 compute-0 sudo[303274]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.681 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.683 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.683 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.709 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:32 compute-0 nova_compute[253661]: 2025-11-22 09:16:32.714 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:32 compute-0 sudo[303300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:16:32 compute-0 sudo[303300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:32 compute-0 sudo[303300]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:32 compute-0 sudo[303346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:32 compute-0 sudo[303346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:32 compute-0 sudo[303346]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:32 compute-0 ceph-mon[75021]: pgmap v1562: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2236845316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:16:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:16:32 compute-0 sudo[303386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:16:32 compute-0 sudo[303386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.105 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.136 253665 DEBUG nova.policy [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ae8af2cc9f40e083473a191ddd445f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.186 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] resizing rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.214563425 +0000 UTC m=+0.056828787 container create fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.182158959 +0000 UTC m=+0.024424351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:33 compute-0 systemd[1]: Started libpod-conmon-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope.
Nov 22 09:16:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 78 KiB/s wr, 35 op/s
Nov 22 09:16:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.444784854 +0000 UTC m=+0.287050236 container init fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.460985772 +0000 UTC m=+0.303251134 container start fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:16:33 compute-0 gracious_pike[303524]: 167 167
Nov 22 09:16:33 compute-0 systemd[1]: libpod-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope: Deactivated successfully.
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.551407665 +0000 UTC m=+0.393673037 container attach fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.552556413 +0000 UTC m=+0.394821775 container died fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5023cc3e7f4737db0cb89a154e8adf10969638e8d99de612a8c8646724af487-merged.mount: Deactivated successfully.
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.772 253665 DEBUG nova.objects.instance [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.785 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.785 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Ensure instance console log exists: /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.786 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.786 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:33 compute-0 nova_compute[253661]: 2025-11-22 09:16:33.787 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:33 compute-0 podman[303472]: 2025-11-22 09:16:33.813176328 +0000 UTC m=+0.655441690 container remove fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:16:33 compute-0 systemd[1]: libpod-conmon-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope: Deactivated successfully.
Nov 22 09:16:33 compute-0 podman[303566]: 2025-11-22 09:16:33.991397068 +0000 UTC m=+0.046945345 container create 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:16:34 compute-0 podman[303566]: 2025-11-22 09:16:33.969714816 +0000 UTC m=+0.025263133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:34 compute-0 systemd[1]: Started libpod-conmon-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope.
Nov 22 09:16:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:34 compute-0 podman[303566]: 2025-11-22 09:16:34.171413463 +0000 UTC m=+0.226961760 container init 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:16:34 compute-0 podman[303566]: 2025-11-22 09:16:34.178870866 +0000 UTC m=+0.234419143 container start 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:16:34 compute-0 podman[303566]: 2025-11-22 09:16:34.216750247 +0000 UTC m=+0.272298614 container attach 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:16:34 compute-0 nova_compute[253661]: 2025-11-22 09:16:34.219 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: dc08e15e-7d04-4fac-8489-61a2d7b5a642 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:16:34 compute-0 nova_compute[253661]: 2025-11-22 09:16:34.652 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:16:34 compute-0 ceph-mon[75021]: pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 78 KiB/s wr, 35 op/s
Nov 22 09:16:34 compute-0 nova_compute[253661]: 2025-11-22 09:16:34.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:35 compute-0 nova_compute[253661]: 2025-11-22 09:16:35.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 151 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Nov 22 09:16:35 compute-0 great_matsumoto[303582]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:16:35 compute-0 great_matsumoto[303582]: --> relative data size: 1.0
Nov 22 09:16:35 compute-0 great_matsumoto[303582]: --> All data devices are unavailable
Nov 22 09:16:35 compute-0 systemd[1]: libpod-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Deactivated successfully.
Nov 22 09:16:35 compute-0 systemd[1]: libpod-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Consumed 1.133s CPU time.
Nov 22 09:16:35 compute-0 podman[303566]: 2025-11-22 09:16:35.370804281 +0000 UTC m=+1.426352578 container died 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:16:35 compute-0 nova_compute[253661]: 2025-11-22 09:16:35.382 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: 3c735a93-ffc0-4525-bd7d-7db35fe17769 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b-merged.mount: Deactivated successfully.
Nov 22 09:16:35 compute-0 podman[303566]: 2025-11-22 09:16:35.693185795 +0000 UTC m=+1.748734062 container remove 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:16:35 compute-0 systemd[1]: libpod-conmon-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Deactivated successfully.
Nov 22 09:16:35 compute-0 sudo[303386]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:35 compute-0 sudo[303625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:35 compute-0 sudo[303625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:35 compute-0 sudo[303625]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:35 compute-0 sudo[303650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:16:35 compute-0 sudo[303650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:35 compute-0 sudo[303650]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:35 compute-0 sudo[303675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:35 compute-0 sudo[303675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:35 compute-0 sudo[303675]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:36 compute-0 sudo[303700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:16:36 compute-0 sudo[303700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.092 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: dc08e15e-7d04-4fac-8489-61a2d7b5a642 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.252 253665 DEBUG nova.compute.manager [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG nova.compute.manager [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-dc08e15e-7d04-4fac-8489-61a2d7b5a642. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.254 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port dc08e15e-7d04-4fac-8489-61a2d7b5a642 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.401497794 +0000 UTC m=+0.046147085 container create 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:16:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:36 compute-0 systemd[1]: Started libpod-conmon-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope.
Nov 22 09:16:36 compute-0 nova_compute[253661]: 2025-11-22 09:16:36.449 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:16:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.380892028 +0000 UTC m=+0.025541349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.495238038 +0000 UTC m=+0.139887359 container init 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.504798143 +0000 UTC m=+0.149447444 container start 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.509512798 +0000 UTC m=+0.154162209 container attach 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:36 compute-0 dazzling_nightingale[303778]: 167 167
Nov 22 09:16:36 compute-0 systemd[1]: libpod-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope: Deactivated successfully.
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.511780925 +0000 UTC m=+0.156430226 container died 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2085170234b579f1d9a3a27e4767b0934d88101fb9bfbf18dab73ae81721aa6-merged.mount: Deactivated successfully.
Nov 22 09:16:36 compute-0 podman[303762]: 2025-11-22 09:16:36.576493245 +0000 UTC m=+0.221142556 container remove 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:16:36 compute-0 systemd[1]: libpod-conmon-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope: Deactivated successfully.
Nov 22 09:16:36 compute-0 podman[303800]: 2025-11-22 09:16:36.748985155 +0000 UTC m=+0.042019335 container create a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:16:36 compute-0 systemd[1]: Started libpod-conmon-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope.
Nov 22 09:16:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:36 compute-0 podman[303800]: 2025-11-22 09:16:36.732652403 +0000 UTC m=+0.025686603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:36 compute-0 podman[303800]: 2025-11-22 09:16:36.837175052 +0000 UTC m=+0.130209252 container init a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:16:36 compute-0 podman[303800]: 2025-11-22 09:16:36.843459817 +0000 UTC m=+0.136493997 container start a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:16:36 compute-0 podman[303800]: 2025-11-22 09:16:36.85011356 +0000 UTC m=+0.143147740 container attach a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:16:36 compute-0 ceph-mon[75021]: pgmap v1564: 305 pgs: 305 active+clean; 151 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Nov 22 09:16:37 compute-0 nova_compute[253661]: 2025-11-22 09:16:37.142 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:37 compute-0 nova_compute[253661]: 2025-11-22 09:16:37.160 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:37 compute-0 nova_compute[253661]: 2025-11-22 09:16:37.250 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:16:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]: {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     "0": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "devices": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "/dev/loop3"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             ],
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_name": "ceph_lv0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_size": "21470642176",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "name": "ceph_lv0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "tags": {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_name": "ceph",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.crush_device_class": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.encrypted": "0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_id": "0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.vdo": "0"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             },
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "vg_name": "ceph_vg0"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         }
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     ],
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     "1": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "devices": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "/dev/loop4"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             ],
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_name": "ceph_lv1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_size": "21470642176",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "name": "ceph_lv1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "tags": {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_name": "ceph",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.crush_device_class": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.encrypted": "0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_id": "1",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.vdo": "0"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             },
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "vg_name": "ceph_vg1"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         }
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     ],
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     "2": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "devices": [
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "/dev/loop5"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             ],
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_name": "ceph_lv2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_size": "21470642176",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "name": "ceph_lv2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "tags": {
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.cluster_name": "ceph",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.crush_device_class": "",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.encrypted": "0",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osd_id": "2",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:                 "ceph.vdo": "0"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             },
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "type": "block",
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:             "vg_name": "ceph_vg2"
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:         }
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]:     ]
Nov 22 09:16:37 compute-0 crazy_lichterman[303817]: }
Nov 22 09:16:37 compute-0 systemd[1]: libpod-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope: Deactivated successfully.
Nov 22 09:16:37 compute-0 podman[303800]: 2025-11-22 09:16:37.650709258 +0000 UTC m=+0.943743458 container died a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53-merged.mount: Deactivated successfully.
Nov 22 09:16:37 compute-0 podman[303800]: 2025-11-22 09:16:37.736953417 +0000 UTC m=+1.029987597 container remove a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:16:37 compute-0 systemd[1]: libpod-conmon-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope: Deactivated successfully.
Nov 22 09:16:37 compute-0 sudo[303700]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:37 compute-0 sudo[303840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:37 compute-0 sudo[303840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:37 compute-0 sudo[303840]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:37 compute-0 sudo[303865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:16:37 compute-0 sudo[303865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:37 compute-0 sudo[303865]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:38 compute-0 sudo[303890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:38 compute-0 sudo[303890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:38 compute-0 sudo[303890]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:38 compute-0 sudo[303915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:16:38 compute-0 sudo[303915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.223 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: 3c735a93-ffc0-4525-bd7d-7db35fe17769 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.234 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.235 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.235 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.368 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-babfaba0-2c0a-4eb0-adc0-3473b0b80a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:38 compute-0 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.449984482 +0000 UTC m=+0.052005209 container create 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:16:38 compute-0 systemd[1]: Started libpod-conmon-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope.
Nov 22 09:16:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.425180143 +0000 UTC m=+0.027200850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.537712528 +0000 UTC m=+0.139733235 container init 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.547188091 +0000 UTC m=+0.149208778 container start 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:16:38 compute-0 crazy_goodall[303997]: 167 167
Nov 22 09:16:38 compute-0 systemd[1]: libpod-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope: Deactivated successfully.
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.550924712 +0000 UTC m=+0.152945399 container attach 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:16:38 compute-0 conmon[303997]: conmon 44b425e4b0339ac00bb7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope/container/memory.events
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.555659019 +0000 UTC m=+0.157679706 container died 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa7f143f749beb83b340bbf9c7f195a66dc36f92217024753c11ae824c5f6df-merged.mount: Deactivated successfully.
Nov 22 09:16:38 compute-0 podman[303981]: 2025-11-22 09:16:38.593858958 +0000 UTC m=+0.195879665 container remove 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:16:38 compute-0 systemd[1]: libpod-conmon-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope: Deactivated successfully.
Nov 22 09:16:38 compute-0 podman[304020]: 2025-11-22 09:16:38.772883648 +0000 UTC m=+0.050580164 container create 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:16:38 compute-0 systemd[1]: Started libpod-conmon-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope.
Nov 22 09:16:38 compute-0 podman[304020]: 2025-11-22 09:16:38.751140563 +0000 UTC m=+0.028837099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:16:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:38 compute-0 podman[304020]: 2025-11-22 09:16:38.869679107 +0000 UTC m=+0.147375653 container init 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:16:38 compute-0 podman[304020]: 2025-11-22 09:16:38.883535568 +0000 UTC m=+0.161232094 container start 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 22 09:16:38 compute-0 podman[304020]: 2025-11-22 09:16:38.88725697 +0000 UTC m=+0.164953516 container attach 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:16:38 compute-0 ceph-mon[75021]: pgmap v1565: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Nov 22 09:16:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 22 09:16:39 compute-0 nova_compute[253661]: 2025-11-22 09:16:39.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:39 compute-0 brave_euler[304037]: {
Nov 22 09:16:39 compute-0 brave_euler[304037]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_id": 1,
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "type": "bluestore"
Nov 22 09:16:39 compute-0 brave_euler[304037]:     },
Nov 22 09:16:39 compute-0 brave_euler[304037]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_id": 0,
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "type": "bluestore"
Nov 22 09:16:39 compute-0 brave_euler[304037]:     },
Nov 22 09:16:39 compute-0 brave_euler[304037]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_id": 2,
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:16:39 compute-0 brave_euler[304037]:         "type": "bluestore"
Nov 22 09:16:39 compute-0 brave_euler[304037]:     }
Nov 22 09:16:39 compute-0 brave_euler[304037]: }
Nov 22 09:16:40 compute-0 systemd[1]: libpod-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Deactivated successfully.
Nov 22 09:16:40 compute-0 podman[304020]: 2025-11-22 09:16:40.028236903 +0000 UTC m=+1.305933419 container died 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:16:40 compute-0 systemd[1]: libpod-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Consumed 1.146s CPU time.
Nov 22 09:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a-merged.mount: Deactivated successfully.
Nov 22 09:16:40 compute-0 podman[304020]: 2025-11-22 09:16:40.091839676 +0000 UTC m=+1.369536192 container remove 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:16:40 compute-0 nova_compute[253661]: 2025-11-22 09:16:40.099 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:40 compute-0 systemd[1]: libpod-conmon-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Deactivated successfully.
Nov 22 09:16:40 compute-0 sudo[303915]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:16:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:16:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bb376bef-3335-4d6d-ac13-cf66ff798504 does not exist
Nov 22 09:16:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 72b66b13-f5c7-4861-9de6-832d5d658a93 does not exist
Nov 22 09:16:40 compute-0 sudo[304086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:16:40 compute-0 sudo[304086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:40 compute-0 sudo[304086]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:40 compute-0 sudo[304111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:16:40 compute-0 sudo[304111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:16:40 compute-0 sudo[304111]: pam_unix(sudo:session): session closed for user root
Nov 22 09:16:41 compute-0 ceph-mon[75021]: pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 22 09:16:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:16:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:16:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.570 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.598 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.598 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance network_info: |[{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.599 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.600 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.607 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start _get_guest_xml network_info=[{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.614 253665 WARNING nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.621 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.622 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.625 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.626 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.626 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:16:41 compute-0 nova_compute[253661]: 2025-11-22 09:16:41.632 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:16:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304591467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.132 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 22 09:16:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3304591467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 22 09:16:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.169 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.176 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.310 253665 DEBUG nova.objects.instance [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'flavor' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.335 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.336 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:16:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495675620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.669 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.671 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.672 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.673 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.674 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.675 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.675 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.676 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.676 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.677 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.678 253665 DEBUG nova.objects.instance [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.693 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <uuid>4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</uuid>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <name>instance-0000002b</name>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestMultiNic-server-619809161</nova:name>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:16:41</nova:creationTime>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:user uuid="c5ae8af2cc9f40e083473a191ddd445f">tempest-ServersTestMultiNic-1064785551-project-member</nova:user>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:project uuid="2d156ca65e214b4aacdf111fd47dc4f6">tempest-ServersTestMultiNic-1064785551</nova:project>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:port uuid="dc08e15e-7d04-4fac-8489-61a2d7b5a642">
Nov 22 09:16:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.172" ipVersion="4"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:port uuid="babfaba0-2c0a-4eb0-adc0-3473b0b80a08">
Nov 22 09:16:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.1.80" ipVersion="4"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <nova:port uuid="3c735a93-ffc0-4525-bd7d-7db35fe17769">
Nov 22 09:16:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.229" ipVersion="4"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="serial">4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="uuid">4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk">
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config">
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:16:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:fb:74:e4"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <target dev="tapdc08e15e-7d"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:10:00:b1"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <target dev="tapbabfaba0-2c"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:eb:17:78"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <target dev="tap3c735a93-ff"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/console.log" append="off"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:16:42 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:16:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:16:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:16:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:16:42 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.695 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.696 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.696 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.699 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.699 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.700 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.700 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc08e15e-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc08e15e-7d, col_values=(('external_ids', {'iface-id': 'dc08e15e-7d04-4fac-8489-61a2d7b5a642', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:74:e4', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 NetworkManager[48920]: <info>  [1763803002.7110] manager: (tapdc08e15e-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.722 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d')
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.723 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.723 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.724 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.724 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.728 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbabfaba0-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbabfaba0-2c, col_values=(('external_ids', {'iface-id': 'babfaba0-2c0a-4eb0-adc0-3473b0b80a08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:00:b1', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 NetworkManager[48920]: <info>  [1763803002.7321] manager: (tapbabfaba0-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.733 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.743 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c')
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.745 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.745 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.747 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.747 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c735a93-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c735a93-ff, col_values=(('external_ids', {'iface-id': '3c735a93-ffc0-4525-bd7d-7db35fe17769', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:17:78', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 NetworkManager[48920]: <info>  [1763803002.7521] manager: (tap3c735a93-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.753 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.763 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff')
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.819 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:fb:74:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:10:00:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:eb:17:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.821 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Using config drive
Nov 22 09:16:42 compute-0 nova_compute[253661]: 2025-11-22 09:16:42.844 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:43 compute-0 ceph-mon[75021]: pgmap v1567: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:16:43 compute-0 ceph-mon[75021]: osdmap e233: 3 total, 3 up, 3 in
Nov 22 09:16:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/495675620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:16:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.507 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating config drive at /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.514 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpang54t2k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.667 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpang54t2k" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.698 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.702 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.913 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.914 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deleting local config drive /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config because it was imported into RBD.
Nov 22 09:16:43 compute-0 NetworkManager[48920]: <info>  [1763803003.9870] manager: (tapdc08e15e-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Nov 22 09:16:43 compute-0 ovn_controller[152872]: 2025-11-22T09:16:43Z|00357|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:43 compute-0 kernel: tapdc08e15e-7d: entered promiscuous mode
Nov 22 09:16:43 compute-0 nova_compute[253661]: 2025-11-22 09:16:43.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:43 compute-0 ovn_controller[152872]: 2025-11-22T09:16:43Z|00358|binding|INFO|Claiming lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 for this chassis.
Nov 22 09:16:43 compute-0 ovn_controller[152872]: 2025-11-22T09:16:43Z|00359|binding|INFO|dc08e15e-7d04-4fac-8489-61a2d7b5a642: Claiming fa:16:3e:fb:74:e4 10.100.0.172
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.005 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:74:e4 10.100.0.172'], port_security=['fa:16:3e:fb:74:e4 10.100.0.172'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.172/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dc08e15e-7d04-4fac-8489-61a2d7b5a642) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.007 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dc08e15e-7d04-4fac-8489-61a2d7b5a642 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 bound to our chassis
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.009 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0157] manager: (tapbabfaba0-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Nov 22 09:16:44 compute-0 systemd-udevd[304278]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:16:44 compute-0 systemd-udevd[304280]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.028 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updated VIF entry in instance network info cache for port babfaba0-2c0a-4eb0-adc0-3473b0b80a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.029 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5900503-5b9f-43ed-bc27-2750243ef208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.028 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc43f3b8e-c1 in ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.030 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc43f3b8e-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccb47d9-8ad9-43cd-9add-4699b55b319b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6424717-259d-4302-a1ba-c9a09b784b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0372] manager: (tap3c735a93-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Nov 22 09:16:44 compute-0 systemd-udevd[304286]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.039 253665 DEBUG nova.network.neutron [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-3c735a93-ffc0-4525-bd7d-7db35fe17769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.044 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.044 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port 3c735a93-ffc0-4525-bd7d-7db35fe17769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:44 compute-0 kernel: tapbabfaba0-2c: entered promiscuous mode
Nov 22 09:16:44 compute-0 kernel: tap3c735a93-ff: entered promiscuous mode
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0481] device (tapdc08e15e-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0544] device (tapdc08e15e-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0552] device (tapbabfaba0-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0585] device (tapbabfaba0-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0599] device (tap3c735a93-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.054 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df8b9d52-41c9-44d9-b9d2-6c899ef1c922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00360|binding|INFO|Claiming lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for this chassis.
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00361|binding|INFO|babfaba0-2c0a-4eb0-adc0-3473b0b80a08: Claiming fa:16:3e:10:00:b1 10.100.1.80
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00362|binding|INFO|Claiming lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 for this chassis.
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00363|binding|INFO|3c735a93-ffc0-4525-bd7d-7db35fe17769: Claiming fa:16:3e:eb:17:78 10.100.0.229
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.0639] device (tap3c735a93-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.068 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:17:78 10.100.0.229'], port_security=['fa:16:3e:eb:17:78 10.100.0.229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.229/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3c735a93-ffc0-4525-bd7d-7db35fe17769) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.069 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:00:b1 10.100.1.80'], port_security=['fa:16:3e:10:00:b1 10.100.1.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.80/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=529c2359-2293-4f9c-a99f-590f8fe2f28e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=babfaba0-2c0a-4eb0-adc0-3473b0b80a08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.087 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dffaeb43-bb68-4abe-8d93-6402614cc386]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 systemd-machined[215941]: New machine qemu-48-instance-0000002b.
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00364|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 09:16:44 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00365|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 ovn-installed in OVS
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00366|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 up in Southbound
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.129 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[130c8af9-bf90-4867-b343-8d154af8cd32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.1384] manager: (tapc43f3b8e-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/161)
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.140 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2e6895-250d-4e59-b5eb-a3ff87df64a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.148 253665 DEBUG nova.compute.manager [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.148 253665 DEBUG nova.compute.manager [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.149 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00367|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 ovn-installed in OVS
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00368|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 up in Southbound
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00369|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 ovn-installed in OVS
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00370|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 up in Southbound
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.164 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.182 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52c476f4-8eb3-4462-ba22-488c4d8b65a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.184 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20616cdd-e0bf-4f29-870a-a8f726565e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.2095] device (tapc43f3b8e-c0): carrier: link connected
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.216 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6f797977-b96d-4e31-a5a7-b6bef105cbbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.236 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00434859-482f-40dd-a61c-b8c8c97878f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304319, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38bbf224-611b-43f2-a4a0-a5ec28304124]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:70a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585494, 'tstamp': 585494}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304320, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.273 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce923e1e-fcc6-4060-954a-e96fd1418eea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304321, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58aa705c-0d18-4416-8720-176db68a8e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d20514f7-0b42-464b-98b5-a704f5d36782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.375 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.375 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.376 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 NetworkManager[48920]: <info>  [1763803004.3790] manager: (tapc43f3b8e-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 22 09:16:44 compute-0 kernel: tapc43f3b8e-c0: entered promiscuous mode
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.391 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:44 compute-0 ovn_controller[152872]: 2025-11-22T09:16:44Z|00371|binding|INFO|Releasing lport 61d4e08f-2ccc-4601-a2e7-6cb33cc906ee from this chassis (sb_readonly=0)
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.408 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b76b669-f70f-4acd-9df1-95a2f3e54b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.410 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'env', 'PROCESS_TAG=haproxy-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:16:44 compute-0 podman[304396]: 2025-11-22 09:16:44.779152922 +0000 UTC m=+0.052752128 container create ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.793 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803004.7922537, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.793 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Started (Lifecycle Event)
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.810 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.816 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803004.7930264, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.816 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Paused (Lifecycle Event)
Nov 22 09:16:44 compute-0 systemd[1]: Started libpod-conmon-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope.
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.832 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.836 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:16:44 compute-0 podman[304396]: 2025-11-22 09:16:44.75021716 +0000 UTC m=+0.023816386 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.850 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:16:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c249f8811210d708abd429b642e58a08ae9985262779c5dbd5ea97093529a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:44 compute-0 podman[304396]: 2025-11-22 09:16:44.876973256 +0000 UTC m=+0.150572502 container init ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:16:44 compute-0 podman[304396]: 2025-11-22 09:16:44.884356637 +0000 UTC m=+0.157955843 container start ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:44 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : New worker (304418) forked
Nov 22 09:16:44 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : Loading success.
Nov 22 09:16:44 compute-0 nova_compute[253661]: 2025-11-22 09:16:44.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.984 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3c735a93-ffc0-4525-bd7d-7db35fe17769 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis
Nov 22 09:16:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.986 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.005 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3674444-d4db-4de2-af71-3adfd63114bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.042 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd5a3c3-3650-496f-94e6-cfe823519734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.047 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8861a-d31d-4879-ba43-b2cb18aa47ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.088 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3efefb-80dc-483a-8952-d41fd27c3fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c45e715-1d6a-4aac-b867-1ed0d59d0fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304432, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.129 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[726a18a0-3d61-4e4e-8221-ac07e913e26a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585507, 'tstamp': 585507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304433, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585510, 'tstamp': 585510}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304433, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.131 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 nova_compute[253661]: 2025-11-22 09:16:45.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.138 162862 INFO neutron.agent.ovn.metadata.agent [-] Port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in datapath 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c unbound from our chassis
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.139 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.153 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51dc0a0f-f4a3-421e-b0cf-bb3735cc8ca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.154 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d8f3ef5-81 in ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.156 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d8f3ef5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc767047-bb5f-4837-83b9-fc4b8917631f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.157 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc12a0e1-d603-4bf6-9e68-39d38c3ae17b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.170 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fb038518-da90-46af-a1d2-b5b6fd612fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ceph-mon[75021]: pgmap v1569: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9f565f71-6808-4b05-9b8e-02f1c9200c3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.237 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[94c91ddd-d78b-4bb0-a5dd-47c8e95192c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[585414e4-a5dd-413d-9643-f0ae773b3d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 NetworkManager[48920]: <info>  [1763803005.2445] manager: (tap2d8f3ef5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Nov 22 09:16:45 compute-0 systemd-udevd[304309]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.284 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[03ec2fb7-bddf-4c7f-bbbb-773bbac66dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.288 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[30d434a1-e23a-491a-92e8-87d56bc84329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 NetworkManager[48920]: <info>  [1763803005.3141] device (tap2d8f3ef5-80): carrier: link connected
Nov 22 09:16:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 884 KiB/s wr, 34 op/s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.322 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[89dcd2f8-1884-4727-9d21-05bff8f7eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.344 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd16f70c-4699-4099-bd9f-2365a4b0c238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d8f3ef5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:dd:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585605, 'reachable_time': 24107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304444, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be0e5068-b26e-426d-a641-92db3c5c4354]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:ddc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585605, 'tstamp': 585605}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304445, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.385 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3e663a-3fdc-4f1e-94fe-bf2b9dcccc71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d8f3ef5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:dd:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585605, 'reachable_time': 24107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304446, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e72f5e-ae05-4218-a3cb-4b0552e31263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.504 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f261481-e6ba-4b92-ad15-b6c7c48d5110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d8f3ef5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.506 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d8f3ef5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 nova_compute[253661]: 2025-11-22 09:16:45.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:45 compute-0 NetworkManager[48920]: <info>  [1763803005.5091] manager: (tap2d8f3ef5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Nov 22 09:16:45 compute-0 kernel: tap2d8f3ef5-80: entered promiscuous mode
Nov 22 09:16:45 compute-0 nova_compute[253661]: 2025-11-22 09:16:45.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.511 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d8f3ef5-80, col_values=(('external_ids', {'iface-id': '4afb8ac6-882e-4ceb-b8f8-84b902cc9bc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:45 compute-0 nova_compute[253661]: 2025-11-22 09:16:45.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:45 compute-0 ovn_controller[152872]: 2025-11-22T09:16:45Z|00372|binding|INFO|Releasing lport 4afb8ac6-882e-4ceb-b8f8-84b902cc9bc9 from this chassis (sb_readonly=0)
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.536 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:16:45 compute-0 nova_compute[253661]: 2025-11-22 09:16:45.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.537 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21c855dc-280a-4392-bc72-b95e9b22997f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.537 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:16:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.538 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'env', 'PROCESS_TAG=haproxy-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:16:45 compute-0 podman[304478]: 2025-11-22 09:16:45.954187981 +0000 UTC m=+0.063942133 container create b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:16:45 compute-0 systemd[1]: Started libpod-conmon-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope.
Nov 22 09:16:46 compute-0 podman[304478]: 2025-11-22 09:16:45.914479216 +0000 UTC m=+0.024233388 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:16:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/558a2bf28aa725adc01b2257b2fa48656e975a746fc020a7ec5246f4f99d866b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:16:46 compute-0 podman[304478]: 2025-11-22 09:16:46.040660127 +0000 UTC m=+0.150414279 container init b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:16:46 compute-0 podman[304478]: 2025-11-22 09:16:46.048261863 +0000 UTC m=+0.158016015 container start b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 09:16:46 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : New worker (304499) forked
Nov 22 09:16:46 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : Loading success.
Nov 22 09:16:46 compute-0 ceph-mon[75021]: pgmap v1570: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 884 KiB/s wr, 34 op/s
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.315 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.317 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.317 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No event matching network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 in dict_keys([('network-vif-plugged', 'babfaba0-2c0a-4eb0-adc0-3473b0b80a08'), ('network-vif-plugged', '3c735a93-ffc0-4525-bd7d-7db35fe17769')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 WARNING nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with vm_state building and task_state spawning.
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.321 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.380 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updated VIF entry in instance network info cache for port 3c735a93-ffc0-4525-bd7d-7db35fe17769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.381 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.402 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.604 253665 DEBUG nova.network.neutron [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.625 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.625 253665 DEBUG nova.compute.manager [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.626 253665 DEBUG nova.compute.manager [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] network_info to inject: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.629 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:46 compute-0 nova_compute[253661]: 2025-11-22 09:16:46.629 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.6 KiB/s wr, 31 op/s
Nov 22 09:16:47 compute-0 nova_compute[253661]: 2025-11-22 09:16:47.709 253665 DEBUG nova.objects.instance [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'flavor' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:47 compute-0 nova_compute[253661]: 2025-11-22 09:16:47.728 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:47 compute-0 nova_compute[253661]: 2025-11-22 09:16:47.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.163 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.164 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.176 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.177 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:48 compute-0 ceph-mon[75021]: pgmap v1571: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.6 KiB/s wr, 31 op/s
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.461 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.461 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.462 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No event matching network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in dict_keys([('network-vif-plugged', '3c735a93-ffc0-4525-bd7d-7db35fe17769')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 WARNING nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with vm_state building and task_state spawning.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.467 253665 WARNING nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 for instance with vm_state building and task_state spawning.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.468 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.474 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803008.4741724, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.474 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Resumed (Lifecycle Event)
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.477 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.480 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance spawned successfully.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.480 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.506 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.512 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.512 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.513 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.514 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.514 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.515 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.521 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.557 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.589 253665 INFO nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 16.07 seconds to spawn the instance on the hypervisor.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.590 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.668 253665 INFO nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 17.04 seconds to build instance.
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.685 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:48 compute-0 nova_compute[253661]: 2025-11-22 09:16:48.952 253665 DEBUG nova.network.neutron [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.041 253665 DEBUG nova.compute.manager [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.041 253665 DEBUG nova.compute.manager [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.042 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:16:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:49.054 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:49.056 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:16:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.839 253665 DEBUG nova.network.neutron [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.853 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.854 253665 DEBUG nova.compute.manager [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.854 253665 DEBUG nova.compute.manager [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] network_info to inject: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.860 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.861 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:16:49 compute-0 nova_compute[253661]: 2025-11-22 09:16:49.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:49 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.071 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.071 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.073 253665 INFO nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Terminating instance
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.075 253665 DEBUG nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:16:50 compute-0 kernel: tap91a0d7d2-51 (unregistering): left promiscuous mode
Nov 22 09:16:50 compute-0 NetworkManager[48920]: <info>  [1763803010.1424] device (tap91a0d7d2-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00373|binding|INFO|Releasing lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 from this chassis (sb_readonly=0)
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00374|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 down in Southbound
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00375|binding|INFO|Removing iface tap91a0d7d2-51 ovn-installed in OVS
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.158 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:22:b3 10.100.0.5'], port_security=['fa:16:3e:d8:22:b3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b45c203c-7ae1-436b-86d3-bfc0146dd536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7459b4dc-5141-4001-a5b6-0d7256031901', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=91a0d7d2-517a-4636-a7fd-86f4d72aed04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c unbound from our chassis
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.163 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.168 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[203c6820-1983-41d6-aa51-796e35681b56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.170 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c namespace which is not needed anymore
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 22 09:16:50 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 15.808s CPU time.
Nov 22 09:16:50 compute-0 systemd-machined[215941]: Machine qemu-47-instance-0000002a terminated.
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.321 253665 INFO nova.virt.libvirt.driver [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance destroyed successfully.
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.322 253665 DEBUG nova.objects.instance [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'resources' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.334 253665 DEBUG nova.virt.libvirt.vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.334 253665 DEBUG nova.network.os_vif_util [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.336 253665 DEBUG nova.network.os_vif_util [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.337 253665 DEBUG os_vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:50 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : haproxy version is 2.8.14-c23fe91
Nov 22 09:16:50 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : path to executable is /usr/sbin/haproxy
Nov 22 09:16:50 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [WARNING]  (302802) : Exiting Master process...
Nov 22 09:16:50 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [ALERT]    (302802) : Current worker (302804) exited with code 143 (Terminated)
Nov 22 09:16:50 compute-0 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [WARNING]  (302802) : All workers exited. Exiting... (0)
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.341 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91a0d7d2-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 systemd[1]: libpod-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope: Deactivated successfully.
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.349 253665 INFO os_vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51')
Nov 22 09:16:50 compute-0 podman[304532]: 2025-11-22 09:16:50.351332685 +0000 UTC m=+0.061847701 container died 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-014918619f881c93f9b6124aa0acf97a639f1327a1c3280fd54ede139d565034-merged.mount: Deactivated successfully.
Nov 22 09:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557-userdata-shm.mount: Deactivated successfully.
Nov 22 09:16:50 compute-0 ceph-mon[75021]: pgmap v1572: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 09:16:50 compute-0 podman[304532]: 2025-11-22 09:16:50.427547088 +0000 UTC m=+0.138062104 container cleanup 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:16:50 compute-0 systemd[1]: libpod-conmon-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope: Deactivated successfully.
Nov 22 09:16:50 compute-0 podman[304590]: 2025-11-22 09:16:50.513776708 +0000 UTC m=+0.059803141 container remove 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.520 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eeed5dea-4c6d-4c76-aa4e-8f4803dbc78f]: (4, ('Sat Nov 22 09:16:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c (7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557)\n7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557\nSat Nov 22 09:16:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c (7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557)\n7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.523 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6e174701-8754-4ae3-a903-7815c82b4940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.525 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ceec0c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 kernel: tapa8ceec0c-20: left promiscuous mode
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.550 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c59bbe6-bf96-4f06-a6f7-953d30021faf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c2856f4d-323d-4e29-a8a2-6b8670e95627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb18198-947d-485c-bb72-5277502ee062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5987fb4-2cc1-474f-9b39-fde2078dcb7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581526, 'reachable_time': 31766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304606, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 systemd[1]: run-netns-ovnmeta\x2da8ceec0c\x2d2cf6\x2d459a\x2da4d7\x2daaf770041b6c.mount: Deactivated successfully.
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.607 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.608 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbea081-c48e-4491-a4b1-11165c6b00b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.858 253665 INFO nova.virt.libvirt.driver [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deleting instance files /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536_del
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.859 253665 INFO nova.virt.libvirt.driver [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deletion of /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536_del complete
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.909 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.909 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.911 253665 INFO nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Terminating instance
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.912 253665 DEBUG nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.916 253665 INFO nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.916 253665 DEBUG oslo.service.loopingcall [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.917 253665 DEBUG nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.917 253665 DEBUG nova.network.neutron [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:16:50 compute-0 kernel: tapdc08e15e-7d (unregistering): left promiscuous mode
Nov 22 09:16:50 compute-0 NetworkManager[48920]: <info>  [1763803010.9550] device (tapdc08e15e-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00376|binding|INFO|Releasing lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 from this chassis (sb_readonly=0)
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00377|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 down in Southbound
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00378|binding|INFO|Removing iface tapdc08e15e-7d ovn-installed in OVS
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.965 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:74:e4 10.100.0.172'], port_security=['fa:16:3e:fb:74:e4 10.100.0.172'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.172/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dc08e15e-7d04-4fac-8489-61a2d7b5a642) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.967 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dc08e15e-7d04-4fac-8489-61a2d7b5a642 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.968 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 09:16:50 compute-0 kernel: tapbabfaba0-2c (unregistering): left promiscuous mode
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 NetworkManager[48920]: <info>  [1763803010.9828] device (tapbabfaba0-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1026d72-2a6e-4040-9c30-5114d0c18273]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00379|binding|INFO|Releasing lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 from this chassis (sb_readonly=0)
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00380|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 down in Southbound
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_controller[152872]: 2025-11-22T09:16:50Z|00381|binding|INFO|Removing iface tapbabfaba0-2c ovn-installed in OVS
Nov 22 09:16:50 compute-0 nova_compute[253661]: 2025-11-22 09:16:50.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.996 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:00:b1 10.100.1.80'], port_security=['fa:16:3e:10:00:b1 10.100.1.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.80/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=529c2359-2293-4f9c-a99f-590f8fe2f28e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=babfaba0-2c0a-4eb0-adc0-3473b0b80a08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 kernel: tap3c735a93-ff (unregistering): left promiscuous mode
Nov 22 09:16:51 compute-0 NetworkManager[48920]: <info>  [1763803011.0173] device (tap3c735a93-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.037 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6954e6-2754-46ec-8a31-ff0ec3714a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.041 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc4ea39-9c58-4783-8d64-c5b15752d94a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_controller[152872]: 2025-11-22T09:16:51Z|00382|binding|INFO|Releasing lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 from this chassis (sb_readonly=0)
Nov 22 09:16:51 compute-0 ovn_controller[152872]: 2025-11-22T09:16:51Z|00383|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 down in Southbound
Nov 22 09:16:51 compute-0 ovn_controller[152872]: 2025-11-22T09:16:51Z|00384|binding|INFO|Removing iface tap3c735a93-ff ovn-installed in OVS
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.061 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:17:78 10.100.0.229'], port_security=['fa:16:3e:eb:17:78 10.100.0.229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.229/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3c735a93-ffc0-4525-bd7d-7db35fe17769) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[28449923-ac2f-40b6-a69a-f369d1a1c7b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 22 09:16:51 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 3.230s CPU time.
Nov 22 09:16:51 compute-0 systemd-machined[215941]: Machine qemu-48-instance-0000002b terminated.
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.102 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18de082a-dd71-45a6-b88e-a9dcbf246c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304628, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe9c762-9eb7-4a69-8a12-346a3745ebb5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585507, 'tstamp': 585507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304629, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585510, 'tstamp': 585510}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304629, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.126 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.136 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.136 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:51 compute-0 NetworkManager[48920]: <info>  [1763803011.1392] manager: (tapdc08e15e-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.147 162862 INFO neutron.agent.ovn.metadata.agent [-] Port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in datapath 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c unbound from our chassis
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.150 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.152 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f67f046f-5fef-4fc0-a3f0-8e3b673b1d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.153 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c namespace which is not needed anymore
Nov 22 09:16:51 compute-0 NetworkManager[48920]: <info>  [1763803011.1546] manager: (tapbabfaba0-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Nov 22 09:16:51 compute-0 NetworkManager[48920]: <info>  [1763803011.1686] manager: (tap3c735a93-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.184 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance destroyed successfully.
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.185 253665 DEBUG nova.objects.instance [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'resources' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.204 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.204 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.205 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.205 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.207 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc08e15e-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.219 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d')
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.220 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.220 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.221 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.221 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.223 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbabfaba0-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.226 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.232 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c')
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.233 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.233 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.234 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.234 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.236 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c735a93-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.240 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff')
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : haproxy version is 2.8.14-c23fe91
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : path to executable is /usr/sbin/haproxy
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [WARNING]  (304497) : Exiting Master process...
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [ALERT]    (304497) : Current worker (304499) exited with code 143 (Terminated)
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [WARNING]  (304497) : All workers exited. Exiting... (0)
Nov 22 09:16:51 compute-0 systemd[1]: libpod-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope: Deactivated successfully.
Nov 22 09:16:51 compute-0 podman[304686]: 2025-11-22 09:16:51.318897936 +0000 UTC m=+0.058327034 container died b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:16:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 09:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f-userdata-shm.mount: Deactivated successfully.
Nov 22 09:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-558a2bf28aa725adc01b2257b2fa48656e975a746fc020a7ec5246f4f99d866b-merged.mount: Deactivated successfully.
Nov 22 09:16:51 compute-0 podman[304686]: 2025-11-22 09:16:51.373745424 +0000 UTC m=+0.113174492 container cleanup b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:16:51 compute-0 systemd[1]: libpod-conmon-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope: Deactivated successfully.
Nov 22 09:16:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 22 09:16:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 22 09:16:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 22 09:16:51 compute-0 podman[304734]: 2025-11-22 09:16:51.46149825 +0000 UTC m=+0.060308343 container remove b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.468 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df7e3a97-4072-4903-8f34-c5eef8d20e01]: (4, ('Sat Nov 22 09:16:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c (b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f)\nb69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f\nSat Nov 22 09:16:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c (b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f)\nb69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.469 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98856ec8-584a-4053-ba59-6c6ca302c0c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.470 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d8f3ef5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.492 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.492 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 kernel: tap2d8f3ef5-80: left promiscuous mode
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.524 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4875b6-30f0-4529-b88a-e012ad3bd7cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[520cd2eb-e028-4ff3-9a5b-e41e3e32ceb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[904242a8-ea9f-4df6-bbc7-444e0064da05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c67fe838-be82-4ff5-86bc-70213cc2d285]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585596, 'reachable_time': 20045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304748, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d8f3ef5\x2d84f0\x2d49e9\x2d9b4b\x2d24a885a3aa6c.mount: Deactivated successfully.
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.576 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.577 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2299b565-ea6d-408b-b2fe-59c55de1e1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.578 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3c735a93-ffc0-4525-bd7d-7db35fe17769 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.579 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c03b1fcf-3143-42a3-bbf6-4eae0bd914d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.581 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 namespace which is not needed anymore
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : haproxy version is 2.8.14-c23fe91
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : path to executable is /usr/sbin/haproxy
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [WARNING]  (304416) : Exiting Master process...
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [ALERT]    (304416) : Current worker (304418) exited with code 143 (Terminated)
Nov 22 09:16:51 compute-0 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [WARNING]  (304416) : All workers exited. Exiting... (0)
Nov 22 09:16:51 compute-0 systemd[1]: libpod-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope: Deactivated successfully.
Nov 22 09:16:51 compute-0 podman[304766]: 2025-11-22 09:16:51.747036708 +0000 UTC m=+0.059895132 container died ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.750 253665 INFO nova.virt.libvirt.driver [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deleting instance files /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_del
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.751 253665 INFO nova.virt.libvirt.driver [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deletion of /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_del complete
Nov 22 09:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-19c249f8811210d708abd429b642e58a08ae9985262779c5dbd5ea97093529a7-merged.mount: Deactivated successfully.
Nov 22 09:16:51 compute-0 podman[304766]: 2025-11-22 09:16:51.78980233 +0000 UTC m=+0.102660764 container cleanup ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.808 253665 INFO nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.809 253665 DEBUG oslo.service.loopingcall [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.809 253665 DEBUG nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.810 253665 DEBUG nova.network.neutron [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:16:51 compute-0 systemd[1]: libpod-conmon-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope: Deactivated successfully.
Nov 22 09:16:51 compute-0 podman[304794]: 2025-11-22 09:16:51.869917259 +0000 UTC m=+0.052780259 container remove ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ab643b-5947-409c-a739-496f3fa030aa]: (4, ('Sat Nov 22 09:16:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 (ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e)\nca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e\nSat Nov 22 09:16:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 (ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e)\nca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[572c3e53-f5cd-4d10-82f1-f3623f353848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.880 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:51 compute-0 kernel: tapc43f3b8e-c0: left promiscuous mode
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 nova_compute[253661]: 2025-11-22 09:16:51.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e95474b-e8cf-4580-83b0-2d0240ce3632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[359a034a-d87e-4310-a181-55f19dfc7d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.921 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8742ec1-327c-4b73-98ff-4d744646383a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.947 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51614735-881d-45ae-b9e4-67d0f595a227]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585485, 'reachable_time': 41183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304808, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.950 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:16:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.950 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[595a980f-504a-4896-a648-5a1ae5590904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:16:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dc43f3b8e\x2dc53d\x2d47a3\x2dab2b\x2dbfb152b82dd6.mount: Deactivated successfully.
Nov 22 09:16:52 compute-0 nova_compute[253661]: 2025-11-22 09:16:52.045 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:16:52 compute-0 nova_compute[253661]: 2025-11-22 09:16:52.045 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:52 compute-0 nova_compute[253661]: 2025-11-22 09:16:52.059 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:16:52
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'images', '.mgr', 'backups', 'default.rgw.control']
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:16:52 compute-0 ceph-mon[75021]: pgmap v1573: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 09:16:52 compute-0 ceph-mon[75021]: osdmap e234: 3 total, 3 up, 3 in
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.119 253665 DEBUG nova.network.neutron [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.140 253665 INFO nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 2.22 seconds to deallocate network for instance.
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.192 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.192 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.243 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.244 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.244 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.245 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.245 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.246 253665 WARNING nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received unexpected event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with vm_state deleted and task_state None.
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.246 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.247 253665 INFO nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Neutron deleted interface 3c735a93-ffc0-4525-bd7d-7db35fe17769; detaching it from the instance and deleting it from the info cache
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.247 253665 DEBUG nova.network.neutron [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.286 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Detach interface failed, port_id=3c735a93-ffc0-4525-bd7d-7db35fe17769, reason: Instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 INFO nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Neutron deleted interface dc08e15e-7d04-4fac-8489-61a2d7b5a642; detaching it from the instance and deleting it from the info cache
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 DEBUG nova.network.neutron [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.301 253665 DEBUG oslo_concurrency.processutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:53 compute-0 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 09:16:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 130 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 95 op/s
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.357 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Detach interface failed, port_id=dc08e15e-7d04-4fac-8489-61a2d7b5a642, reason: Instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.684 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.685 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.685 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.686 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.686 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.687 253665 WARNING nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with vm_state active and task_state deleting.
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.687 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.688 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.689 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.690 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.690 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.691 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.691 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-deleted-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.694 253665 DEBUG nova.network.neutron [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.714 253665 INFO nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 1.90 seconds to deallocate network for instance.
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.752 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851078135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.818 253665 DEBUG oslo_concurrency.processutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.826 253665 DEBUG nova.compute.provider_tree [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.843 253665 DEBUG nova.scheduler.client.report [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.870 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.873 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.903 253665 INFO nova.scheduler.client.report [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Deleted allocations for instance b45c203c-7ae1-436b-86d3-bfc0146dd536
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.927 253665 DEBUG oslo_concurrency.processutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:16:53 compute-0 nova_compute[253661]: 2025-11-22 09:16:53.974 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:16:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2933749736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:54 compute-0 podman[304851]: 2025-11-22 09:16:54.384545024 +0000 UTC m=+0.070254758 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 09:16:54 compute-0 podman[304852]: 2025-11-22 09:16:54.39335925 +0000 UTC m=+0.075714802 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.397 253665 DEBUG oslo_concurrency.processutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.405 253665 DEBUG nova.compute.provider_tree [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.416 253665 DEBUG nova.scheduler.client.report [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.436 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.459 253665 INFO nova.scheduler.client.report [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Deleted allocations for instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.510 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:54 compute-0 ceph-mon[75021]: pgmap v1575: 305 pgs: 305 active+clean; 130 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 95 op/s
Nov 22 09:16:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/851078135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2933749736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:16:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:16:54 compute-0 nova_compute[253661]: 2025-11-22 09:16:54.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.317 253665 DEBUG nova.compute.manager [req-e1741556-fdfb-452c-b9dd-053d1197584a req-fe93a008-ee34-4921-81a7-a3654030923f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 41 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 157 op/s
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.781 253665 DEBUG nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.783 253665 DEBUG nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:16:55 compute-0 nova_compute[253661]: 2025-11-22 09:16:55.783 253665 WARNING nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with vm_state deleted and task_state None.
Nov 22 09:16:56 compute-0 nova_compute[253661]: 2025-11-22 09:16:56.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:16:56 compute-0 nova_compute[253661]: 2025-11-22 09:16:56.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:56 compute-0 ceph-mon[75021]: pgmap v1576: 305 pgs: 305 active+clean; 41 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 157 op/s
Nov 22 09:16:56 compute-0 nova_compute[253661]: 2025-11-22 09:16:56.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:16:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:16:57.059 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:16:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 146 op/s
Nov 22 09:16:57 compute-0 podman[304892]: 2025-11-22 09:16:57.482391444 +0000 UTC m=+0.166218227 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 09:16:58 compute-0 ceph-mon[75021]: pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 146 op/s
Nov 22 09:16:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 09:16:59 compute-0 nova_compute[253661]: 2025-11-22 09:16:59.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:00 compute-0 ceph-mon[75021]: pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.327 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.328 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.340 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.412 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.413 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.421 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.422 253665 INFO nova.compute.claims [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:01 compute-0 nova_compute[253661]: 2025-11-22 09:17:01.507 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801285361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.007 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.014 253665 DEBUG nova.compute.provider_tree [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.032 253665 DEBUG nova.scheduler.client.report [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.052 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.053 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.106 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.107 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.130 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.153 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.254 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.255 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.256 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating image(s)
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.282 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.307 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.339 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.343 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.395 253665 DEBUG nova.policy [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.407 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.407 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.421 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.441 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.443 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.444 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.444 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.471 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.476 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8dafc0d0-bd93-4080-b51e-36887936ea66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.541 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.542 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.551 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.552 253665 INFO nova.compute.claims [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:17:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:17:02 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.668 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:02 compute-0 ceph-mon[75021]: pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 09:17:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/801285361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.882 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8dafc0d0-bd93-4080-b51e-36887936ea66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:02 compute-0 nova_compute[253661]: 2025-11-22 09:17:02.949 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.078 253665 DEBUG nova.objects.instance [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.109 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.110 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Ensure instance console log exists: /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.111 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.112 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.112 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495696526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.140 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.147 253665 DEBUG nova.compute.provider_tree [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.174 253665 DEBUG nova.scheduler.client.report [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.203 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.204 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.252 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.252 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.274 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.294 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 KiB/s wr, 118 op/s
Nov 22 09:17:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.361 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:0f:e7'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=73744eaa-7d97-4c21-9fb3-4378f10417f6) old=Port_Binding(mac=['fa:16:3e:96:0f:e7 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.364 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 73744eaa-7d97-4c21-9fb3-4378f10417f6 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c updated
Nov 22 09:17:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.366 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:17:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.368 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc9b402-e563-4c32-ab43-86ea30a84ccc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.377 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.378 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.379 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating image(s)
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.400 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.423 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.447 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.451 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.490 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Successfully created port: f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.496 253665 DEBUG nova.policy [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca3a4f3a44014ad7a069b7dbdffb7c04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.529 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.531 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.531 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.532 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.557 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.562 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4589c5da-d558-41a1-bf54-30746991be9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1495696526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.909 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4589c5da-d558-41a1-bf54-30746991be9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:03 compute-0 nova_compute[253661]: 2025-11-22 09:17:03.984 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] resizing rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.106 253665 DEBUG nova.objects.instance [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.119 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.119 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Ensure instance console log exists: /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.386 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Successfully updated port: f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.405 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.405 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.406 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.446 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Successfully created port: 79319cd8-59bd-43b2-a72b-a88f70eb5570 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG nova.compute.manager [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-changed-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG nova.compute.manager [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Refreshing instance network info cache due to event network-changed-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:04 compute-0 ceph-mon[75021]: pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 KiB/s wr, 118 op/s
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.958 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:04 compute-0 nova_compute[253661]: 2025-11-22 09:17:04.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.319 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803010.318197, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.320 253665 INFO nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Stopped (Lifecycle Event)
Nov 22 09:17:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 68 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.341 253665 DEBUG nova.compute.manager [None req-6869de07-e164-4892-86d3-f3bd5f608b3f - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.749 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Successfully updated port: 79319cd8-59bd-43b2-a72b-a88f70eb5570 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquired lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:05 compute-0 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.035 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.183 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803011.181592, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.183 253665 INFO nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Stopped (Lifecycle Event)
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.193 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.204 253665 DEBUG nova.compute.manager [None req-a44e40fa-2bb2-4830-a8ef-a8089817de0d - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.235 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.236 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance network_info: |[{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.237 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.237 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Refreshing network info cache for port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.243 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start _get_guest_xml network_info=[{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.250 253665 WARNING nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.260 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.261 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.265 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.266 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.267 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.267 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.273 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025858630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.773 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.797 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:06 compute-0 nova_compute[253661]: 2025-11-22 09:17:06.801 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:06 compute-0 ceph-mon[75021]: pgmap v1581: 305 pgs: 305 active+clean; 68 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Nov 22 09:17:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2025858630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211189844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.281 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.283 253665 DEBUG nova.virt.libvirt.vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:02Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.283 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.284 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.285 253665 DEBUG nova.objects.instance [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.308 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <uuid>8dafc0d0-bd93-4080-b51e-36887936ea66</uuid>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <name>instance-0000002c</name>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersTestJSON-server-1053734599</nova:name>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:06</nova:creationTime>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <nova:port uuid="f72c6b7d-0ba5-4d25-a08a-e2518c2a479d">
Nov 22 09:17:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="serial">8dafc0d0-bd93-4080-b51e-36887936ea66</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="uuid">8dafc0d0-bd93-4080-b51e-36887936ea66</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8dafc0d0-bd93-4080-b51e-36887936ea66_disk">
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config">
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:73:d5:8c"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <target dev="tapf72c6b7d-0b"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/console.log" append="off"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:07 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:07 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:07 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:07 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:07 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.309 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Preparing to wait for external event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.311 253665 DEBUG nova.virt.libvirt.vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:02Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.311 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.312 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.312 253665 DEBUG os_vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.319 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf72c6b7d-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.320 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf72c6b7d-0b, col_values=(('external_ids', {'iface-id': 'f72c6b7d-0ba5-4d25-a08a-e2518c2a479d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:d5:8c', 'vm-uuid': '8dafc0d0-bd93-4080-b51e-36887936ea66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:07 compute-0 NetworkManager[48920]: <info>  [1763803027.3226] manager: (tapf72c6b7d-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.330 253665 INFO os_vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b')
Nov 22 09:17:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 113 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.0 MiB/s wr, 33 op/s
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.379 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.380 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.380 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:73:d5:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.381 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Using config drive
Nov 22 09:17:07 compute-0 nova_compute[253661]: 2025-11-22 09:17:07.401 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3211189844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.294 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating config drive at /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.299 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpayjm6mzv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.427 253665 DEBUG nova.compute.manager [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-changed-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.428 253665 DEBUG nova.compute.manager [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Refreshing instance network info cache due to event network-changed-79319cd8-59bd-43b2-a72b-a88f70eb5570. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.429 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.455 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpayjm6mzv" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.485 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.491 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.666 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.667 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deleting local config drive /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config because it was imported into RBD.
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.671 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.697 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Releasing lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.698 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance network_info: |[{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.699 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.699 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Refreshing network info cache for port 79319cd8-59bd-43b2-a72b-a88f70eb5570 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.704 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start _get_guest_xml network_info=[{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.709 253665 WARNING nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.726 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.727 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.733 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.735 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.736 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.736 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.739 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.739 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:08 compute-0 kernel: tapf72c6b7d-0b: entered promiscuous mode
Nov 22 09:17:08 compute-0 NetworkManager[48920]: <info>  [1763803028.7407] manager: (tapf72c6b7d-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Nov 22 09:17:08 compute-0 ovn_controller[152872]: 2025-11-22T09:17:08Z|00385|binding|INFO|Claiming lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for this chassis.
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.743 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:08 compute-0 ovn_controller[152872]: 2025-11-22T09:17:08Z|00386|binding|INFO|f72c6b7d-0ba5-4d25-a08a-e2518c2a479d: Claiming fa:16:3e:73:d5:8c 10.100.0.11
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.758 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.760 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.762 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:08 compute-0 systemd-udevd[305432]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5450a02f-cc49-4857-9429-d3cea4a24cbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.779 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.784 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17d3e0dd-4ff9-49bd-9a50-60df92b80dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4697fd11-a580-4ad8-b006-9fd26b5e4228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:08 compute-0 NetworkManager[48920]: <info>  [1763803028.7942] device (tapf72c6b7d-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:08 compute-0 NetworkManager[48920]: <info>  [1763803028.7951] device (tapf72c6b7d-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.802 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa8a244-9a22-45e4-b8f8-31f946c83603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 systemd-machined[215941]: New machine qemu-49-instance-0000002c.
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.823 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0fba26c0-5e42-4c23-a232-586c63054f7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Nov 22 09:17:08 compute-0 ovn_controller[152872]: 2025-11-22T09:17:08Z|00387|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d ovn-installed in OVS
Nov 22 09:17:08 compute-0 ovn_controller[152872]: 2025-11-22T09:17:08Z|00388|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d up in Southbound
Nov 22 09:17:08 compute-0 nova_compute[253661]: 2025-11-22 09:17:08.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:08 compute-0 ceph-mon[75021]: pgmap v1582: 305 pgs: 305 active+clean; 113 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.0 MiB/s wr, 33 op/s
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.882 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[57fd41b6-9ff8-4b35-92ae-82cad1c4ff4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 NetworkManager[48920]: <info>  [1763803028.8925] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.892 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57f54984-d630-4bb3-9379-271a1ac3b097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.931 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1559187b-bf54-4cca-8f1a-07518fc6bae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.935 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2780a-bbe5-4ed1-9916-7b97249a18f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 NetworkManager[48920]: <info>  [1763803028.9638] device (tapd93e3720-b0): carrier: link connected
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.970 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[877c51d6-48e0-4755-b010-5017fa93bf45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e328b820-0ebc-493f-84a4-5e3b59823101]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 44482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305484, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[036399fa-c4d0-4c79-8169-e7f30d1fea57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587970, 'tstamp': 587970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305485, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5811fe9-0f43-4747-9fbc-39f095033cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 44482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305486, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.071 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48285153-c526-49db-9187-2ecf2a974979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.078 253665 DEBUG nova.compute.manager [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.078 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG nova.compute.manager [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Processing event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d702278-ba03-44ba-9c9c-943aa9e3683a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 NetworkManager[48920]: <info>  [1763803029.1476] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 22 09:17:09 compute-0 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 ovn_controller[152872]: 2025-11-22T09:17:09Z|00389|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.156 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.157 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d07f27e9-8b43-4618-93f3-77890ccc3d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.158 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.159 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.164 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updated VIF entry in instance network info cache for port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.165 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.168 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.182 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717109147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.266 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.298 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.306 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.432 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.432045, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.433 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Started (Lifecycle Event)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.438 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.443 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.447 253665 INFO nova.virt.libvirt.driver [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance spawned successfully.
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.448 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.451 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.456 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.467 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.468 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.469 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.470 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.471 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.471 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.477 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.478 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.432413, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.478 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Paused (Lifecycle Event)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.501 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.516 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.4419057, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Resumed (Lifecycle Event)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.536 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.548 253665 INFO nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 7.29 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.549 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:09 compute-0 podman[305599]: 2025-11-22 09:17:09.557170266 +0000 UTC m=+0.054009475 container create 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.561 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:09 compute-0 systemd[1]: Started libpod-conmon-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope.
Nov 22 09:17:09 compute-0 podman[305599]: 2025-11-22 09:17:09.526197517 +0000 UTC m=+0.023036746 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.625 253665 INFO nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 8.23 seconds to build instance.
Nov 22 09:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebc4bd6ad0fd49fe8ea4cf8e7431f6025e366defa6658f1cd2635875334b6b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.635 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.636 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:09 compute-0 podman[305599]: 2025-11-22 09:17:09.64264704 +0000 UTC m=+0.139486249 container init 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.645 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:09 compute-0 podman[305599]: 2025-11-22 09:17:09.650213032 +0000 UTC m=+0.147052241 container start 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.653 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:09 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : New worker (305621) forked
Nov 22 09:17:09 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : Loading success.
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.729 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.730 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.740 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.741 253665 INFO nova.compute.claims [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816249735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.826 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.828 253665 DEBUG nova.virt.libvirt.vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNegativeTestJSON-1501914571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:03Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.828 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.829 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.830 253665 DEBUG nova.objects.instance [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.843 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <uuid>4589c5da-d558-41a1-bf54-30746991be9e</uuid>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <name>instance-0000002d</name>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:name>tempest-ImagesNegativeTestJSON-server-293583865</nova:name>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:08</nova:creationTime>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:user uuid="ca3a4f3a44014ad7a069b7dbdffb7c04">tempest-ImagesNegativeTestJSON-1501914571-project-member</nova:user>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:project uuid="4ce05fa5ad2745dab1909b0954fb83d6">tempest-ImagesNegativeTestJSON-1501914571</nova:project>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <nova:port uuid="79319cd8-59bd-43b2-a72b-a88f70eb5570">
Nov 22 09:17:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="serial">4589c5da-d558-41a1-bf54-30746991be9e</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="uuid">4589c5da-d558-41a1-bf54-30746991be9e</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4589c5da-d558-41a1-bf54-30746991be9e_disk">
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4589c5da-d558-41a1-bf54-30746991be9e_disk.config">
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:14:e8:cd"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <target dev="tap79319cd8-59"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/console.log" append="off"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.850 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Preparing to wait for external event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.850 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.851 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.851 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.852 253665 DEBUG nova.virt.libvirt.vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNegativeTestJSON-1501914571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:03Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.852 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.853 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3717109147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1816249735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.854 253665 DEBUG os_vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.855 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.864 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79319cd8-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.865 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79319cd8-59, col_values=(('external_ids', {'iface-id': '79319cd8-59bd-43b2-a72b-a88f70eb5570', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:e8:cd', 'vm-uuid': '4589c5da-d558-41a1-bf54-30746991be9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:09 compute-0 NetworkManager[48920]: <info>  [1763803029.8680] manager: (tap79319cd8-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.876 253665 INFO os_vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59')
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.930 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.932 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.932 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No VIF found with MAC fa:16:3e:14:e8:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.933 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Using config drive
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.956 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:09 compute-0 nova_compute[253661]: 2025-11-22 09:17:09.965 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.070 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updated VIF entry in instance network info cache for port 79319cd8-59bd-43b2-a72b-a88f70eb5570. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.082 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.102 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.330 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating config drive at /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.335 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgyady1zy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.482 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgyady1zy" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385645448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.512 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.519 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config 4589c5da-d558-41a1-bf54-30746991be9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.554 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.561 253665 DEBUG nova.compute.provider_tree [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.579 253665 DEBUG nova.scheduler.client.report [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.611 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.612 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.663 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.663 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.680 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.695 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config 4589c5da-d558-41a1-bf54-30746991be9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.696 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deleting local config drive /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config because it was imported into RBD.
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.703 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:10 compute-0 kernel: tap79319cd8-59: entered promiscuous mode
Nov 22 09:17:10 compute-0 NetworkManager[48920]: <info>  [1763803030.7636] manager: (tap79319cd8-59): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:10 compute-0 ovn_controller[152872]: 2025-11-22T09:17:10Z|00390|binding|INFO|Claiming lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 for this chassis.
Nov 22 09:17:10 compute-0 ovn_controller[152872]: 2025-11-22T09:17:10Z|00391|binding|INFO|79319cd8-59bd-43b2-a72b-a88f70eb5570: Claiming fa:16:3e:14:e8:cd 10.100.0.9
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:10 compute-0 systemd-udevd[305460]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.778 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:e8:cd 10.100.0.9'], port_security=['fa:16:3e:14:e8:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4589c5da-d558-41a1-bf54-30746991be9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e716dd72-0105-4eae-a6f7-a94546350d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f974b244-516a-40d5-add2-959691d72108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83728acd-5828-4942-8843-484dbd1b47c1, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=79319cd8-59bd-43b2-a72b-a88f70eb5570) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.780 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 79319cd8-59bd-43b2-a72b-a88f70eb5570 in datapath e716dd72-0105-4eae-a6f7-a94546350d4d bound to our chassis
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.781 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e716dd72-0105-4eae-a6f7-a94546350d4d
Nov 22 09:17:10 compute-0 NetworkManager[48920]: <info>  [1763803030.7932] device (tap79319cd8-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:10 compute-0 NetworkManager[48920]: <info>  [1763803030.7942] device (tap79319cd8-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.797 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.799 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9764ad1b-cd28-471a-a837-6ae191bb2475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.800 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape716dd72-01 in ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.799 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.799 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating image(s)
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.802 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape716dd72-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.802 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19fc8905-fcab-48d6-8697-7fc4aff53d70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c88109b-06c3-4613-9413-c31ec8cbfa08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 systemd-machined[215941]: New machine qemu-50-instance-0000002d.
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.820 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6987c-13dc-4e48-8c48-1c7053fa2f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002d.
Nov 22 09:17:10 compute-0 ovn_controller[152872]: 2025-11-22T09:17:10Z|00392|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 ovn-installed in OVS
Nov 22 09:17:10 compute-0 ovn_controller[152872]: 2025-11-22T09:17:10Z|00393|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 up in Southbound
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.844 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.845 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[117fe6ac-60bd-4ec8-a7cd-de59fcf582e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 ceph-mon[75021]: pgmap v1583: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:17:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1385645448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.889 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cd455a-91d6-4eba-8413-03d3baf6dc63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 NetworkManager[48920]: <info>  [1763803030.8985] manager: (tape716dd72-00): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e5e318-c5ec-450e-aa48-e94118d04209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.944 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.947 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8eecb07a-8aeb-4101-ae4a-629e062fe921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.951 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75d7791d-2778-41ca-a066-e28a065734cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.952 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:10 compute-0 NetworkManager[48920]: <info>  [1763803030.9789] device (tape716dd72-00): carrier: link connected
Nov 22 09:17:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.988 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23f71793-1d1a-4948-8c72-f7cd71c5cc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:10 compute-0 nova_compute[253661]: 2025-11-22 09:17:10.998 253665 DEBUG nova.policy [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ae8af2cc9f40e083473a191ddd445f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[90c6eca2-2241-4a1a-82ad-11e8fad8b4a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape716dd72-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:a7:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588171, 'reachable_time': 23987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305798, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.019 253665 DEBUG nova.compute.manager [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.021 253665 DEBUG nova.compute.manager [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Processing event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e95e75-4859-4e22-9bde-6dc30eb749de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:a77c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588171, 'tstamp': 588171}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305799, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.041 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.042 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.042 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.043 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.044 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7933775e-769a-4507-a214-e529a8cb63f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape716dd72-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:a7:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588171, 'reachable_time': 23987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305800, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.069 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.076 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a63d88a0-884c-4328-a21c-6bedf9264f2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632edd66-1760-4bd4-ad0f-a7461d8d32a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.146 253665 DEBUG nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.147 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.149 253665 WARNING nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state active and task_state None.
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[179266bc-5ea9-4d54-8a72-0dbeb2866ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.161 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape716dd72-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.162 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.164 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape716dd72-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:11 compute-0 kernel: tape716dd72-00: entered promiscuous mode
Nov 22 09:17:11 compute-0 NetworkManager[48920]: <info>  [1763803031.1671] manager: (tape716dd72-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.172 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape716dd72-00, col_values=(('external_ids', {'iface-id': 'a6bc6da3-64a7-49d4-9920-2ddf5959e212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00394|binding|INFO|Releasing lport a6bc6da3-64a7-49d4-9920-2ddf5959e212 from this chassis (sb_readonly=0)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.177 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.178 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc263ffc-bbdf-4eb1-b330-6acb9228cd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.179 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-e716dd72-0105-4eae-a6f7-a94546350d4d
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID e716dd72-0105-4eae-a6f7-a94546350d4d
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.181 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'env', 'PROCESS_TAG=haproxy-e716dd72-0105-4eae-a6f7-a94546350d4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e716dd72-0105-4eae-a6f7-a94546350d4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.391 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.393 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.395 253665 INFO nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Terminating instance
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.396 253665 DEBUG nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.427 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a63d88a0-884c-4328-a21c-6bedf9264f2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:11 compute-0 kernel: tapf72c6b7d-0b (unregistering): left promiscuous mode
Nov 22 09:17:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:11 compute-0 NetworkManager[48920]: <info>  [1763803031.4426] device (tapf72c6b7d-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00395|binding|INFO|Releasing lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d from this chassis (sb_readonly=0)
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00396|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d down in Southbound
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00397|binding|INFO|Removing iface tapf72c6b7d-0b ovn-installed in OVS
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.458 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 22 09:17:11 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 2.519s CPU time.
Nov 22 09:17:11 compute-0 systemd-machined[215941]: Machine qemu-49-instance-0000002c terminated.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.508 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] resizing rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.551 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5505958, 4589c5da-d558-41a1-bf54-30746991be9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.551 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Started (Lifecycle Event)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.554 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.560 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.565 253665 INFO nova.virt.libvirt.driver [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance spawned successfully.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.565 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.580 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:11 compute-0 kernel: tapf72c6b7d-0b: entered promiscuous mode
Nov 22 09:17:11 compute-0 NetworkManager[48920]: <info>  [1763803031.6228] manager: (tapf72c6b7d-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00398|binding|INFO|Claiming lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for this chassis.
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00399|binding|INFO|f72c6b7d-0ba5-4d25-a08a-e2518c2a479d: Claiming fa:16:3e:73:d5:8c 10.100.0.11
Nov 22 09:17:11 compute-0 kernel: tapf72c6b7d-0b (unregistering): left promiscuous mode
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.632 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.638 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.639 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5536268, 4589c5da-d558-41a1-bf54-30746991be9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.639 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Paused (Lifecycle Event)
Nov 22 09:17:11 compute-0 podman[305970]: 2025-11-22 09:17:11.645278784 +0000 UTC m=+0.070289568 container create 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00400|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d ovn-installed in OVS
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00401|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d up in Southbound
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.660 253665 DEBUG nova.objects.instance [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'migration_context' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00402|binding|INFO|Releasing lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d from this chassis (sb_readonly=0)
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00403|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d down in Southbound
Nov 22 09:17:11 compute-0 ovn_controller[152872]: 2025-11-22T09:17:11Z|00404|binding|INFO|Removing iface tapf72c6b7d-0b ovn-installed in OVS
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.666 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.668 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.668 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.670 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.670 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.681 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5600092, 4589c5da-d558-41a1-bf54-30746991be9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.682 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Resumed (Lifecycle Event)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.687 253665 INFO nova.virt.libvirt.driver [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance destroyed successfully.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.687 253665 DEBUG nova.objects.instance [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 systemd[1]: Started libpod-conmon-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.697 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.697 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Ensure instance console log exists: /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.698 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.698 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.699 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 podman[305970]: 2025-11-22 09:17:11.604835308 +0000 UTC m=+0.029846112 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.718 253665 DEBUG nova.virt.libvirt.vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:09Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.719 253665 DEBUG nova.network.os_vif_util [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.721 253665 DEBUG nova.network.os_vif_util [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.722 253665 DEBUG os_vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cb4a182fbe93ee3d0d58357102266c7d0274a0d3457d7bc4d58a8aed79f9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf72c6b7d-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.737 253665 INFO os_vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b')
Nov 22 09:17:11 compute-0 podman[305970]: 2025-11-22 09:17:11.742670055 +0000 UTC m=+0.167680859 container init 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:17:11 compute-0 podman[305970]: 2025-11-22 09:17:11.750001743 +0000 UTC m=+0.175012527 container start 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.761 253665 INFO nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 8.38 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.762 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.763 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : New worker (306026) forked
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : Loading success.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.797 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.831 253665 INFO nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 9.31 seconds to build instance.
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.837 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.839 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[564b2cb7-28e2-4f3b-b50d-a3a4f15b3a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.840 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.860 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:11 compute-0 nova_compute[253661]: 2025-11-22 09:17:11.866 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully created port: 1f84d052-9d22-469d-b43d-259c9b54bcaf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : Exiting Master process...
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : Exiting Master process...
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [ALERT]    (305619) : Current worker (305621) exited with code 143 (Terminated)
Nov 22 09:17:11 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : All workers exited. Exiting... (0)
Nov 22 09:17:11 compute-0 systemd[1]: libpod-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope: Deactivated successfully.
Nov 22 09:17:12 compute-0 podman[306055]: 2025-11-22 09:17:12.000467161 +0000 UTC m=+0.045900430 container died 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ebc4bd6ad0fd49fe8ea4cf8e7431f6025e366defa6658f1cd2635875334b6b7-merged.mount: Deactivated successfully.
Nov 22 09:17:12 compute-0 podman[306055]: 2025-11-22 09:17:12.062610501 +0000 UTC m=+0.108043770 container cleanup 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:17:12 compute-0 systemd[1]: libpod-conmon-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope: Deactivated successfully.
Nov 22 09:17:12 compute-0 podman[306086]: 2025-11-22 09:17:12.134487336 +0000 UTC m=+0.048232645 container remove 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10c0dfa0-1b58-4b13-bbfd-5a6e9e46be79]: (4, ('Sat Nov 22 09:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889)\n6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889\nSat Nov 22 09:17:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889)\n6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.144 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4714aa-9c29-48db-8137-783879aa229e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:12 compute-0 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b17058-f9e1-4092-92ad-0fb987ba8f4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[392593cd-1d72-4ad0-949e-77f556043921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.177 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22f6cf61-d123-4ea5-b57a-618d8284d824]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1a22d1-774b-40c8-9700-4e7dcdc98b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587960, 'reachable_time': 32377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306099, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.203 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.203 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[34bbd71f-7179-487b-b518-92cb014fd289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.204 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.206 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.207 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39f41eb2-31bd-4f5e-9e7e-2b617a3b6016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.207 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.209 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b43550d-ce6a-406a-90fa-fc68efdff3dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.224 253665 INFO nova.virt.libvirt.driver [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deleting instance files /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66_del
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.224 253665 INFO nova.virt.libvirt.driver [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deletion of /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66_del complete
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.275 253665 INFO nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.275 253665 DEBUG oslo.service.loopingcall [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.276 253665 DEBUG nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.277 253665 DEBUG nova.network.neutron [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:17:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:17:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:17:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:17:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:17:12 compute-0 nova_compute[253661]: 2025-11-22 09:17:12.827 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully created port: b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:12 compute-0 ceph-mon[75021]: pgmap v1584: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:17:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:17:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.283 253665 DEBUG nova.network.neutron [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.306 253665 INFO nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 1.03 seconds to deallocate network for instance.
Nov 22 09:17:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 146 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.361 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.362 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.453 253665 DEBUG oslo_concurrency.processutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.516 253665 DEBUG nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.517 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.517 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 DEBUG nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 WARNING nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received unexpected event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with vm_state active and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.594 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.603 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.603 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.721 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.723 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.723 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.725 253665 INFO nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Terminating instance
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.726 253665 DEBUG nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:17:13 compute-0 kernel: tap79319cd8-59 (unregistering): left promiscuous mode
Nov 22 09:17:13 compute-0 NetworkManager[48920]: <info>  [1763803033.7771] device (tap79319cd8-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:13 compute-0 ovn_controller[152872]: 2025-11-22T09:17:13Z|00405|binding|INFO|Releasing lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 from this chassis (sb_readonly=0)
Nov 22 09:17:13 compute-0 ovn_controller[152872]: 2025-11-22T09:17:13Z|00406|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 down in Southbound
Nov 22 09:17:13 compute-0 ovn_controller[152872]: 2025-11-22T09:17:13Z|00407|binding|INFO|Removing iface tap79319cd8-59 ovn-installed in OVS
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.809 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:e8:cd 10.100.0.9'], port_security=['fa:16:3e:14:e8:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4589c5da-d558-41a1-bf54-30746991be9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e716dd72-0105-4eae-a6f7-a94546350d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f974b244-516a-40d5-add2-959691d72108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83728acd-5828-4942-8843-484dbd1b47c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=79319cd8-59bd-43b2-a72b-a88f70eb5570) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.812 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 79319cd8-59bd-43b2-a72b-a88f70eb5570 in datapath e716dd72-0105-4eae-a6f7-a94546350d4d unbound from our chassis
Nov 22 09:17:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.814 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e716dd72-0105-4eae-a6f7-a94546350d4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d682f7b0-c69b-482f-a4d1-605341999053]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.816 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d namespace which is not needed anymore
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:13 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 22 09:17:13 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 2.867s CPU time.
Nov 22 09:17:13 compute-0 systemd-machined[215941]: Machine qemu-50-instance-0000002d terminated.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:13 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:13 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:13 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [WARNING]  (306016) : Exiting Master process...
Nov 22 09:17:13 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [ALERT]    (306016) : Current worker (306026) exited with code 143 (Terminated)
Nov 22 09:17:13 compute-0 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [WARNING]  (306016) : All workers exited. Exiting... (0)
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:13 compute-0 systemd[1]: libpod-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope: Deactivated successfully.
Nov 22 09:17:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114348137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:13 compute-0 podman[306146]: 2025-11-22 09:17:13.966214305 +0000 UTC m=+0.054332383 container died 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.977 253665 INFO nova.virt.libvirt.driver [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance destroyed successfully.
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.978 253665 DEBUG nova.objects.instance [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'resources' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:13 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.987 253665 DEBUG oslo_concurrency.processutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.994 253665 DEBUG nova.virt.libvirt.vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNegativeTestJSON-1501914571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:11Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.995 253665 DEBUG nova.network.os_vif_util [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.997 253665 DEBUG nova.network.os_vif_util [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:13.997 253665 DEBUG os_vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.002 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79319cd8-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3cb4a182fbe93ee3d0d58357102266c7d0274a0d3457d7bc4d58a8aed79f9e-merged.mount: Deactivated successfully.
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.012 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully updated port: 1f84d052-9d22-469d-b43d-259c9b54bcaf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.018 253665 DEBUG nova.compute.provider_tree [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.021 253665 INFO os_vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59')
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.038 253665 DEBUG nova.scheduler.client.report [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:14 compute-0 podman[306146]: 2025-11-22 09:17:14.047893287 +0000 UTC m=+0.136011355 container cleanup 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:17:14 compute-0 systemd[1]: libpod-conmon-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope: Deactivated successfully.
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.062 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.100 253665 INFO nova.scheduler.client.report [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 8dafc0d0-bd93-4080-b51e-36887936ea66
Nov 22 09:17:14 compute-0 podman[306198]: 2025-11-22 09:17:14.149331176 +0000 UTC m=+0.075484494 container remove 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa336cc-3a70-4a86-83dd-aefb393046c0]: (4, ('Sat Nov 22 09:17:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d (8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47)\n8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47\nSat Nov 22 09:17:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d (8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47)\n8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4af1dfe5-5489-4008-96bb-df655f548bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.159 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape716dd72-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:14 compute-0 kernel: tape716dd72-00: left promiscuous mode
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.172 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[072f18b1-7370-4758-b76c-c89e5b45b5db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16cde236-ba3e-40f0-9be6-7560ecc35560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.197 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06f2d6f2-a53c-4f96-8385-1aa72d1294a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea4c2d8-f35d-4ce2-817f-900253768de3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588162, 'reachable_time': 33661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306216, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.217 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.217 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2460ee48-9d41-48a5-a160-40d86befe3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:14 compute-0 systemd[1]: run-netns-ovnmeta\x2de716dd72\x2d0105\x2d4eae\x2da6f7\x2da94546350d4d.mount: Deactivated successfully.
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.902 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully updated port: b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.921 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.922 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:14 compute-0 ceph-mon[75021]: pgmap v1585: 305 pgs: 305 active+clean; 146 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 22 09:17:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/114348137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.922 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:14 compute-0 nova_compute[253661]: 2025-11-22 09:17:14.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.100 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 159 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.374 253665 INFO nova.virt.libvirt.driver [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deleting instance files /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e_del
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.375 253665 INFO nova.virt.libvirt.driver [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deletion of /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e_del complete
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.432 253665 INFO nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 1.71 seconds to destroy the instance on the hypervisor.
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.433 253665 DEBUG oslo.service.loopingcall [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.434 253665 DEBUG nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:17:15 compute-0 nova_compute[253661]: 2025-11-22 09:17:15.434 253665 DEBUG nova.network.neutron [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.102 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-deleted-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.103 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.103 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-changed-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.105 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing instance network info cache due to event network-changed-1f84d052-9d22-469d-b43d-259c9b54bcaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.105 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.167 253665 DEBUG nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.169 253665 DEBUG nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.169 253665 WARNING nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.
Nov 22 09:17:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.670 253665 DEBUG nova.network.neutron [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.682 253665 INFO nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 1.25 seconds to deallocate network for instance.
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:16 compute-0 nova_compute[253661]: 2025-11-22 09:17:16.787 253665 DEBUG oslo_concurrency.processutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:17 compute-0 ceph-mon[75021]: pgmap v1586: 305 pgs: 305 active+clean; 159 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.127 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.144 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.145 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance network_info: |[{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.146 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.146 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing network info cache for port 1f84d052-9d22-469d-b43d-259c9b54bcaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.152 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start _get_guest_xml network_info=[{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.160 253665 WARNING nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.171 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.172 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.179 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.180 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.186 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961703626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.254 253665 DEBUG oslo_concurrency.processutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.261 253665 DEBUG nova.compute.provider_tree [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.276 253665 DEBUG nova.scheduler.client.report [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.299 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.330 253665 INFO nova.scheduler.client.report [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Deleted allocations for instance 4589c5da-d558-41a1-bf54-30746991be9e
Nov 22 09:17:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 114 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 243 op/s
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.386 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151482682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.690 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.718 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:17 compute-0 nova_compute[253661]: 2025-11-22 09:17:17.723 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2961703626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1151482682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202826143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.227 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.229 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.229 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.230 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.231 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.231 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.232 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.233 253665 DEBUG nova.objects.instance [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.249 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <uuid>a63d88a0-884c-4328-a21c-6bedf9264f2e</uuid>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <name>instance-0000002e</name>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestMultiNic-server-1847576078</nova:name>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:17</nova:creationTime>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:user uuid="c5ae8af2cc9f40e083473a191ddd445f">tempest-ServersTestMultiNic-1064785551-project-member</nova:user>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:project uuid="2d156ca65e214b4aacdf111fd47dc4f6">tempest-ServersTestMultiNic-1064785551</nova:project>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:port uuid="1f84d052-9d22-469d-b43d-259c9b54bcaf">
Nov 22 09:17:18 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.82" ipVersion="4"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <nova:port uuid="b12fa008-cd82-4cf3-8abd-a89e90fb9e4c">
Nov 22 09:17:18 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.1.147" ipVersion="4"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="serial">a63d88a0-884c-4328-a21c-6bedf9264f2e</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="uuid">a63d88a0-884c-4328-a21c-6bedf9264f2e</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a63d88a0-884c-4328-a21c-6bedf9264f2e_disk">
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config">
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ca:e5:b4"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <target dev="tap1f84d052-9d"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f9:be:fa"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <target dev="tapb12fa008-cd"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/console.log" append="off"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:18 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:18 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:18 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:18 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:18 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.251 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Preparing to wait for external event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.251 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Preparing to wait for external event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.254 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.254 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.255 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.256 253665 DEBUG os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.258 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.263 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f84d052-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.264 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f84d052-9d, col_values=(('external_ids', {'iface-id': '1f84d052-9d22-469d-b43d-259c9b54bcaf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:e5:b4', 'vm-uuid': 'a63d88a0-884c-4328-a21c-6bedf9264f2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 NetworkManager[48920]: <info>  [1763803038.2680] manager: (tap1f84d052-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.276 253665 INFO os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d')
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.277 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.277 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.278 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.278 253665 DEBUG os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.282 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb12fa008-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.283 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb12fa008-cd, col_values=(('external_ids', {'iface-id': 'b12fa008-cd82-4cf3-8abd-a89e90fb9e4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:be:fa', 'vm-uuid': 'a63d88a0-884c-4328-a21c-6bedf9264f2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 NetworkManager[48920]: <info>  [1763803038.2852] manager: (tapb12fa008-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.293 253665 INFO os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd')
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.348 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.349 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.349 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:ca:e5:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.350 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:f9:be:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.350 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Using config drive
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.374 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.884 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated VIF entry in instance network info cache for port 1f84d052-9d22-469d-b43d-259c9b54bcaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.886 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.903 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.904 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.904 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.905 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.905 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 WARNING nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received unexpected event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with vm_state active and task_state deleting.
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-changed-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.907 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing instance network info cache due to event network-changed-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.907 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.908 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.908 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing network info cache for port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.975 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating config drive at /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config
Nov 22 09:17:18 compute-0 nova_compute[253661]: 2025-11-22 09:17:18.980 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply7ndxa0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:19 compute-0 ceph-mon[75021]: pgmap v1587: 305 pgs: 305 active+clean; 114 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 243 op/s
Nov 22 09:17:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2202826143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.129 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply7ndxa0" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.152 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.156 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.213 253665 DEBUG nova.compute.manager [req-e5c9c00a-dea6-4439-83ef-6c47dcc43ba5 req-6a1a42af-9a96-4dd6-81de-ef9fa270b321 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-deleted-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.508 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.508 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deleting local config drive /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config because it was imported into RBD.
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.5678] manager: (tap1f84d052-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 22 09:17:19 compute-0 kernel: tap1f84d052-9d: entered promiscuous mode
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00408|binding|INFO|Claiming lport 1f84d052-9d22-469d-b43d-259c9b54bcaf for this chassis.
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00409|binding|INFO|1f84d052-9d22-469d-b43d-259c9b54bcaf: Claiming fa:16:3e:ca:e5:b4 10.100.0.82
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.581 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:e5:b4 10.100.0.82'], port_security=['fa:16:3e:ca:e5:b4 10.100.0.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.82/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843f0308-8d5e-40fc-9082-c0a02b73f832', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0cd90fe-647e-4a9f-911e-14c3221ee262, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1f84d052-9d22-469d-b43d-259c9b54bcaf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.582 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1f84d052-9d22-469d-b43d-259c9b54bcaf in datapath 843f0308-8d5e-40fc-9082-c0a02b73f832 bound to our chassis
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.584 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 843f0308-8d5e-40fc-9082-c0a02b73f832
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.5847] manager: (tapb12fa008-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Nov 22 09:17:19 compute-0 systemd-udevd[306381]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:19 compute-0 systemd-udevd[306382]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[677e77ea-bc71-4f30-a75b-05ca13ff21c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.598 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap843f0308-81 in ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.599 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap843f0308-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73f08e0e-c01b-4a18-b270-1a899d52933d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[de922be5-2c55-41d9-873f-07adec6d73a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.612 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[92a53cba-a9e3-44fd-8803-bab7edc7df47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.6196] device (tap1f84d052-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.6208] device (tap1f84d052-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:19 compute-0 systemd-machined[215941]: New machine qemu-51-instance-0000002e.
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.6387] device (tapb12fa008-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:19 compute-0 kernel: tapb12fa008-cd: entered promiscuous mode
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.6405] device (tapb12fa008-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00410|binding|INFO|Claiming lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for this chassis.
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00411|binding|INFO|b12fa008-cd82-4cf3-8abd-a89e90fb9e4c: Claiming fa:16:3e:f9:be:fa 10.100.1.147
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6c2c7f-988a-4aed-b2e3-e3f1fb9c7091]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00412|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf ovn-installed in OVS
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00413|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf up in Southbound
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.649 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:be:fa 10.100.1.147'], port_security=['fa:16:3e:f9:be:fa 10.100.1.147'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.147/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f8b641-eec2-42fb-ae80-bc7afe5817fe, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.678 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d08cf387-6e0c-459c-979d-e189b9ff2094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.686 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[071f7af5-ff94-465f-a1e0-4124fb549a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.6904] manager: (tap843f0308-80): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00414|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c ovn-installed in OVS
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00415|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c up in Southbound
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.722 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5fca09-56df-4979-b931-6bab69711c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.726 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96c202a0-5303-4a09-9a89-4e1893590d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.7546] device (tap843f0308-80): carrier: link connected
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.760 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7c36185c-54f5-4948-a25c-e5029850963d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13b510ab-7ea9-45c5-a16c-7bf7d5d85110]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843f0308-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589049, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306417, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1cd60e-76e0-4db8-964f-c0aa122fd2b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:17d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589049, 'tstamp': 589049}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306418, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.814 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdaa2fd-3e2a-4b8a-92fa-d64f51d4fb0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843f0308-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589049, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306419, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd909d8-2fc3-4213-a1c2-9b764d331e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.857 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.857 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.877 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.917 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[172e705a-6789-4b29-bf72-09860a663d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843f0308-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.920 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap843f0308-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:19 compute-0 NetworkManager[48920]: <info>  [1763803039.9226] manager: (tap843f0308-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Nov 22 09:17:19 compute-0 kernel: tap843f0308-80: entered promiscuous mode
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap843f0308-80, col_values=(('external_ids', {'iface-id': '421f3205-68de-408e-8920-e7fb640c7177'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_controller[152872]: 2025-11-22T09:17:19Z|00416|binding|INFO|Releasing lport 421f3205-68de-408e-8920-e7fb640c7177 from this chassis (sb_readonly=0)
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.933 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48cba49c-0914-4dd3-a1d9-31351f7c7faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.935 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-843f0308-8d5e-40fc-9082-c0a02b73f832
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 843f0308-8d5e-40fc-9082-c0a02b73f832
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.936 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'env', 'PROCESS_TAG=haproxy-843f0308-8d5e-40fc-9082-c0a02b73f832', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/843f0308-8d5e-40fc-9082-c0a02b73f832.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.954 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.954 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.964 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.964 253665 INFO nova.compute.claims [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.983 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:19 compute-0 nova_compute[253661]: 2025-11-22 09:17:19.983 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.000 253665 DEBUG nova.compute.manager [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.000 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG nova.compute.manager [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Processing event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.017 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.095 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.167 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated VIF entry in instance network info cache for port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.167 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.170 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803040.170243, a63d88a0-884c-4328-a21c-6bedf9264f2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.171 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Started (Lifecycle Event)
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.173 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.209 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.225 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803040.172484, a63d88a0-884c-4328-a21c-6bedf9264f2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.226 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Paused (Lifecycle Event)
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.244 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.258 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:20 compute-0 podman[306495]: 2025-11-22 09:17:20.367687483 +0000 UTC m=+0.060986824 container create 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:17:20 compute-0 systemd[1]: Started libpod-conmon-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope.
Nov 22 09:17:20 compute-0 podman[306495]: 2025-11-22 09:17:20.338832496 +0000 UTC m=+0.032131847 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862f2ca2fa37b1a4dd8d301fd07f294fa6ad5aa4c9c004f76d711540cd99b7fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:20 compute-0 podman[306495]: 2025-11-22 09:17:20.588396561 +0000 UTC m=+0.281695932 container init 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:17:20 compute-0 podman[306495]: 2025-11-22 09:17:20.602198855 +0000 UTC m=+0.295498226 container start 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:17:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002589410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:20 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : New worker (306537) forked
Nov 22 09:17:20 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : Loading success.
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.640 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.647 253665 DEBUG nova.compute.provider_tree [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.666 253665 DEBUG nova.scheduler.client.report [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.686 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.687 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.690 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.697 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.698 253665 INFO nova.compute.claims [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.763 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.763 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.790 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.808 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.872 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.893 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c in datapath 5d251c03-e62d-4f4a-933e-92ba86d2f7be unbound from our chassis
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d251c03-e62d-4f4a-933e-92ba86d2f7be
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.909 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[61117f24-2dca-41ca-ab0f-ccff1534e594]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.911 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d251c03-e1 in ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.910 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.912 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.913 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating image(s)
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.913 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d251c03-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7030b2de-9613-48b1-88bc-952168f90e8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.915 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[55787abb-5d28-4a15-b862-551a50ac9aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.931 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0de55a-f75a-4266-996d-f1ac5b981aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.947 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1438f9-f200-4ed6-9077-a019979c62d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:20 compute-0 nova_compute[253661]: 2025-11-22 09:17:20.979 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.993 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[368d88bd-80b1-40dc-b3ea-d52c0dcb77f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f08a80e1-5072-41d1-a318-931710c2d776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 systemd-udevd[306402]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:21 compute-0 NetworkManager[48920]: <info>  [1763803041.0022] manager: (tap5d251c03-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.021 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.028 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.040 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d85738-c134-45c0-b0f1-1067ec911936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.045 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4469aad2-d3e9-4330-b052-97b81061d383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 NetworkManager[48920]: <info>  [1763803041.0735] device (tap5d251c03-e0): carrier: link connected
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6583e4-9292-4fd7-a2c3-cbf377a2f2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.086 253665 DEBUG nova.policy [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31a3f645b946468d9e6fe3b907dfdc0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:21 compute-0 ceph-mon[75021]: pgmap v1588: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Nov 22 09:17:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4002589410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.104 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[135446c6-d83d-4ad1-812a-b9a5be70c170]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d251c03-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:aa:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589181, 'reachable_time': 40318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306631, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.130 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d28ee165-b1f7-43d0-844c-9f3599cb53ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:aa98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589181, 'tstamp': 589181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306632, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.130 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.131 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.132 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.132 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.152 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4bf4d7-675a-4631-a014-2e8f4d309dcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d251c03-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:aa:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589181, 'reachable_time': 40318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306635, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.159 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.171 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8d4f1a-1f18-45ef-b990-889df5ef021d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.280 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a07fa63-0022-413f-91b7-c220082a8adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.283 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d251c03-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.283 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.284 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d251c03-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:21 compute-0 NetworkManager[48920]: <info>  [1763803041.2871] manager: (tap5d251c03-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Nov 22 09:17:21 compute-0 kernel: tap5d251c03-e0: entered promiscuous mode
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.289 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d251c03-e0, col_values=(('external_ids', {'iface-id': '70a7c301-9e7f-4dbe-8105-d63ada0ee0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:21 compute-0 ovn_controller[152872]: 2025-11-22T09:17:21Z|00417|binding|INFO|Releasing lport 70a7c301-9e7f-4dbe-8105-d63ada0ee0cc from this chassis (sb_readonly=0)
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.311 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.312 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8448ca6f-6534-4602-a5ce-c72dec885848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.313 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5d251c03-e62d-4f4a-933e-92ba86d2f7be
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5d251c03-e62d-4f4a-933e-92ba86d2f7be
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.312 253665 DEBUG nova.compute.manager [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.313 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG nova.compute.manager [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Processing event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.315 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'env', 'PROCESS_TAG=haproxy-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d251c03-e62d-4f4a-933e-92ba86d2f7be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.315 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.319 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803041.3192072, a63d88a0-884c-4328-a21c-6bedf9264f2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.319 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Resumed (Lifecycle Event)
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.322 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.325 253665 INFO nova.virt.libvirt.driver [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance spawned successfully.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.325 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.353 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.353 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.354 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.354 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.355 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.355 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.361 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94112967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.390 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.407 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.422 253665 DEBUG nova.compute.provider_tree [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.431 253665 INFO nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 10.63 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.432 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.435 253665 DEBUG nova.scheduler.client.report [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.467 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.468 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.512 253665 INFO nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 11.81 seconds to build instance.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.519 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.520 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.533 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.536 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.551 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.586 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.669 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.671 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.672 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating image(s)
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.695 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.719 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 podman[306741]: 2025-11-22 09:17:21.738708607 +0000 UTC m=+0.055480311 container create f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.752 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.757 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:21 compute-0 systemd[1]: Started libpod-conmon-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope.
Nov 22 09:17:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.802 253665 DEBUG nova.policy [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c652dea7d569ad4907db1e9365d3ee319c62c3437d1edcb0b89d72aa63a1823/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:21 compute-0 podman[306741]: 2025-11-22 09:17:21.709228125 +0000 UTC m=+0.025999849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.817 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] resizing rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:21 compute-0 podman[306741]: 2025-11-22 09:17:21.819071827 +0000 UTC m=+0.135843521 container init f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:21 compute-0 podman[306741]: 2025-11-22 09:17:21.827260104 +0000 UTC m=+0.144031798 container start f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:21 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : New worker (306837) forked
Nov 22 09:17:21 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : Loading success.
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.858 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.859 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.860 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.860 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.885 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.890 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:21 compute-0 nova_compute[253661]: 2025-11-22 09:17:21.995 253665 DEBUG nova.objects.instance [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.012 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.013 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Ensure instance console log exists: /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 DEBUG nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 WARNING nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with vm_state active and task_state None.
Nov 22 09:17:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/94112967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.294 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.332 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Successfully created port: e8eabe8a-7cdb-44e6-9266-09d08038b4ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.338 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully created port: a029f6c5-4597-4645-9974-c282b8014824 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.391 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.563 253665 DEBUG nova.objects.instance [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.576 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.576 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Ensure instance console log exists: /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:22 compute-0 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:23 compute-0 ceph-mon[75021]: pgmap v1589: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.265 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Successfully updated port: e8eabe8a-7cdb-44e6-9266-09d08038b4ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.277 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.278 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.278 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 95 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 240 op/s
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.416 253665 DEBUG nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.416 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 WARNING nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with vm_state active and task_state None.
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.473 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.513 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.515 253665 INFO nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Terminating instance
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.516 253665 DEBUG nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.553 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully updated port: a029f6c5-4597-4645-9974-c282b8014824 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:23 compute-0 kernel: tap1f84d052-9d (unregistering): left promiscuous mode
Nov 22 09:17:23 compute-0 NetworkManager[48920]: <info>  [1763803043.5601] device (tap1f84d052-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.566 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00418|binding|INFO|Releasing lport 1f84d052-9d22-469d-b43d-259c9b54bcaf from this chassis (sb_readonly=0)
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00419|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf down in Southbound
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00420|binding|INFO|Removing iface tap1f84d052-9d ovn-installed in OVS
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.576 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.577 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.577 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.584 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:e5:b4 10.100.0.82'], port_security=['fa:16:3e:ca:e5:b4 10.100.0.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.82/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843f0308-8d5e-40fc-9082-c0a02b73f832', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0cd90fe-647e-4a9f-911e-14c3221ee262, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1f84d052-9d22-469d-b43d-259c9b54bcaf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.586 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1f84d052-9d22-469d-b43d-259c9b54bcaf in datapath 843f0308-8d5e-40fc-9082-c0a02b73f832 unbound from our chassis
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.589 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 843f0308-8d5e-40fc-9082-c0a02b73f832, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[307e8f44-1d9f-4046-ac73-b546618954ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.590 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 namespace which is not needed anymore
Nov 22 09:17:23 compute-0 kernel: tapb12fa008-cd (unregistering): left promiscuous mode
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 NetworkManager[48920]: <info>  [1763803043.5973] device (tapb12fa008-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00421|binding|INFO|Releasing lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c from this chassis (sb_readonly=0)
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00422|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c down in Southbound
Nov 22 09:17:23 compute-0 ovn_controller[152872]: 2025-11-22T09:17:23Z|00423|binding|INFO|Removing iface tapb12fa008-cd ovn-installed in OVS
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.616 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:be:fa 10.100.1.147'], port_security=['fa:16:3e:f9:be:fa 10.100.1.147'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.147/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f8b641-eec2-42fb-ae80-bc7afe5817fe, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 22 09:17:23 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 2.795s CPU time.
Nov 22 09:17:23 compute-0 systemd-machined[215941]: Machine qemu-51-instance-0000002e terminated.
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : Exiting Master process...
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : Exiting Master process...
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [ALERT]    (306533) : Current worker (306537) exited with code 143 (Terminated)
Nov 22 09:17:23 compute-0 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : All workers exited. Exiting... (0)
Nov 22 09:17:23 compute-0 systemd[1]: libpod-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope: Deactivated successfully.
Nov 22 09:17:23 compute-0 podman[307001]: 2025-11-22 09:17:23.75036747 +0000 UTC m=+0.053230997 container died 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.765 253665 INFO nova.virt.libvirt.driver [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance destroyed successfully.
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.766 253665 DEBUG nova.objects.instance [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'resources' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.780 253665 DEBUG nova.virt.libvirt.vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.781 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.782 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.782 253665 DEBUG os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f84d052-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.791 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.793 253665 INFO os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d')
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.794 253665 DEBUG nova.virt.libvirt.vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.794 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.795 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.795 253665 DEBUG os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb12fa008-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.799 253665 INFO os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd')
Nov 22 09:17:23 compute-0 nova_compute[253661]: 2025-11-22 09:17:23.952 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-862f2ca2fa37b1a4dd8d301fd07f294fa6ad5aa4c9c004f76d711540cd99b7fa-merged.mount: Deactivated successfully.
Nov 22 09:17:24 compute-0 podman[307001]: 2025-11-22 09:17:24.061388579 +0000 UTC m=+0.364252106 container cleanup 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:17:24 compute-0 systemd[1]: libpod-conmon-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope: Deactivated successfully.
Nov 22 09:17:24 compute-0 podman[307074]: 2025-11-22 09:17:24.139109236 +0000 UTC m=+0.052749064 container remove 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.147 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[da04969e-3e4f-47a4-87cf-3a6e3edd2487]: (4, ('Sat Nov 22 09:17:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 (24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb)\n24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb\nSat Nov 22 09:17:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 (24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb)\n24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d554a8c1-6329-4665-bda4-34d4ae6b4189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.153 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843f0308-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 kernel: tap843f0308-80: left promiscuous mode
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5366f98e-3bcc-49f6-a32f-a1e3548a28ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.202 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e17449-fd94-4e47-8f95-1162488dbcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.203 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74e44c12-e6a4-46f5-8cd9-e148c77c28ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69ca2ba5-054d-40ee-80dd-1e45fdae0ebb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589040, 'reachable_time': 43561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307090, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d843f0308\x2d8d5e\x2d40fc\x2d9082\x2dc0a02b73f832.mount: Deactivated successfully.
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.225 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.226 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[23db5614-8f06-418d-92e8-eb845229dce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.227 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c in datapath 5d251c03-e62d-4f4a-933e-92ba86d2f7be unbound from our chassis
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.229 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d251c03-e62d-4f4a-933e-92ba86d2f7be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.230 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a882db5c-7918-4bf8-a994-196c8efd5083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.231 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be namespace which is not needed anymore
Nov 22 09:17:24 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:24 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:24 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [WARNING]  (306829) : Exiting Master process...
Nov 22 09:17:24 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [ALERT]    (306829) : Current worker (306837) exited with code 143 (Terminated)
Nov 22 09:17:24 compute-0 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [WARNING]  (306829) : All workers exited. Exiting... (0)
Nov 22 09:17:24 compute-0 systemd[1]: libpod-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope: Deactivated successfully.
Nov 22 09:17:24 compute-0 podman[307109]: 2025-11-22 09:17:24.388096498 +0000 UTC m=+0.056625878 container died f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c652dea7d569ad4907db1e9365d3ee319c62c3437d1edcb0b89d72aa63a1823-merged.mount: Deactivated successfully.
Nov 22 09:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:24 compute-0 podman[307109]: 2025-11-22 09:17:24.437346717 +0000 UTC m=+0.105876087 container cleanup f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.452 253665 INFO nova.virt.libvirt.driver [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deleting instance files /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e_del
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.453 253665 INFO nova.virt.libvirt.driver [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deletion of /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e_del complete
Nov 22 09:17:24 compute-0 systemd[1]: libpod-conmon-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope: Deactivated successfully.
Nov 22 09:17:24 compute-0 podman[307136]: 2025-11-22 09:17:24.542933917 +0000 UTC m=+0.098426629 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:24 compute-0 podman[307138]: 2025-11-22 09:17:24.548618134 +0000 UTC m=+0.104012723 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 09:17:24 compute-0 podman[307150]: 2025-11-22 09:17:24.570086532 +0000 UTC m=+0.104896004 container remove f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.576 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec474fe8-4437-4524-ba3d-32372aa48829]: (4, ('Sat Nov 22 09:17:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be (f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d)\nf0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d\nSat Nov 22 09:17:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be (f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d)\nf0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.578 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99b01e21-fd05-4ee7-8712-ea092eb91808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.579 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d251c03-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:24 compute-0 kernel: tap5d251c03-e0: left promiscuous mode
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abb6a939-9f95-49c3-bd02-891cf5e2e75f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.613 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce721e8f-22c2-45de-b537-3ab1f2cec4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.615 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1a0295-a200-451d-a1bf-7d62bb4432a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.634 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d76ac3a3-0af0-44a0-96da-d5eff6aeb66f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589172, 'reachable_time': 21768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307186, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.637 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.638 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d59b13dc-a238-4f83-a5e5-8fb60eb23f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d251c03\x2de62d\x2d4f4a\x2d933e\x2d92ba86d2f7be.mount: Deactivated successfully.
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 INFO nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 1.13 seconds to destroy the instance on the hypervisor.
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 DEBUG oslo.service.loopingcall [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 DEBUG nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.644 253665 DEBUG nova.network.neutron [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.709 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-changed-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.710 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Refreshing instance network info cache due to event network-changed-e8eabe8a-7cdb-44e6-9266-09d08038b4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.710 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.731 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.787 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.787 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance network_info: |[{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.788 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.788 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Refreshing network info cache for port e8eabe8a-7cdb-44e6-9266-09d08038b4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.793 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start _get_guest_xml network_info=[{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.801 253665 WARNING nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.812 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.814 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.819 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.819 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.825 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:24 compute-0 nova_compute[253661]: 2025-11-22 09:17:24.988 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.032 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.033 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance network_info: |[{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.036 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start _get_guest_xml network_info=[{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.040 253665 WARNING nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.045 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.046 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.048 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.048 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.054 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:25 compute-0 ceph-mon[75021]: pgmap v1590: 305 pgs: 305 active+clean; 95 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 240 op/s
Nov 22 09:17:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035658854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.307 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.331 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.336 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 164 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 268 op/s
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.494 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-changed-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing instance network info cache due to event network-changed-a029f6c5-4597-4645-9974-c282b8014824. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.496 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing network info cache for port a029f6c5-4597-4645-9974-c282b8014824 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077379852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.633 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.665 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.671 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551848851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.821 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.824 253665 DEBUG nova.virt.libvirt.vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-4
87469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.824 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.826 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.828 253665 DEBUG nova.objects.instance [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.844 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <uuid>2964b30c-ab3b-4bab-8f11-2492007f83ac</uuid>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <name>instance-00000030</name>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersTestJSON-server-1581347636</nova:name>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:24</nova:creationTime>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <nova:port uuid="e8eabe8a-7cdb-44e6-9266-09d08038b4ea">
Nov 22 09:17:25 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="serial">2964b30c-ab3b-4bab-8f11-2492007f83ac</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="uuid">2964b30c-ab3b-4bab-8f11-2492007f83ac</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2964b30c-ab3b-4bab-8f11-2492007f83ac_disk">
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config">
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:25 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:80:82:17"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <target dev="tape8eabe8a-7c"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/console.log" append="off"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:25 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:25 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:25 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:25 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:25 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.846 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Preparing to wait for external event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.846 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG nova.virt.libvirt.vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.848 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.848 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.849 253665 DEBUG os_vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.850 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8eabe8a-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.855 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8eabe8a-7c, col_values=(('external_ids', {'iface-id': 'e8eabe8a-7cdb-44e6-9266-09d08038b4ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:82:17', 'vm-uuid': '2964b30c-ab3b-4bab-8f11-2492007f83ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.857 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:25 compute-0 NetworkManager[48920]: <info>  [1763803045.8588] manager: (tape8eabe8a-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.865 253665 INFO os_vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c')
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.922 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:80:82:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.924 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Using config drive
Nov 22 09:17:25 compute-0 nova_compute[253661]: 2025-11-22 09:17:25.949 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735200162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.149 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.150 253665 DEBUG nova.virt.libvirt.vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:20Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.151 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.151 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.152 253665 DEBUG nova.objects.instance [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.173 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <uuid>971e37bd-eb33-42b7-b5c7-86eff88cb700</uuid>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <name>instance-0000002f</name>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:name>tempest-AttachInterfacesV270Test-server-1595917141</nova:name>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:25</nova:creationTime>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:user uuid="31a3f645b946468d9e6fe3b907dfdc0b">tempest-AttachInterfacesV270Test-1454989706-project-member</nova:user>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:project uuid="0b711aaafbb94138a8f95e1e15d0f0a4">tempest-AttachInterfacesV270Test-1454989706</nova:project>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <nova:port uuid="a029f6c5-4597-4645-9974-c282b8014824">
Nov 22 09:17:26 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="serial">971e37bd-eb33-42b7-b5c7-86eff88cb700</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="uuid">971e37bd-eb33-42b7-b5c7-86eff88cb700</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/971e37bd-eb33-42b7-b5c7-86eff88cb700_disk">
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config">
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:85:8e:01"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <target dev="tapa029f6c5-45"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/console.log" append="off"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:26 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:26 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.174 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Preparing to wait for external event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.176 253665 DEBUG nova.virt.libvirt.vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:20Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.176 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG os_vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.178 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.178 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.180 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa029f6c5-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa029f6c5-45, col_values=(('external_ids', {'iface-id': 'a029f6c5-4597-4645-9974-c282b8014824', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:8e:01', 'vm-uuid': '971e37bd-eb33-42b7-b5c7-86eff88cb700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3035658854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1077379852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2551848851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1735200162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:26 compute-0 NetworkManager[48920]: <info>  [1763803046.2245] manager: (tapa029f6c5-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.228 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.233 253665 INFO os_vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45')
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.298 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.299 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.299 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:85:8e:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.300 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Using config drive
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.325 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.624 253665 DEBUG nova.network.neutron [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.648 253665 INFO nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 2.00 seconds to deallocate network for instance.
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.654 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating config drive at /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.659 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb277ybpr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.744 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.745 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.759 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updated VIF entry in instance network info cache for port e8eabe8a-7cdb-44e6-9266-09d08038b4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.759 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.774 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.799 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803031.6466799, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.800 253665 INFO nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Stopped (Lifecycle Event)
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.806 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb277ybpr" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.832 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.836 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.887 253665 DEBUG nova.compute.manager [None req-d98b4322-b187-491d-a121-13a1a03a14a1 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.889 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.889 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 WARNING nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with vm_state deleted and task_state None.
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-deleted-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-deleted-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:26 compute-0 nova_compute[253661]: 2025-11-22 09:17:26.933 253665 DEBUG oslo_concurrency.processutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:27 compute-0 ceph-mon[75021]: pgmap v1591: 305 pgs: 305 active+clean; 164 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 268 op/s
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.250 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating config drive at /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.255 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xt9tp0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.311 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.312 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deleting local config drive /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config because it was imported into RBD.
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.340 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 168 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.362 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.362 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.364 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.364 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.3720] manager: (tape8eabe8a-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 22 09:17:27 compute-0 kernel: tape8eabe8a-7c: entered promiscuous mode
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00424|binding|INFO|Claiming lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea for this chassis.
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00425|binding|INFO|e8eabe8a-7cdb-44e6-9266-09d08038b4ea: Claiming fa:16:3e:80:82:17 10.100.0.12
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.391 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:82:17 10.100.0.12'], port_security=['fa:16:3e:80:82:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2964b30c-ab3b-4bab-8f11-2492007f83ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e8eabe8a-7cdb-44e6-9266-09d08038b4ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.392 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e8eabe8a-7cdb-44e6-9266-09d08038b4ea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.394 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.395 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.400 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xt9tp0z" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028693184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.411 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f75f9046-2cfe-434a-98a3-e725d7260be9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.412 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.414 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d86acd0e-75f2-41c6-972c-c62df11a321c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a57c98d-a56f-4290-b661-b8264cb868ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 systemd-machined[215941]: New machine qemu-52-instance-00000030.
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.429 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e293849e-251e-42eb-a508-d2ce7cfba72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.433 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.436 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:27 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000030.
Nov 22 09:17:27 compute-0 systemd-udevd[307453]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[808206d9-2d07-484f-8172-5af0b4070f85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.4739] device (tape8eabe8a-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.4758] device (tape8eabe8a-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00426|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea ovn-installed in OVS
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00427|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea up in Southbound
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.488 253665 DEBUG oslo_concurrency.processutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.497 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[067df6bc-5b0b-4e11-a2ae-ad269bcb4cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 systemd-udevd[307459]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.5048] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.506 253665 DEBUG nova.compute.provider_tree [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[790f8856-6b4a-4d70-aeae-4462af576711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.522 253665 DEBUG nova.scheduler.client.report [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.543 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23170e48-bfbb-4845-a829-4567dfa5139f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.546 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7bee5dc0-3aec-4373-bc16-2de0e043ed67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.551 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.555 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.5768] device (tapd93e3720-b0): carrier: link connected
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.586 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f31cedf-3f8d-4c5e-ad70-1d7a52caa05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b4fc1a-15f8-4616-b0fb-73944fbfed02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589831, 'reachable_time': 33339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307523, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a102cee2-fd5f-44d2-b9c1-768bd193c6ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589831, 'tstamp': 589831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307528, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.628 253665 INFO nova.scheduler.client.report [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Deleted allocations for instance a63d88a0-884c-4328-a21c-6bedf9264f2e
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.650 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f77f93a9-a297-4afa-b405-3143d256cf11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589831, 'reachable_time': 33339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307534, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.655 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.656 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deleting local config drive /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config because it was imported into RBD.
Nov 22 09:17:27 compute-0 podman[307479]: 2025-11-22 09:17:27.668668989 +0000 UTC m=+0.126327581 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.692 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.698 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[051d6987-3829-499a-82e9-ef443e319193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 kernel: tapa029f6c5-45: entered promiscuous mode
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.7174] manager: (tapa029f6c5-45): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Nov 22 09:17:27 compute-0 systemd-udevd[307476]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00428|binding|INFO|Claiming lport a029f6c5-4597-4645-9974-c282b8014824 for this chassis.
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00429|binding|INFO|a029f6c5-4597-4645-9974-c282b8014824: Claiming fa:16:3e:85:8e:01 10.100.0.11
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.7325] device (tapa029f6c5-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.733 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:8e:01 10.100.0.11'], port_security=['fa:16:3e:85:8e:01 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a029f6c5-4597-4645-9974-c282b8014824) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.7338] device (tapa029f6c5-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:27 compute-0 systemd-machined[215941]: New machine qemu-53-instance-0000002f.
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.791 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002f.
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.794 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a90c1e89-b18e-4c64-90bc-687f1822af0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.796 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.797 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.797 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00430|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 ovn-installed in OVS
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00431|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 up in Southbound
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 NetworkManager[48920]: <info>  [1763803047.8002] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 22 09:17:27 compute-0 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.808 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:27 compute-0 ovn_controller[152872]: 2025-11-22T09:17:27Z|00432|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.812 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81fdb9cd-00a4-451d-95bc-15dec2fe97a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.814 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.815 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:27 compute-0 nova_compute[253661]: 2025-11-22 09:17:27.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177867885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.092 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.090278, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.093 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Started (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.121 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.131 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.091522, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.133 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Paused (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.156 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.160 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.198 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.199 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.204 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.204 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.203779, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Started (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.217 253665 DEBUG nova.compute.manager [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.217 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG nova.compute.manager [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Processing event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.219 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.229 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.231 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updated VIF entry in instance network info cache for port a029f6c5-4597-4645-9974-c282b8014824. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.231 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.235 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:28 compute-0 ceph-mon[75021]: pgmap v1592: 305 pgs: 305 active+clean; 168 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 09:17:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1028693184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4177867885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.239 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.203899, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.239 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Paused (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.242 253665 INFO nova.virt.libvirt.driver [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance spawned successfully.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.243 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.245 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.248 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.248 253665 WARNING nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with vm_state active and task_state deleting.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.259 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.268 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.272 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.272 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.273 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.273 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.274 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.274 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 podman[307691]: 2025-11-22 09:17:28.281786523 +0000 UTC m=+0.077032051 container create 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.298 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.299 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.2268512, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.299 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Resumed (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.326 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:28 compute-0 podman[307691]: 2025-11-22 09:17:28.240668651 +0000 UTC m=+0.035914179 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.333 253665 INFO nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 6.66 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.334 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.343 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:28 compute-0 systemd[1]: Started libpod-conmon-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope.
Nov 22 09:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff0afd4b1c1c84ce2231b8246fcd82b5ce225f423297f3fe953c53d2600a88c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.401 253665 INFO nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 8.33 seconds to build instance.
Nov 22 09:17:28 compute-0 podman[307691]: 2025-11-22 09:17:28.420131964 +0000 UTC m=+0.215377512 container init 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.422 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 podman[307691]: 2025-11-22 09:17:28.428027764 +0000 UTC m=+0.223273292 container start 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:17:28 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : New worker (307712) forked
Nov 22 09:17:28 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : Loading success.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.520 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4101MB free_disk=59.933631896972656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.532 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a029f6c5-4597-4645-9974-c282b8014824 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.535 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.551 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[daecf899-360d-44ad-8ea4-5720f3c5cdd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.552 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd29d4d6f-51 in ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.555 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd29d4d6f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dce610b3-0ee7-4dfc-b911-0aaa7e43293e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07d5c3c3-c4c1-47e5-b45b-79c043a13d98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.569 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5668bb20-a337-4c51-9f80-9b99cded4e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.586 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[885a6c08-97b7-41ce-845a-ee7fd801367c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 971e37bd-eb33-42b7-b5c7-86eff88cb700 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2964b30c-ab3b-4bab-8f11-2492007f83ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.589 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.625 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2738df5-ccbd-4ef8-83e8-e60a922f7e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 NetworkManager[48920]: <info>  [1763803048.6348] manager: (tapd29d4d6f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.636 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4361a0-8cc0-49fe-8b39-5df73782e721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.640 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.679 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8819d0e8-3fac-43b4-b1c1-fabdee950bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.684 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3588ee82-35b2-4a53-a738-80d82f51e37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 NetworkManager[48920]: <info>  [1763803048.7137] device (tapd29d4d6f-50): carrier: link connected
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.720 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[caee8f1c-5977-4699-a62b-da10200e94f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12d4a634-6505-4040-9f09-61fd488551fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307732, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.765 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb5160b-2d1f-4820-9511-c93a4b68f235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:fd66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589945, 'tstamp': 589945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307733, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4a54e7-343c-438b-950d-bb962ea4ccde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307735, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[308134b8-89d1-4246-94eb-1b2203772855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[972a00ae-74d2-45d1-bbd7-dde992434173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.900 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:28 compute-0 NetworkManager[48920]: <info>  [1763803048.9032] manager: (tapd29d4d6f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 22 09:17:28 compute-0 kernel: tapd29d4d6f-50: entered promiscuous mode
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.906 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:28 compute-0 ovn_controller[152872]: 2025-11-22T09:17:28Z|00433|binding|INFO|Releasing lport c20b47b0-28aa-4f40-a285-cb154afca96a from this chassis (sb_readonly=0)
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.924 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.925 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d2def3f0-771e-4d85-9d8c-429ea92d5992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.926 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.927 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'env', 'PROCESS_TAG=haproxy-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.929 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.930 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.931 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.932 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.932 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Processing event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.933 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.934 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.935 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.935 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.936 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.936 253665 WARNING nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with vm_state building and task_state spawning.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.937 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.939 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.944 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.9438565, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Resumed (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.948 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.955 253665 INFO nova.virt.libvirt.driver [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance spawned successfully.
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.957 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.969 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803033.9682946, 4589c5da-d558-41a1-bf54-30746991be9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.970 253665 INFO nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Stopped (Lifecycle Event)
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.976 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.987 253665 DEBUG nova.compute.manager [None req-06a5be85-6028-4699-9254-031adc6feaaf - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.989 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.992 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.993 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:28 compute-0 nova_compute[253661]: 2025-11-22 09:17:28.995 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.026 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.059 253665 INFO nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.060 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589551161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.123 253665 INFO nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 9.19 seconds to build instance.
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.140 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.142 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.147 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.174 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.202 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.202 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1589551161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 22 09:17:29 compute-0 podman[307788]: 2025-11-22 09:17:29.372889288 +0000 UTC m=+0.063891343 container create 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:17:29 compute-0 systemd[1]: Started libpod-conmon-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope.
Nov 22 09:17:29 compute-0 podman[307788]: 2025-11-22 09:17:29.335497206 +0000 UTC m=+0.026499281 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.442 253665 INFO nova.compute.manager [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Pausing
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.443 253665 DEBUG nova.objects.instance [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'flavor' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad2c1a91593403ac1a5427beee5092aa718b4ae7eb2816e33351712b63d9327/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:29 compute-0 podman[307788]: 2025-11-22 09:17:29.466854607 +0000 UTC m=+0.157856662 container init 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.467 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803049.4672964, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.468 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Paused (Lifecycle Event)
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.470 253665 DEBUG nova.compute.manager [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:29 compute-0 podman[307788]: 2025-11-22 09:17:29.473200621 +0000 UTC m=+0.164202666 container start 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.491 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.496 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:29 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : New worker (307809) forked
Nov 22 09:17:29 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : Loading success.
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.961 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.962 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.963 253665 DEBUG nova.objects.instance [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'flavor' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:29 compute-0 nova_compute[253661]: 2025-11-22 09:17:29.990 253665 DEBUG nova.objects.instance [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.011 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:30 compute-0 ceph-mon[75021]: pgmap v1593: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.369 253665 DEBUG nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.369 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.371 253665 WARNING nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received unexpected event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with vm_state paused and task_state None.
Nov 22 09:17:30 compute-0 nova_compute[253661]: 2025-11-22 09:17:30.554 253665 DEBUG nova.policy [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31a3f645b946468d9e6fe3b907dfdc0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Nov 22 09:17:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.511 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully created port: 0368daf2-eb08-4459-8b7b-e5a565dbb954 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.930 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.931 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.931 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.932 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.932 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.934 253665 INFO nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Terminating instance
Nov 22 09:17:31 compute-0 nova_compute[253661]: 2025-11-22 09:17:31.935 253665 DEBUG nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:17:32 compute-0 kernel: tape8eabe8a-7c (unregistering): left promiscuous mode
Nov 22 09:17:32 compute-0 NetworkManager[48920]: <info>  [1763803052.0874] device (tape8eabe8a-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 ovn_controller[152872]: 2025-11-22T09:17:32Z|00434|binding|INFO|Releasing lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea from this chassis (sb_readonly=0)
Nov 22 09:17:32 compute-0 ovn_controller[152872]: 2025-11-22T09:17:32Z|00435|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea down in Southbound
Nov 22 09:17:32 compute-0 ovn_controller[152872]: 2025-11-22T09:17:32Z|00436|binding|INFO|Removing iface tape8eabe8a-7c ovn-installed in OVS
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.111 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:82:17 10.100.0.12'], port_security=['fa:16:3e:80:82:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2964b30c-ab3b-4bab-8f11-2492007f83ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e8eabe8a-7cdb-44e6-9266-09d08038b4ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.113 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e8eabe8a-7cdb-44e6-9266-09d08038b4ea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.115 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ce9350-4829-4e37-a22f-348c84c480e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.117 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 09:17:32 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 22 09:17:32 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Consumed 1.804s CPU time.
Nov 22 09:17:32 compute-0 systemd-machined[215941]: Machine qemu-52-instance-00000030 terminated.
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.174 253665 INFO nova.virt.libvirt.driver [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance destroyed successfully.
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.175 253665 DEBUG nova.objects.instance [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.189 253665 DEBUG nova.virt.libvirt.vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.189 253665 DEBUG nova.network.os_vif_util [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.190 253665 DEBUG nova.network.os_vif_util [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.191 253665 DEBUG os_vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.193 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8eabe8a-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.200 253665 INFO os_vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c')
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : Exiting Master process...
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : Exiting Master process...
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [ALERT]    (307710) : Current worker (307712) exited with code 143 (Terminated)
Nov 22 09:17:32 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : All workers exited. Exiting... (0)
Nov 22 09:17:32 compute-0 systemd[1]: libpod-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope: Deactivated successfully.
Nov 22 09:17:32 compute-0 podman[307853]: 2025-11-22 09:17:32.331965097 +0000 UTC m=+0.106445987 container died 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-eff0afd4b1c1c84ce2231b8246fcd82b5ce225f423297f3fe953c53d2600a88c-merged.mount: Deactivated successfully.
Nov 22 09:17:32 compute-0 ceph-mon[75021]: pgmap v1594: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Nov 22 09:17:32 compute-0 podman[307853]: 2025-11-22 09:17:32.572790169 +0000 UTC m=+0.347271039 container cleanup 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:17:32 compute-0 systemd[1]: libpod-conmon-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope: Deactivated successfully.
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.679 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully updated port: 0368daf2-eb08-4459-8b7b-e5a565dbb954 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:32 compute-0 podman[307900]: 2025-11-22 09:17:32.830707946 +0000 UTC m=+0.234155381 container remove 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.844 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69800ddd-6079-46a4-96b9-374bfde07ca7]: (4, ('Sat Nov 22 09:17:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688)\n03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688\nSat Nov 22 09:17:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688)\n03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.847 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d772891-e102-40a4-86c5-407a004a5e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.849 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:32 compute-0 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[487e1fd3-e65c-4c51-947b-2fd42d4e7198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.887 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[781af5c9-fc18-4911-aded-e5df27023638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.889 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c094be-5cce-46e4-b5f1-4658e05d41be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.901 253665 WARNING nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] d29d4d6f-5bba-4588-9a6e-d6174b2f2613 already exists in list: networks containing: ['d29d4d6f-5bba-4588-9a6e-d6174b2f2613']. ignoring it
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2b3ced-2e39-4114-835f-221005bf5c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589822, 'reachable_time': 40933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307914, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.916 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.916 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[51791ef9-adea-4cf8-9fa9-8fd02d63ba4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:32 compute-0 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.996 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.996 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:32 compute-0 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:33 compute-0 nova_compute[253661]: 2025-11-22 09:17:33.055 253665 DEBUG nova.compute.manager [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-changed-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:33 compute-0 nova_compute[253661]: 2025-11-22 09:17:33.055 253665 DEBUG nova.compute.manager [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing instance network info cache due to event network-changed-0368daf2-eb08-4459-8b7b-e5a565dbb954. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:33 compute-0 nova_compute[253661]: 2025-11-22 09:17:33.056 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.264 253665 INFO nova.virt.libvirt.driver [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deleting instance files /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac_del
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.265 253665 INFO nova.virt.libvirt.driver [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deletion of /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac_del complete
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.308 253665 INFO nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 2.37 seconds to destroy the instance on the hypervisor.
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG oslo.service.loopingcall [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG nova.network.neutron [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:17:34 compute-0 ceph-mon[75021]: pgmap v1595: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Nov 22 09:17:34 compute-0 nova_compute[253661]: 2025-11-22 09:17:34.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.119 253665 DEBUG nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.121 253665 DEBUG nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.121 253665 WARNING nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received unexpected event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with vm_state paused and task_state deleting.
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.195 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.252 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.270 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.271 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.271 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing network info cache for port 0368daf2-eb08-4459-8b7b-e5a565dbb954 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.276 253665 DEBUG nova.virt.libvirt.vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.277 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.278 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.278 253665 DEBUG os_vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.280 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.288 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0368daf2-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.288 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0368daf2-eb, col_values=(('external_ids', {'iface-id': '0368daf2-eb08-4459-8b7b-e5a565dbb954', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:86:4f', 'vm-uuid': '971e37bd-eb33-42b7-b5c7-86eff88cb700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 NetworkManager[48920]: <info>  [1763803055.2915] manager: (tap0368daf2-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.298 253665 INFO os_vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb')
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.299 253665 DEBUG nova.virt.libvirt.vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.300 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.300 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.305 253665 DEBUG nova.virt.libvirt.guest [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:15:86:4f"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <target dev="tap0368daf2-eb"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]: </interface>
Nov 22 09:17:35 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:17:35 compute-0 kernel: tap0368daf2-eb: entered promiscuous mode
Nov 22 09:17:35 compute-0 NetworkManager[48920]: <info>  [1763803055.3271] manager: (tap0368daf2-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Nov 22 09:17:35 compute-0 ovn_controller[152872]: 2025-11-22T09:17:35Z|00437|binding|INFO|Claiming lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 for this chassis.
Nov 22 09:17:35 compute-0 ovn_controller[152872]: 2025-11-22T09:17:35Z|00438|binding|INFO|0368daf2-eb08-4459-8b7b-e5a565dbb954: Claiming fa:16:3e:15:86:4f 10.100.0.13
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.334 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:86:4f 10.100.0.13'], port_security=['fa:16:3e:15:86:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0368daf2-eb08-4459-8b7b-e5a565dbb954) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.335 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0368daf2-eb08-4459-8b7b-e5a565dbb954 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 bound to our chassis
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.337 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 09:17:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 121 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.2 MiB/s wr, 264 op/s
Nov 22 09:17:35 compute-0 ovn_controller[152872]: 2025-11-22T09:17:35Z|00439|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 ovn-installed in OVS
Nov 22 09:17:35 compute-0 ovn_controller[152872]: 2025-11-22T09:17:35Z|00440|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 up in Southbound
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 systemd-udevd[307923]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[340bfab1-a3cc-4c5d-a5f4-d9573125d561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 NetworkManager[48920]: <info>  [1763803055.3910] device (tap0368daf2-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:35 compute-0 NetworkManager[48920]: <info>  [1763803055.3937] device (tap0368daf2-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.409 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.409 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.410 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:85:8e:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.410 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:15:86:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.411 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e921f4f7-14f1-4c20-9dd0-9ca69ace7cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.415 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2c15a-b5bc-4062-8218-ab2abbe5c044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.436 253665 DEBUG nova.virt.libvirt.guest [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:name>tempest-AttachInterfacesV270Test-server-1595917141</nova:name>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:17:35</nova:creationTime>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:user uuid="31a3f645b946468d9e6fe3b907dfdc0b">tempest-AttachInterfacesV270Test-1454989706-project-member</nova:user>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:project uuid="0b711aaafbb94138a8f95e1e15d0f0a4">tempest-AttachInterfacesV270Test-1454989706</nova:project>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:port uuid="a029f6c5-4597-4645-9974-c282b8014824">
Nov 22 09:17:35 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     <nova:port uuid="0368daf2-eb08-4459-8b7b-e5a565dbb954">
Nov 22 09:17:35 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:17:35 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:17:35 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:17:35 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:17:35 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.452 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c62d095e-d0f0-40e3-b298-fe862945fa74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.469 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2d59858b-3c04-49aa-8709-2726f5e7ec68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307930, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4af937a1-0ef3-4210-a727-42266e8b7c45]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589959, 'tstamp': 589959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307931, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589962, 'tstamp': 589962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307931, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 nova_compute[253661]: 2025-11-22 09:17:35.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.520 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:36 compute-0 nova_compute[253661]: 2025-11-22 09:17:36.449 253665 DEBUG nova.network.neutron [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:36 compute-0 nova_compute[253661]: 2025-11-22 09:17:36.464 253665 INFO nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 2.16 seconds to deallocate network for instance.
Nov 22 09:17:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:36 compute-0 nova_compute[253661]: 2025-11-22 09:17:36.506 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:36 compute-0 nova_compute[253661]: 2025-11-22 09:17:36.507 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:36 compute-0 nova_compute[253661]: 2025-11-22 09:17:36.575 253665 DEBUG oslo_concurrency.processutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:36 compute-0 ceph-mon[75021]: pgmap v1596: 305 pgs: 305 active+clean; 121 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.2 MiB/s wr, 264 op/s
Nov 22 09:17:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249014273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.023 253665 DEBUG oslo_concurrency.processutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.029 253665 DEBUG nova.compute.provider_tree [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.043 253665 DEBUG nova.scheduler.client.report [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.072 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.095 253665 INFO nova.scheduler.client.report [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 2964b30c-ab3b-4bab-8f11-2492007f83ac
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.156 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.201 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updated VIF entry in instance network info cache for port 0368daf2-eb08-4459-8b7b-e5a565dbb954. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.202 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.220 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.295 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.296 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.296 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 WARNING nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state None.
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-deleted-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.299 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.299 253665 WARNING nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state None.
Nov 22 09:17:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 105 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 705 KiB/s wr, 194 op/s
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.409 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.411 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.412 253665 INFO nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Terminating instance
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.413 253665 DEBUG nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:17:37 compute-0 kernel: tapa029f6c5-45 (unregistering): left promiscuous mode
Nov 22 09:17:37 compute-0 NetworkManager[48920]: <info>  [1763803057.6425] device (tapa029f6c5-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/249014273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00441|binding|INFO|Releasing lport a029f6c5-4597-4645-9974-c282b8014824 from this chassis (sb_readonly=0)
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00442|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 down in Southbound
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00443|binding|INFO|Removing iface tapa029f6c5-45 ovn-installed in OVS
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.692 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:8e:01 10.100.0.11'], port_security=['fa:16:3e:85:8e:01 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a029f6c5-4597-4645-9974-c282b8014824) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.693 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a029f6c5-4597-4645-9974-c282b8014824 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.695 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 09:17:37 compute-0 kernel: tap0368daf2-eb (unregistering): left promiscuous mode
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 NetworkManager[48920]: <info>  [1763803057.7076] device (tap0368daf2-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00444|binding|INFO|Releasing lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 from this chassis (sb_readonly=0)
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00445|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 down in Southbound
Nov 22 09:17:37 compute-0 ovn_controller[152872]: 2025-11-22T09:17:37Z|00446|binding|INFO|Removing iface tap0368daf2-eb ovn-installed in OVS
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.718 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e187871c-dbbd-40ef-8a0f-d872ba5bbfbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.724 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:86:4f 10.100.0.13'], port_security=['fa:16:3e:15:86:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0368daf2-eb08-4459-8b7b-e5a565dbb954) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.738 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.754 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[58e41b19-07f3-4624-92a1-72055c1b6a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.758 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3d5c25-0378-40b1-a0fd-6ab4092ce4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 22 09:17:37 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Consumed 8.925s CPU time.
Nov 22 09:17:37 compute-0 systemd-machined[215941]: Machine qemu-53-instance-0000002f terminated.
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.791 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6272218e-368a-4e82-9d18-4409037452bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed1ad42-b03a-4294-87e7-e8bd3067121f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307970, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.838 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a024bb1-f4fc-4a93-ab65-43f571ce8735]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589959, 'tstamp': 589959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589962, 'tstamp': 589962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 NetworkManager[48920]: <info>  [1763803057.8399] manager: (tapa029f6c5-45): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 NetworkManager[48920]: <info>  [1763803057.8556] manager: (tap0368daf2-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.859 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.859 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.860 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.860 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.862 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0368daf2-eb08-4459-8b7b-e5a565dbb954 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.863 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74813c28-f19d-43df-ae95-37b325600b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.864 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 namespace which is not needed anymore
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.875 253665 INFO nova.virt.libvirt.driver [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance destroyed successfully.
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.876 253665 DEBUG nova.objects.instance [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'resources' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.891 253665 DEBUG nova.virt.libvirt.vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.891 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.893 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.893 253665 DEBUG os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.897 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa029f6c5-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.908 253665 INFO os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45')
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.909 253665 DEBUG nova.virt.libvirt.vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.910 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.910 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.911 253665 DEBUG os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.912 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0368daf2-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:37 compute-0 nova_compute[253661]: 2025-11-22 09:17:37.918 253665 INFO os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb')
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : haproxy version is 2.8.14-c23fe91
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : path to executable is /usr/sbin/haproxy
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : Exiting Master process...
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : Exiting Master process...
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [ALERT]    (307807) : Current worker (307809) exited with code 143 (Terminated)
Nov 22 09:17:38 compute-0 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : All workers exited. Exiting... (0)
Nov 22 09:17:38 compute-0 systemd[1]: libpod-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope: Deactivated successfully.
Nov 22 09:17:38 compute-0 podman[308027]: 2025-11-22 09:17:38.039161592 +0000 UTC m=+0.055592232 container died 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30-userdata-shm.mount: Deactivated successfully.
Nov 22 09:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad2c1a91593403ac1a5427beee5092aa718b4ae7eb2816e33351712b63d9327-merged.mount: Deactivated successfully.
Nov 22 09:17:38 compute-0 podman[308027]: 2025-11-22 09:17:38.086082222 +0000 UTC m=+0.102512852 container cleanup 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.092 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.094 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:38 compute-0 systemd[1]: libpod-conmon-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope: Deactivated successfully.
Nov 22 09:17:38 compute-0 podman[308056]: 2025-11-22 09:17:38.164892978 +0000 UTC m=+0.051788220 container remove 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b6d08f-9176-4451-8230-9278b79e0eeb]: (4, ('Sat Nov 22 09:17:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 (4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30)\n4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30\nSat Nov 22 09:17:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 (4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30)\n4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.174 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33beb765-f26d-4bfb-b2a3-8ab62bcfb2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.176 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.178 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:38 compute-0 kernel: tapd29d4d6f-50: left promiscuous mode
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dac9d2b1-2f1b-4ee6-8366-79ee4cc678ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1685d9f8-fb5a-43e9-87a8-ce305c7e7e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.218 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbbee29-bc2e-4a1c-97fa-41f9fa36ba43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06eaa753-6b9d-4ff1-ae3f-aa9f4dc57d38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589935, 'reachable_time': 33253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308071, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 systemd[1]: run-netns-ovnmeta\x2dd29d4d6f\x2d5bba\x2d4588\x2d9a6e\x2dd6174b2f2613.mount: Deactivated successfully.
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.243 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:17:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.244 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c787b0f1-b562-4f29-82a7-e6c316f35462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.340 253665 INFO nova.virt.libvirt.driver [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deleting instance files /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700_del
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.341 253665 INFO nova.virt.libvirt.driver [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deletion of /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700_del complete
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.384 253665 INFO nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 0.97 seconds to destroy the instance on the hypervisor.
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.384 253665 DEBUG oslo.service.loopingcall [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.385 253665 DEBUG nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.385 253665 DEBUG nova.network.neutron [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:17:38 compute-0 ceph-mon[75021]: pgmap v1597: 305 pgs: 305 active+clean; 105 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 705 KiB/s wr, 194 op/s
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.764 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803043.7633476, a63d88a0-884c-4328-a21c-6bedf9264f2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.765 253665 INFO nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Stopped (Lifecycle Event)
Nov 22 09:17:38 compute-0 nova_compute[253661]: 2025-11-22 09:17:38.783 253665 DEBUG nova.compute.manager [None req-73c3a6d0-5c1b-4f7c-b36f-a19bc71c3cc4 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 176 op/s
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.425 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.425 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 WARNING nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state deleting.
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-deleted-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 INFO nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Neutron deleted interface a029f6c5-4597-4645-9974-c282b8014824; detaching it from the instance and deleting it from the info cache
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 DEBUG nova.network.neutron [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.452 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Detach interface failed, port_id=a029f6c5-4597-4645-9974-c282b8014824, reason: Instance 971e37bd-eb33-42b7-b5c7-86eff88cb700 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.788 253665 DEBUG nova.network.neutron [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.806 253665 INFO nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 1.42 seconds to deallocate network for instance.
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.852 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.853 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.904 253665 DEBUG oslo_concurrency.processutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:39 compute-0 nova_compute[253661]: 2025-11-22 09:17:39.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.185 253665 DEBUG nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.187 253665 DEBUG nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.187 253665 WARNING nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with vm_state deleted and task_state None.
Nov 22 09:17:40 compute-0 sudo[308093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:40 compute-0 sudo[308093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:40 compute-0 sudo[308093]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272310526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.414 253665 DEBUG oslo_concurrency.processutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.421 253665 DEBUG nova.compute.provider_tree [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:40 compute-0 sudo[308120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:17:40 compute-0 sudo[308120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.445 253665 DEBUG nova.scheduler.client.report [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:40 compute-0 sudo[308120]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.475 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.502 253665 INFO nova.scheduler.client.report [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Deleted allocations for instance 971e37bd-eb33-42b7-b5c7-86eff88cb700
Nov 22 09:17:40 compute-0 sudo[308145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:40 compute-0 sudo[308145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:40 compute-0 sudo[308145]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.563 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:40 compute-0 sudo[308170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:17:40 compute-0 sudo[308170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:40 compute-0 ceph-mon[75021]: pgmap v1598: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 176 op/s
Nov 22 09:17:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4272310526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.947 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.948 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:40 compute-0 nova_compute[253661]: 2025-11-22 09:17:40.966 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.026 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.027 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.033 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.034 253665 INFO nova.compute.claims [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.117 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:41 compute-0 podman[308267]: 2025-11-22 09:17:41.144184688 +0000 UTC m=+0.099125239 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:17:41 compute-0 podman[308267]: 2025-11-22 09:17:41.256881617 +0000 UTC m=+0.211822158 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:17:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 KiB/s wr, 152 op/s
Nov 22 09:17:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.576 253665 DEBUG nova.compute.manager [req-745f209a-110a-4b4b-9bf5-7d659bcfb22f req-1b848f30-4cfc-43b2-b3cb-002fc9311dee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-deleted-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787652747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.651 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.661 253665 DEBUG nova.compute.provider_tree [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.676 253665 DEBUG nova.scheduler.client.report [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3787652747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.700 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.702 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.755 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.756 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.774 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.789 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.873 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.875 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.876 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating image(s)
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.903 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.933 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.958 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:41 compute-0 nova_compute[253661]: 2025-11-22 09:17:41.964 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:41 compute-0 sudo[308170]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:17:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.040 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.041 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.042 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.042 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.064 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:42 compute-0 sudo[308495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.073 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:42 compute-0 sudo[308495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:42 compute-0 sudo[308495]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:42 compute-0 sudo[308540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:17:42 compute-0 sudo[308540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:42 compute-0 sudo[308540]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:42 compute-0 sudo[308573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:42 compute-0 sudo[308573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:42 compute-0 sudo[308573]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.240 253665 DEBUG nova.policy [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:42 compute-0 sudo[308609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:17:42 compute-0 sudo[308609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.381 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.450 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.557 253665 DEBUG nova.objects.instance [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.569 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.570 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Ensure instance console log exists: /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.570 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.571 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.571 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:42 compute-0 sudo[308609]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:42 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 27244405-b9fd-4433-b14c-1c6922c09a66 does not exist
Nov 22 09:17:42 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 125286ea-17ec-4c31-9e34-69eed71bd558 does not exist
Nov 22 09:17:42 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 61458e94-e40f-411f-b760-397bacb922f8 does not exist
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:17:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:17:42 compute-0 nova_compute[253661]: 2025-11-22 09:17:42.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:42 compute-0 sudo[308739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:42 compute-0 sudo[308739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:42 compute-0 sudo[308739]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:42 compute-0 ceph-mon[75021]: pgmap v1599: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 KiB/s wr, 152 op/s
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:17:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:17:43 compute-0 sudo[308764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:17:43 compute-0 sudo[308764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:43 compute-0 sudo[308764]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:43 compute-0 sudo[308789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:43 compute-0 sudo[308789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:43 compute-0 sudo[308789]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:43 compute-0 sudo[308814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:17:43 compute-0 sudo[308814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 KiB/s wr, 160 op/s
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.489758212 +0000 UTC m=+0.045645320 container create adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 09:17:43 compute-0 systemd[1]: Started libpod-conmon-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope.
Nov 22 09:17:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.469167832 +0000 UTC m=+0.025054970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.569780577 +0000 UTC m=+0.125667705 container init adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.583231563 +0000 UTC m=+0.139118671 container start adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.586765679 +0000 UTC m=+0.142652887 container attach adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:17:43 compute-0 elegant_chaum[308898]: 167 167
Nov 22 09:17:43 compute-0 systemd[1]: libpod-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope: Deactivated successfully.
Nov 22 09:17:43 compute-0 conmon[308898]: conmon adacc5a305454d6a0b56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope/container/memory.events
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.594980429 +0000 UTC m=+0.150867557 container died adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-111287d53fd15aee228c08531a6b9ea6e1763e817b4f1104945e11ea90d11eb5-merged.mount: Deactivated successfully.
Nov 22 09:17:43 compute-0 podman[308881]: 2025-11-22 09:17:43.640245078 +0000 UTC m=+0.196132186 container remove adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:17:43 compute-0 nova_compute[253661]: 2025-11-22 09:17:43.645 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Successfully created port: 085e3bcc-2e77-4c2e-8298-872aac04e65e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:17:43 compute-0 systemd[1]: libpod-conmon-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope: Deactivated successfully.
Nov 22 09:17:43 compute-0 podman[308922]: 2025-11-22 09:17:43.82301133 +0000 UTC m=+0.057926369 container create e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:43 compute-0 systemd[1]: Started libpod-conmon-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope.
Nov 22 09:17:43 compute-0 podman[308922]: 2025-11-22 09:17:43.795213424 +0000 UTC m=+0.030128503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:43 compute-0 podman[308922]: 2025-11-22 09:17:43.931497005 +0000 UTC m=+0.166412034 container init e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:17:43 compute-0 podman[308922]: 2025-11-22 09:17:43.938726301 +0000 UTC m=+0.173641300 container start e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:43 compute-0 podman[308922]: 2025-11-22 09:17:43.942071893 +0000 UTC m=+0.176986912 container attach e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:44 compute-0 nova_compute[253661]: 2025-11-22 09:17:44.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:45 compute-0 ceph-mon[75021]: pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 KiB/s wr, 160 op/s
Nov 22 09:17:45 compute-0 romantic_khorana[308938]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:17:45 compute-0 romantic_khorana[308938]: --> relative data size: 1.0
Nov 22 09:17:45 compute-0 romantic_khorana[308938]: --> All data devices are unavailable
Nov 22 09:17:45 compute-0 systemd[1]: libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Deactivated successfully.
Nov 22 09:17:45 compute-0 systemd[1]: libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Consumed 1.060s CPU time.
Nov 22 09:17:45 compute-0 conmon[308938]: conmon e144e79918758160f326 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope/container/memory.events
Nov 22 09:17:45 compute-0 podman[308922]: 2025-11-22 09:17:45.040500332 +0000 UTC m=+1.275415351 container died e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669-merged.mount: Deactivated successfully.
Nov 22 09:17:45 compute-0 podman[308922]: 2025-11-22 09:17:45.11489612 +0000 UTC m=+1.349811129 container remove e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:17:45 compute-0 systemd[1]: libpod-conmon-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Deactivated successfully.
Nov 22 09:17:45 compute-0 sudo[308814]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:45 compute-0 sudo[308978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:45 compute-0 sudo[308978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:45 compute-0 sudo[308978]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:45 compute-0 sudo[309003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:17:45 compute-0 sudo[309003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:45 compute-0 sudo[309003]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 71 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Nov 22 09:17:45 compute-0 sudo[309028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:45 compute-0 sudo[309028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:45 compute-0 sudo[309028]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:45 compute-0 sudo[309053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:17:45 compute-0 sudo[309053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.791373248 +0000 UTC m=+0.047086266 container create a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:17:45 compute-0 systemd[1]: Started libpod-conmon-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope.
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.770738966 +0000 UTC m=+0.026451984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.892559866 +0000 UTC m=+0.148272914 container init a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.904157938 +0000 UTC m=+0.159870956 container start a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.909488647 +0000 UTC m=+0.165201685 container attach a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:17:45 compute-0 goofy_archimedes[309132]: 167 167
Nov 22 09:17:45 compute-0 systemd[1]: libpod-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope: Deactivated successfully.
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.911916577 +0000 UTC m=+0.167629595 container died a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 22 09:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42c87cba9eb76046cd31fc8a7f1895d0f9dd9773c94d1bc14a20ac4a128751b-merged.mount: Deactivated successfully.
Nov 22 09:17:45 compute-0 podman[309116]: 2025-11-22 09:17:45.955424764 +0000 UTC m=+0.211137782 container remove a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:17:45 compute-0 nova_compute[253661]: 2025-11-22 09:17:45.966 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Successfully updated port: 085e3bcc-2e77-4c2e-8298-872aac04e65e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:17:45 compute-0 systemd[1]: libpod-conmon-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope: Deactivated successfully.
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.009 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.010 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.011 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.093 253665 DEBUG nova.compute.manager [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.093 253665 DEBUG nova.compute.manager [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing instance network info cache due to event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.094 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:17:46 compute-0 podman[309155]: 2025-11-22 09:17:46.149837788 +0000 UTC m=+0.047840384 container create cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:17:46 compute-0 systemd[1]: Started libpod-conmon-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope.
Nov 22 09:17:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:46 compute-0 podman[309155]: 2025-11-22 09:17:46.13266732 +0000 UTC m=+0.030669936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:46 compute-0 podman[309155]: 2025-11-22 09:17:46.244281442 +0000 UTC m=+0.142284058 container init cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:17:46 compute-0 nova_compute[253661]: 2025-11-22 09:17:46.247 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:17:46 compute-0 podman[309155]: 2025-11-22 09:17:46.257410801 +0000 UTC m=+0.155413397 container start cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:17:46 compute-0 podman[309155]: 2025-11-22 09:17:46.261963232 +0000 UTC m=+0.159965828 container attach cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:17:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:47 compute-0 ceph-mon[75021]: pgmap v1601: 305 pgs: 305 active+clean; 71 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.172 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803052.1709483, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.174 253665 INFO nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Stopped (Lifecycle Event)
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]: {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     "0": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "devices": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "/dev/loop3"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             ],
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_name": "ceph_lv0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_size": "21470642176",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "name": "ceph_lv0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "tags": {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_name": "ceph",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.crush_device_class": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.encrypted": "0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_id": "0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.vdo": "0"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             },
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "vg_name": "ceph_vg0"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         }
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     ],
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     "1": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "devices": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "/dev/loop4"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             ],
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_name": "ceph_lv1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_size": "21470642176",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "name": "ceph_lv1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "tags": {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_name": "ceph",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.crush_device_class": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.encrypted": "0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_id": "1",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.vdo": "0"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             },
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "vg_name": "ceph_vg1"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         }
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     ],
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     "2": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "devices": [
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "/dev/loop5"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             ],
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_name": "ceph_lv2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_size": "21470642176",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "name": "ceph_lv2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "tags": {
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.cluster_name": "ceph",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.crush_device_class": "",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.encrypted": "0",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osd_id": "2",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:                 "ceph.vdo": "0"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             },
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "type": "block",
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:             "vg_name": "ceph_vg2"
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:         }
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]:     ]
Nov 22 09:17:47 compute-0 heuristic_fermat[309172]: }
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.198 253665 DEBUG nova.compute.manager [None req-5c81062d-5d68-44fc-8f7d-4a9c1da2dd4d - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:47 compute-0 systemd[1]: libpod-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope: Deactivated successfully.
Nov 22 09:17:47 compute-0 podman[309155]: 2025-11-22 09:17:47.247279563 +0000 UTC m=+1.145282169 container died cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f-merged.mount: Deactivated successfully.
Nov 22 09:17:47 compute-0 podman[309155]: 2025-11-22 09:17:47.330227199 +0000 UTC m=+1.228229795 container remove cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:17:47 compute-0 systemd[1]: libpod-conmon-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope: Deactivated successfully.
Nov 22 09:17:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 22 09:17:47 compute-0 sudo[309053]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:47 compute-0 sudo[309195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:47 compute-0 sudo[309195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:47 compute-0 sudo[309195]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:47 compute-0 sudo[309220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:17:47 compute-0 sudo[309220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:47 compute-0 sudo[309220]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:47 compute-0 sudo[309245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:47 compute-0 sudo[309245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:47 compute-0 sudo[309245]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:47 compute-0 sudo[309270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:17:47 compute-0 sudo[309270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.735 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance network_info: |[{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.766 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.770 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start _get_guest_xml network_info=[{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.776 253665 WARNING nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.784 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.785 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.789 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.790 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.791 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.791 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.797 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:47 compute-0 nova_compute[253661]: 2025-11-22 09:17:47.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.042104446 +0000 UTC m=+0.049875023 container create dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:17:48 compute-0 systemd[1]: Started libpod-conmon-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope.
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.019393685 +0000 UTC m=+0.027164272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.152330055 +0000 UTC m=+0.160100642 container init dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.161049587 +0000 UTC m=+0.168820154 container start dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.165572636 +0000 UTC m=+0.173343223 container attach dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:17:48 compute-0 vigorous_lalande[309370]: 167 167
Nov 22 09:17:48 compute-0 systemd[1]: libpod-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope: Deactivated successfully.
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.168928208 +0000 UTC m=+0.176698785 container died dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e632415c437e028d5b1318311830c66ce196bb1ed3813d59d092115e0ed66c2-merged.mount: Deactivated successfully.
Nov 22 09:17:48 compute-0 podman[309354]: 2025-11-22 09:17:48.224537229 +0000 UTC m=+0.232307796 container remove dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:17:48 compute-0 systemd[1]: libpod-conmon-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope: Deactivated successfully.
Nov 22 09:17:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607910177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.323 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.352 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.358 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:48 compute-0 podman[309411]: 2025-11-22 09:17:48.413560902 +0000 UTC m=+0.050712663 container create 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:17:48 compute-0 systemd[1]: Started libpod-conmon-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope.
Nov 22 09:17:48 compute-0 podman[309411]: 2025-11-22 09:17:48.389671271 +0000 UTC m=+0.026823052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:17:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:48 compute-0 podman[309411]: 2025-11-22 09:17:48.532037131 +0000 UTC m=+0.169188922 container init 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:48 compute-0 podman[309411]: 2025-11-22 09:17:48.540077916 +0000 UTC m=+0.177229677 container start 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:17:48 compute-0 podman[309411]: 2025-11-22 09:17:48.547611159 +0000 UTC m=+0.184762940 container attach 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:17:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:17:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708910595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.894 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.896 253665 DEBUG nova.virt.libvirt.vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:41Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.897 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.898 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.899 253665 DEBUG nova.objects.instance [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.916 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <uuid>ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</uuid>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <name>instance-00000031</name>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersTestJSON-server-1186810200</nova:name>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:17:47</nova:creationTime>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <nova:port uuid="085e3bcc-2e77-4c2e-8298-872aac04e65e">
Nov 22 09:17:48 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <system>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="serial">ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="uuid">ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </system>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <os>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </os>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <features>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </features>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk">
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config">
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:17:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:52:a1:a9"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <target dev="tap085e3bcc-2e"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/console.log" append="off"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <video>
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </video>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:17:48 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:17:48 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:17:48 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:17:48 compute-0 nova_compute[253661]: </domain>
Nov 22 09:17:48 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.917 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Preparing to wait for external event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.917 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.918 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.918 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.virt.libvirt.vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:41Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.920 253665 DEBUG os_vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.921 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.921 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap085e3bcc-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap085e3bcc-2e, col_values=(('external_ids', {'iface-id': '085e3bcc-2e77-4c2e-8298-872aac04e65e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:a1:a9', 'vm-uuid': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:48 compute-0 NetworkManager[48920]: <info>  [1763803068.9284] manager: (tap085e3bcc-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:48 compute-0 nova_compute[253661]: 2025-11-22 09:17:48.939 253665 INFO os_vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e')
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.007 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.007 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.008 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:52:a1:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.008 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Using config drive
Nov 22 09:17:49 compute-0 ceph-mon[75021]: pgmap v1602: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 22 09:17:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2607910177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/708910595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.033 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.488 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating config drive at /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.492 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3yy5hhw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:49 compute-0 sshd-session[309489]: Connection closed by 185.216.140.186 port 46812
Nov 22 09:17:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:49.610 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:49.613 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.650 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3yy5hhw" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]: {
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_id": 1,
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "type": "bluestore"
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     },
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_id": 0,
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "type": "bluestore"
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     },
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_id": 2,
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:         "type": "bluestore"
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]:     }
Nov 22 09:17:49 compute-0 hardcore_meitner[309432]: }
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.679 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.687 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:49 compute-0 systemd[1]: libpod-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Deactivated successfully.
Nov 22 09:17:49 compute-0 systemd[1]: libpod-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Consumed 1.179s CPU time.
Nov 22 09:17:49 compute-0 podman[309411]: 2025-11-22 09:17:49.72843288 +0000 UTC m=+1.365584661 container died 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8-merged.mount: Deactivated successfully.
Nov 22 09:17:49 compute-0 podman[309411]: 2025-11-22 09:17:49.815905996 +0000 UTC m=+1.453057757 container remove 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:17:49 compute-0 systemd[1]: libpod-conmon-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Deactivated successfully.
Nov 22 09:17:49 compute-0 sudo[309270]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:17:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:17:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 56dc1e41-5ade-4a59-ad75-d6c419cb1a5c does not exist
Nov 22 09:17:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3005e85d-41da-4122-8bb4-e5d0c5b87b49 does not exist
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.911 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.912 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deleting local config drive /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config because it was imported into RBD.
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.947 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated VIF entry in instance network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.948 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:17:49 compute-0 sudo[309560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.969 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:17:49 compute-0 sudo[309560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:49 compute-0 sudo[309560]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:49 compute-0 nova_compute[253661]: 2025-11-22 09:17:49.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 kernel: tap085e3bcc-2e: entered promiscuous mode
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.0096] manager: (tap085e3bcc-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 ovn_controller[152872]: 2025-11-22T09:17:50Z|00447|binding|INFO|Claiming lport 085e3bcc-2e77-4c2e-8298-872aac04e65e for this chassis.
Nov 22 09:17:50 compute-0 ovn_controller[152872]: 2025-11-22T09:17:50Z|00448|binding|INFO|085e3bcc-2e77-4c2e-8298-872aac04e65e: Claiming fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.027 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a1:a9 10.100.0.10'], port_security=['fa:16:3e:52:a1:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=085e3bcc-2e77-4c2e-8298-872aac04e65e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.028 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 085e3bcc-2e77-4c2e-8298-872aac04e65e in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.030 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2e9a1e-eade-4607-8464-4ccda9b912ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.051 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.054 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c878191e-4ea7-4687-a3f3-e31add845f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 systemd-udevd[309622]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[918a2b7d-10bf-4b6c-95ad-8ce1de65adc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 systemd-machined[215941]: New machine qemu-54-instance-00000031.
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.073 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[77641050-9fdd-4d30-8cec-c655c7e22cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.0811] device (tap085e3bcc-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.0826] device (tap085e3bcc-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:17:50 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 sudo[309592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:17:50 compute-0 ovn_controller[152872]: 2025-11-22T09:17:50Z|00449|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e ovn-installed in OVS
Nov 22 09:17:50 compute-0 ovn_controller[152872]: 2025-11-22T09:17:50Z|00450|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e up in Southbound
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.096 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60b013d8-b3e0-425f-bd21-ea1baf2acba2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 sudo[309592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:17:50 compute-0 sudo[309592]: pam_unix(sudo:session): session closed for user root
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.144 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c55d4a2f-27ec-48d6-ae91-6508141208b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.1524] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9dbd08-a9ed-4e3f-ae2c-a366f16ecdc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 systemd-udevd[309627]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.189 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[02155866-8a2d-4754-89b1-8a69e41b834e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.194 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9ba968-4e91-4b14-b59e-1ec76a35fd2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.2270] device (tapd93e3720-b0): carrier: link connected
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.235 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af0e6f24-c177-49ae-891c-5a07c24be81d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3a8524-7037-46e5-9255-43a9dd4f605e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592096, 'reachable_time': 39365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309656, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49c048f4-7559-4bfe-9e2f-c07ac8f30b37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592096, 'tstamp': 592096}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309657, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3e40b7-1017-4056-a120-758065b059cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592096, 'reachable_time': 39365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309658, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8b873-0531-493e-bc43-c5f2de4269ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.447 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1de7ea1-72c0-4bab-9dfa-e0a96791a8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.450 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.451 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.451 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:50 compute-0 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 NetworkManager[48920]: <info>  [1763803070.4546] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.460 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 ovn_controller[152872]: 2025-11-22T09:17:50Z|00451|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.464 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.466 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8329976-87e2-422d-9b37-d5c27e8ec0bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.467 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:17:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.468 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.539 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803070.5387995, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.539 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Started (Lifecycle Event)
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.559 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.564 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803070.5391448, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Paused (Lifecycle Event)
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.581 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.587 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:50 compute-0 nova_compute[253661]: 2025-11-22 09:17:50.609 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:50 compute-0 ceph-mon[75021]: pgmap v1603: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 09:17:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:17:50 compute-0 podman[309731]: 2025-11-22 09:17:50.899047915 +0000 UTC m=+0.061506255 container create 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:17:50 compute-0 systemd[1]: Started libpod-conmon-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope.
Nov 22 09:17:50 compute-0 podman[309731]: 2025-11-22 09:17:50.864779062 +0000 UTC m=+0.027237432 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:17:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4b6fe6d0e31e82f90921939e8d033bb084c60d0f39f2ba0237780fbbf52430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:17:51 compute-0 podman[309731]: 2025-11-22 09:17:51.000235783 +0000 UTC m=+0.162694153 container init 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:17:51 compute-0 podman[309731]: 2025-11-22 09:17:51.007130951 +0000 UTC m=+0.169589291 container start 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:17:51 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : New worker (309752) forked
Nov 22 09:17:51 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : Loading success.
Nov 22 09:17:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:17:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:17:52
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:17:52 compute-0 nova_compute[253661]: 2025-11-22 09:17:52.873 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803057.8727126, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:52 compute-0 nova_compute[253661]: 2025-11-22 09:17:52.874 253665 INFO nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Stopped (Lifecycle Event)
Nov 22 09:17:52 compute-0 nova_compute[253661]: 2025-11-22 09:17:52.894 253665 DEBUG nova.compute.manager [None req-a2a017e0-40d1-4124-9d9e-cd71c158066f - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:53 compute-0 ceph-mon[75021]: pgmap v1604: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:17:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 22 09:17:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:17:53.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:17:53 compute-0 nova_compute[253661]: 2025-11-22 09:17:53.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:17:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:17:54 compute-0 nova_compute[253661]: 2025-11-22 09:17:54.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:55 compute-0 ceph-mon[75021]: pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 22 09:17:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:17:55 compute-0 podman[309762]: 2025-11-22 09:17:55.384717638 +0000 UTC m=+0.070711059 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 22 09:17:55 compute-0 podman[309761]: 2025-11-22 09:17:55.400327577 +0000 UTC m=+0.088979842 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:17:56 compute-0 ceph-mon[75021]: pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:17:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.917 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.917 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.923 253665 DEBUG nova.compute.manager [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG nova.compute.manager [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Processing event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.925 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.931 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803076.9312377, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.932 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Resumed (Lifecycle Event)
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.935 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.938 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.945 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance spawned successfully.
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.946 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.964 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.972 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.980 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.981 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.982 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.982 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.983 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.984 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:17:56 compute-0 nova_compute[253661]: 2025-11-22 09:17:56.993 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.052 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.053 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.057 253665 INFO nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Took 15.18 seconds to spawn the instance on the hypervisor.
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.058 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.068 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.068 253665 INFO nova.compute.claims [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.172 253665 INFO nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Took 16.17 seconds to build instance.
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.266 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 590 KiB/s wr, 22 op/s
Nov 22 09:17:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:17:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991763992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.831 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.839 253665 DEBUG nova.compute.provider_tree [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.856 253665 DEBUG nova.scheduler.client.report [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.880 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.882 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.929 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.931 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.952 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:17:57 compute-0 nova_compute[253661]: 2025-11-22 09:17:57.976 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.067 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.069 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.070 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating image(s)
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.105 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.147 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.180 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.186 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.245 253665 DEBUG nova.policy [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559fd7e00a0a468797efe4955caffc4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.291 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.293 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.294 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.294 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.327 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.334 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 636b1046-fff8-4a45-8a14-04010b2f282e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:17:58 compute-0 podman[309875]: 2025-11-22 09:17:58.418256749 +0000 UTC m=+0.107868722 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:17:58 compute-0 ceph-mon[75021]: pgmap v1607: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 590 KiB/s wr, 22 op/s
Nov 22 09:17:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1991763992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:17:58 compute-0 nova_compute[253661]: 2025-11-22 09:17:58.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.001 253665 DEBUG nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.002 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.002 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 DEBUG nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 WARNING nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving.
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.261 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.263 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.263 253665 INFO nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shelving
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.282 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:17:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 09:17:59 compute-0 nova_compute[253661]: 2025-11-22 09:17:59.476 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Successfully created port: a288a5e5-7b57-4be8-9617-3271ea1e210f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.387 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Successfully updated port: a288a5e5-7b57-4be8-9617-3271ea1e210f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.400 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.400 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.401 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.495 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 636b1046-fff8-4a45-8a14-04010b2f282e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.555 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] resizing rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:00 compute-0 nova_compute[253661]: 2025-11-22 09:18:00.805 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:18:00 compute-0 ceph-mon[75021]: pgmap v1608: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.115 253665 DEBUG nova.objects.instance [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.128 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.129 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Ensure instance console log exists: /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.129 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.130 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.130 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 09:18:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.730 253665 DEBUG nova.compute.manager [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.731 253665 DEBUG nova.compute.manager [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing instance network info cache due to event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:01 compute-0 nova_compute[253661]: 2025-11-22 09:18:01.731 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.580 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.618 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.619 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance network_info: |[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.620 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.620 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.625 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.629 253665 WARNING nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:18:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.652 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.653 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.661 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.662 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.662 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.663 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.663 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:02 compute-0 nova_compute[253661]: 2025-11-22 09:18:02.669 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:03 compute-0 ceph-mon[75021]: pgmap v1609: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 09:18:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583057949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.165 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.195 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.200 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 106 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 531 KiB/s wr, 74 op/s
Nov 22 09:18:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007183746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.685 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.688 253665 DEBUG nova.virt.libvirt.vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.688 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.689 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.690 253665 DEBUG nova.objects.instance [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.708 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <name>instance-00000032</name>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:02</nova:creationTime>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 09:18:03 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:03 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:38:8e"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <target dev="tapa288a5e5-7b"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:03 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:03 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:03 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:03 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Preparing to wait for external event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.711 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.712 253665 DEBUG nova.virt.libvirt.vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.712 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.713 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.713 253665 DEBUG os_vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.715 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.715 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.719 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:03 compute-0 NetworkManager[48920]: <info>  [1763803083.7618] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.771 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.773 253665 INFO os_vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.837 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.838 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.838 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No VIF found with MAC fa:16:3e:70:38:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.839 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Using config drive
Nov 22 09:18:03 compute-0 nova_compute[253661]: 2025-11-22 09:18:03.861 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/583057949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3007183746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.220 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated VIF entry in instance network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.221 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.241 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.314 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating config drive at /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.321 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjwiek9t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.478 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjwiek9t" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.511 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:04 compute-0 nova_compute[253661]: 2025-11-22 09:18:04.522 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 ceph-mon[75021]: pgmap v1610: 305 pgs: 305 active+clean; 106 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 531 KiB/s wr, 74 op/s
Nov 22 09:18:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.360 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.361 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Deleting local config drive /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config because it was imported into RBD.
Nov 22 09:18:05 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.421 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.4250] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Nov 22 09:18:05 compute-0 ovn_controller[152872]: 2025-11-22T09:18:05Z|00452|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:18:05 compute-0 ovn_controller[152872]: 2025-11-22T09:18:05Z|00453|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.435 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.436 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.437 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:05 compute-0 systemd-udevd[310148]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a6b238-3f77-4d78-a5c1-08d3a5e5054f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.455 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.458 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a10991-63af-4c72-a13d-f28c3caa3a0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.4651] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.4659] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:05 compute-0 systemd-machined[215941]: New machine qemu-55-instance-00000032.
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[211e57dd-6862-4a64-b46a-47c49818c309]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.479 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b104baf9-5d0f-4134-97d8-51e079f79393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.501 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[497afcaa-984f-4df9-a59f-2e17d1c97dd9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 ovn_controller[152872]: 2025-11-22T09:18:05Z|00454|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:18:05 compute-0 ovn_controller[152872]: 2025-11-22T09:18:05Z|00455|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.543 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8d164450-9c39-4e3f-beb1-6f040ff806eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bff0a23-c5bb-425a-b78a-3acbb5f389f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.5545] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Nov 22 09:18:05 compute-0 systemd-udevd[310151]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.620 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdeae6a-85c5-4375-9982-5dd276f90518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.628 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[318c48f4-91f1-45ec-a521-3010d1b9f16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.6616] device (tapebc42408-70): carrier: link connected
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.669 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c04dc445-edab-487b-9f55-1b805a8d912c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.698 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f228402-8bf5-4df8-b479-f8ba760bfacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593639, 'reachable_time': 29944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310181, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.721 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db0ebb0b-cbe9-47bb-a820-64762e00cb31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593639, 'tstamp': 593639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310182, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7562b77f-5790-469c-88b0-6332b1eb6544]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593639, 'reachable_time': 29944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310183, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c22cad1-9bec-4d80-a9af-1a9a96b73404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4353da41-d287-408d-814e-1a9c14d5f4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.902 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 NetworkManager[48920]: <info>  [1763803085.9050] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 22 09:18:05 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.910 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:05 compute-0 ovn_controller[152872]: 2025-11-22T09:18:05Z|00456|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.915 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.917 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7b3b30-4a0d-4af8-9a62-f882e3f0e41c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.917 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.919 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:05 compute-0 nova_compute[253661]: 2025-11-22 09:18:05.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:06 compute-0 podman[310248]: 2025-11-22 09:18:06.289073007 +0000 UTC m=+0.027870459 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:06 compute-0 ceph-mon[75021]: pgmap v1611: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.436 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803086.4356837, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.438 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.458 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.467 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803086.437159, 636b1046-fff8-4a45-8a14-04010b2f282e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.468 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Paused (Lifecycle Event)
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:06 compute-0 podman[310248]: 2025-11-22 09:18:06.484416493 +0000 UTC m=+0.223213925 container create 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.505 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:06 compute-0 systemd[1]: Started libpod-conmon-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope.
Nov 22 09:18:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5234ae273265799e05869e908957591393f90ceac631303829d1658fcdf1825/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:06 compute-0 podman[310248]: 2025-11-22 09:18:06.683566362 +0000 UTC m=+0.422363824 container init 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:18:06 compute-0 podman[310248]: 2025-11-22 09:18:06.690249945 +0000 UTC m=+0.429047377 container start 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:18:06 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : New worker (310278) forked
Nov 22 09:18:06 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : Loading success.
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.799 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.800 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.826 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.898 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.899 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.907 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:06 compute-0 nova_compute[253661]: 2025-11-22 09:18:06.908 253665 INFO nova.compute.claims [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.040 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 22 09:18:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104638163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.537 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.545 253665 DEBUG nova.compute.provider_tree [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.561 253665 DEBUG nova.scheduler.client.report [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.587 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.589 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.632 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.634 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.652 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.668 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.765 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.767 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.767 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating image(s)
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.789 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.813 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.836 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.841 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.884 253665 DEBUG nova.policy [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e3344198c364c67aa73008f33323a4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '377c148737af4a5fb70d3e00de87fcd3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.925 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.927 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.927 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.928 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.947 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:07 compute-0 nova_compute[253661]: 2025-11-22 09:18:07.950 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d583bf52-8135-4fca-a3f4-cf6efd88f497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:08 compute-0 ceph-mon[75021]: pgmap v1612: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 22 09:18:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2104638163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.484 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d583bf52-8135-4fca-a3f4-cf6efd88f497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.555 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] resizing rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.832 253665 DEBUG nova.objects.instance [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'migration_context' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.851 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.852 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Ensure instance console log exists: /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.852 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.853 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:08 compute-0 nova_compute[253661]: 2025-11-22 09:18:08.853 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.089 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Successfully created port: 68713fec-01b1-463b-861c-b96beeb4381a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.333 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:18:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 113 op/s
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG nova.compute.manager [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.627 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.627 253665 DEBUG nova.compute.manager [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Processing event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.628 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.632 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803089.6317296, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.632 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.635 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.639 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance spawned successfully.
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.640 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.661 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.664 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.664 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.668 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.697 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.731 253665 INFO nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 11.66 seconds to spawn the instance on the hypervisor.
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.732 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.809 253665 INFO nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 12.80 seconds to build instance.
Nov 22 09:18:09 compute-0 nova_compute[253661]: 2025-11-22 09:18:09.830 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:10 compute-0 ceph-mon[75021]: pgmap v1613: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 113 op/s
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.643 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Successfully updated port: 68713fec-01b1-463b-861c-b96beeb4381a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.666 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.666 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.667 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:10 compute-0 nova_compute[253661]: 2025-11-22 09:18:10.955 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:18:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 09:18:11 compute-0 ovn_controller[152872]: 2025-11-22T09:18:11Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 09:18:11 compute-0 ovn_controller[152872]: 2025-11-22T09:18:11Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 09:18:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.954 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.954 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 WARNING nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-changed-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Refreshing instance network info cache due to event network-changed-68713fec-01b1-463b-861c-b96beeb4381a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:11 compute-0 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:18:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:18:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:18:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:12 compute-0 NetworkManager[48920]: <info>  [1763803092.3859] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 22 09:18:12 compute-0 NetworkManager[48920]: <info>  [1763803092.3868] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:12 compute-0 ceph-mon[75021]: pgmap v1614: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 09:18:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:18:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:18:12 compute-0 ovn_controller[152872]: 2025-11-22T09:18:12Z|00457|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:12 compute-0 ovn_controller[152872]: 2025-11-22T09:18:12Z|00458|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.546 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance network_info: |[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.567 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Refreshing network info cache for port 68713fec-01b1-463b-861c-b96beeb4381a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.571 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start _get_guest_xml network_info=[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.576 253665 WARNING nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.581 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.582 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.590 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.594 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.597 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.654 253665 DEBUG nova.compute.manager [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.654 253665 DEBUG nova.compute.manager [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing instance network info cache due to event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:12 compute-0 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395612675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.104 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.128 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.135 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 196 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 125 op/s
Nov 22 09:18:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2395612675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174716448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.629 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.631 253665 DEBUG nova.virt.libvirt.vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:07Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.632 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.633 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.634 253665 DEBUG nova.objects.instance [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.650 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <uuid>d583bf52-8135-4fca-a3f4-cf6efd88f497</uuid>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <name>instance-00000033</name>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:name>tempest-InstanceActionsTestJSON-server-1695394731</nova:name>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:12</nova:creationTime>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:user uuid="8e3344198c364c67aa73008f33323a4d">tempest-InstanceActionsTestJSON-371860100-project-member</nova:user>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:project uuid="377c148737af4a5fb70d3e00de87fcd3">tempest-InstanceActionsTestJSON-371860100</nova:project>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <nova:port uuid="68713fec-01b1-463b-861c-b96beeb4381a">
Nov 22 09:18:13 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="serial">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="uuid">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk">
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config">
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:89:be:dc"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <target dev="tap68713fec-01"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log" append="off"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:13 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:13 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:13 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:13 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:13 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Preparing to wait for external event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.652 253665 DEBUG nova.virt.libvirt.vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:07Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.652 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.653 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.653 253665 DEBUG os_vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.654 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.655 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.657 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68713fec-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.658 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68713fec-01, col_values=(('external_ids', {'iface-id': '68713fec-01b1-463b-861c-b96beeb4381a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:be:dc', 'vm-uuid': 'd583bf52-8135-4fca-a3f4-cf6efd88f497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:13 compute-0 NetworkManager[48920]: <info>  [1763803093.6608] manager: (tap68713fec-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.670 253665 INFO os_vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No VIF found with MAC fa:16:3e:89:be:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.725 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Using config drive
Nov 22 09:18:13 compute-0 nova_compute[253661]: 2025-11-22 09:18:13.746 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.034 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating config drive at /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.041 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11zrjko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.191 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11zrjko" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.221 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.226 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:14 compute-0 ceph-mon[75021]: pgmap v1615: 305 pgs: 305 active+clean; 196 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 125 op/s
Nov 22 09:18:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3174716448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.717 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.718 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deleting local config drive /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config because it was imported into RBD.
Nov 22 09:18:14 compute-0 kernel: tap68713fec-01: entered promiscuous mode
Nov 22 09:18:14 compute-0 NetworkManager[48920]: <info>  [1763803094.7749] manager: (tap68713fec-01): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Nov 22 09:18:14 compute-0 ovn_controller[152872]: 2025-11-22T09:18:14Z|00459|binding|INFO|Claiming lport 68713fec-01b1-463b-861c-b96beeb4381a for this chassis.
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:14 compute-0 ovn_controller[152872]: 2025-11-22T09:18:14Z|00460|binding|INFO|68713fec-01b1-463b-861c-b96beeb4381a: Claiming fa:16:3e:89:be:dc 10.100.0.12
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.784 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.786 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e bound to our chassis
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.788 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.806 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.806 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea416d2d-8285-462d-82b7-6c842c33da63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.807 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f7cdf45-21 in ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:14 compute-0 ovn_controller[152872]: 2025-11-22T09:18:14Z|00461|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a ovn-installed in OVS
Nov 22 09:18:14 compute-0 ovn_controller[152872]: 2025-11-22T09:18:14Z|00462|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a up in Southbound
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:14 compute-0 systemd-udevd[310612]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.811 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f7cdf45-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06d8ffd1-eeab-4778-a24c-e988f377ccf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.814 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1750c4-6ded-4829-ae3d-356a463594ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 systemd-machined[215941]: New machine qemu-56-instance-00000033.
Nov 22 09:18:14 compute-0 NetworkManager[48920]: <info>  [1763803094.8354] device (tap68713fec-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:14 compute-0 NetworkManager[48920]: <info>  [1763803094.8367] device (tap68713fec-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.840 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3131394d-09de-444f-94ae-1b12881603ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3190129d-b7e3-49e5-b702-8eb198cc37cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.908 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44cab5ad-9501-431a-a174-af981bf57600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 NetworkManager[48920]: <info>  [1763803094.9167] manager: (tap8f7cdf45-20): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.918 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[53f3b0bf-7cf3-4ed8-a9ee-d75b88743c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d972aa25-2a05-44cb-b6d2-ecf560504aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.962 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updated VIF entry in instance network info cache for port 68713fec-01b1-463b-861c-b96beeb4381a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.963 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.967 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a2f945-aa84-4a8b-bb63-549882beeb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:14 compute-0 nova_compute[253661]: 2025-11-22 09:18:14.982 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:14 compute-0 NetworkManager[48920]: <info>  [1763803094.9964] device (tap8f7cdf45-20): carrier: link connected
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.001 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated VIF entry in instance network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.002 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.008 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c381425-c96b-433d-9116-8c8dc405d7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.022 253665 DEBUG nova.compute.manager [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG nova.compute.manager [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Processing event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.025 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.034 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2855da4c-a50a-4a94-98ff-817bb8380abe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594573, 'reachable_time': 41566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310645, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e168a46d-7488-4245-aa73-6c5892a298e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594573, 'tstamp': 594573}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310646, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50090949-06e5-4add-9631-a4a97aca4ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594573, 'reachable_time': 41566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310647, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[792af2a0-f84c-4ab6-ad86-0122cc02e7f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.197 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[277e5ecd-a722-4fe6-9eda-dab60273392e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.199 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.199 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.200 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f7cdf45-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:15 compute-0 NetworkManager[48920]: <info>  [1763803095.2032] manager: (tap8f7cdf45-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 22 09:18:15 compute-0 kernel: tap8f7cdf45-20: entered promiscuous mode
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.207 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f7cdf45-20, col_values=(('external_ids', {'iface-id': '9041b29d-074d-4855-9e30-a4e5a849535d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:15 compute-0 ovn_controller[152872]: 2025-11-22T09:18:15Z|00463|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.211 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[648244f0-e7f4-4375-a396-29e2956157a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.218 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.220 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'env', 'PROCESS_TAG=haproxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 213 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 189 op/s
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.456 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.456058, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.457 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Started (Lifecycle Event)
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.460 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.465 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.469 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance spawned successfully.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.470 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.475 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.479 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.491 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.492 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.493 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.493 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.494 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.494 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.456177, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Paused (Lifecycle Event)
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.521 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.526 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.4637592, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.526 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Resumed (Lifecycle Event)
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.544 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.550 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.559 253665 INFO nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 7.79 seconds to spawn the instance on the hypervisor.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.560 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.572 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.618 253665 INFO nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 8.75 seconds to build instance.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.633 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:15 compute-0 podman[310720]: 2025-11-22 09:18:15.639029115 +0000 UTC m=+0.064065748 container create 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:18:15 compute-0 ovn_controller[152872]: 2025-11-22T09:18:15Z|00464|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 09:18:15 compute-0 ovn_controller[152872]: 2025-11-22T09:18:15Z|00465|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:15 compute-0 ovn_controller[152872]: 2025-11-22T09:18:15Z|00466|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:18:15 compute-0 systemd[1]: Started libpod-conmon-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope.
Nov 22 09:18:15 compute-0 podman[310720]: 2025-11-22 09:18:15.60795924 +0000 UTC m=+0.032995893 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:15 compute-0 nova_compute[253661]: 2025-11-22 09:18:15.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c75362a33d3428f67980ba20a070be98c841833d0506f9fa1c0a3666ede05df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:15 compute-0 podman[310720]: 2025-11-22 09:18:15.734655718 +0000 UTC m=+0.159692351 container init 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:18:15 compute-0 podman[310720]: 2025-11-22 09:18:15.741176137 +0000 UTC m=+0.166212770 container start 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 09:18:15 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : New worker (310741) forked
Nov 22 09:18:15 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : Loading success.
Nov 22 09:18:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:16 compute-0 ceph-mon[75021]: pgmap v1616: 305 pgs: 305 active+clean; 213 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 189 op/s
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.136 253665 DEBUG nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.140 253665 DEBUG nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.140 253665 WARNING nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.271 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.272 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.272 253665 INFO nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Rebooting instance
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.289 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.289 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:17 compute-0 nova_compute[253661]: 2025-11-22 09:18:17.290 253665 DEBUG nova.network.neutron [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 213 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:18:18 compute-0 ceph-mon[75021]: pgmap v1617: 305 pgs: 305 active+clean; 213 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:18:18 compute-0 nova_compute[253661]: 2025-11-22 09:18:18.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.206 253665 DEBUG nova.network.neutron [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.232 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.233 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:19 compute-0 kernel: tap68713fec-01 (unregistering): left promiscuous mode
Nov 22 09:18:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Nov 22 09:18:19 compute-0 NetworkManager[48920]: <info>  [1763803099.3691] device (tap68713fec-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 ovn_controller[152872]: 2025-11-22T09:18:19Z|00467|binding|INFO|Releasing lport 68713fec-01b1-463b-861c-b96beeb4381a from this chassis (sb_readonly=0)
Nov 22 09:18:19 compute-0 ovn_controller[152872]: 2025-11-22T09:18:19Z|00468|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a down in Southbound
Nov 22 09:18:19 compute-0 ovn_controller[152872]: 2025-11-22T09:18:19Z|00469|binding|INFO|Removing iface tap68713fec-01 ovn-installed in OVS
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.390 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.392 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e unbound from our chassis
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.394 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c659b11-5f0c-4b39-8440-762b372c8ccf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.397 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace which is not needed anymore
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 22 09:18:19 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 4.535s CPU time.
Nov 22 09:18:19 compute-0 systemd-machined[215941]: Machine qemu-56-instance-00000033 terminated.
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.536 253665 DEBUG nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.538 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.538 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.539 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.539 253665 DEBUG nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.540 253665 WARNING nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state reboot_started_hard.
Nov 22 09:18:19 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : haproxy version is 2.8.14-c23fe91
Nov 22 09:18:19 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : path to executable is /usr/sbin/haproxy
Nov 22 09:18:19 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [WARNING]  (310739) : Exiting Master process...
Nov 22 09:18:19 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [ALERT]    (310739) : Current worker (310741) exited with code 143 (Terminated)
Nov 22 09:18:19 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [WARNING]  (310739) : All workers exited. Exiting... (0)
Nov 22 09:18:19 compute-0 systemd[1]: libpod-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope: Deactivated successfully.
Nov 22 09:18:19 compute-0 podman[310774]: 2025-11-22 09:18:19.584398431 +0000 UTC m=+0.072283248 container died 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.591 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance destroyed successfully.
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.592 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'resources' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.617 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.618 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.621 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.622 253665 DEBUG os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.627 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68713fec-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694-userdata-shm.mount: Deactivated successfully.
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c75362a33d3428f67980ba20a070be98c841833d0506f9fa1c0a3666ede05df-merged.mount: Deactivated successfully.
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.639 253665 INFO os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.653 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start _get_guest_xml network_info=[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:19 compute-0 podman[310774]: 2025-11-22 09:18:19.660143461 +0000 UTC m=+0.148028248 container cleanup 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.659 253665 WARNING nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:19 compute-0 systemd[1]: libpod-conmon-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope: Deactivated successfully.
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.669 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.671 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.676 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.676 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.677 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.677 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.678 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.679 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.680 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.680 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.681 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.707 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:19 compute-0 podman[310814]: 2025-11-22 09:18:19.74938845 +0000 UTC m=+0.054974486 container remove 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a17d47f-bb00-47c2-a825-4ca8c70cd3a5]: (4, ('Sat Nov 22 09:18:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694)\n47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694\nSat Nov 22 09:18:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694)\n47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.761 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18e8ee15-0ba0-45fc-9adb-08092ad1b121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.762 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:19 compute-0 kernel: tap8f7cdf45-20: left promiscuous mode
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 nova_compute[253661]: 2025-11-22 09:18:19.779 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[476da982-6363-40eb-ab35-f75911cdcdf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.811 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe133437-3ae9-4fe8-aa8e-57ed32f40b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.834 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b202e2a-64f9-43f9-aad0-282c8320cc9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[337f7226-82ab-4db6-ab10-d59e793708d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594564, 'reachable_time': 24326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310829, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f7cdf45\x2d2d9c\x2d4e24\x2d9818\x2d3c9ecbf1b21e.mount: Deactivated successfully.
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.864 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:18:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.865 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2b0055-11ff-4759-bbd1-485c93191998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085474209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.239 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.281 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.392 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:18:20 compute-0 ceph-mon[75021]: pgmap v1618: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Nov 22 09:18:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4085474209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653312510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.753 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.755 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.755 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.756 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.759 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.776 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <uuid>d583bf52-8135-4fca-a3f4-cf6efd88f497</uuid>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <name>instance-00000033</name>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:name>tempest-InstanceActionsTestJSON-server-1695394731</nova:name>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:19</nova:creationTime>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:user uuid="8e3344198c364c67aa73008f33323a4d">tempest-InstanceActionsTestJSON-371860100-project-member</nova:user>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:project uuid="377c148737af4a5fb70d3e00de87fcd3">tempest-InstanceActionsTestJSON-371860100</nova:project>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <nova:port uuid="68713fec-01b1-463b-861c-b96beeb4381a">
Nov 22 09:18:20 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="serial">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="uuid">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk">
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config">
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:89:be:dc"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <target dev="tap68713fec-01"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log" append="off"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:20 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:20 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:20 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:20 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:20 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.780 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.780 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.781 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.781 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.786 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.787 253665 DEBUG os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.788 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.789 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68713fec-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68713fec-01, col_values=(('external_ids', {'iface-id': '68713fec-01b1-463b-861c-b96beeb4381a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:be:dc', 'vm-uuid': 'd583bf52-8135-4fca-a3f4-cf6efd88f497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 NetworkManager[48920]: <info>  [1763803100.7982] manager: (tap68713fec-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.805 253665 INFO os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')
Nov 22 09:18:20 compute-0 kernel: tap68713fec-01: entered promiscuous mode
Nov 22 09:18:20 compute-0 systemd-udevd[310754]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:20 compute-0 NetworkManager[48920]: <info>  [1763803100.8980] manager: (tap68713fec-01): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 ovn_controller[152872]: 2025-11-22T09:18:20Z|00470|binding|INFO|Claiming lport 68713fec-01b1-463b-861c-b96beeb4381a for this chassis.
Nov 22 09:18:20 compute-0 ovn_controller[152872]: 2025-11-22T09:18:20Z|00471|binding|INFO|68713fec-01b1-463b-861c-b96beeb4381a: Claiming fa:16:3e:89:be:dc 10.100.0.12
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.909 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e bound to our chassis
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.911 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:20 compute-0 NetworkManager[48920]: <info>  [1763803100.9153] device (tap68713fec-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:20 compute-0 NetworkManager[48920]: <info>  [1763803100.9167] device (tap68713fec-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:20 compute-0 ovn_controller[152872]: 2025-11-22T09:18:20Z|00472|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a ovn-installed in OVS
Nov 22 09:18:20 compute-0 ovn_controller[152872]: 2025-11-22T09:18:20Z|00473|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a up in Southbound
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 nova_compute[253661]: 2025-11-22 09:18:20.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.928 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[502f30a0-e985-4a43-87a0-d3bed5472f83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.930 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f7cdf45-21 in ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.932 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f7cdf45-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.932 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee82c25-a9cd-4abd-a6a7-3db5ca31a313]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.935 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b83794-9297-4211-8f18-01a0af9d2afd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:20 compute-0 systemd-machined[215941]: New machine qemu-57-instance-00000033.
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.953 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2c0114-da78-4be6-9e24-44fba6937cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:20 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Nov 22 09:18:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.982 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[093eeb6f-3585-47cf-9ce1-76fa4f9221d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[01d00f54-3204-43ee-81e4-d4d3f630866e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.044 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e2bb1b-4c77-4c78-b7d3-7f5a5091b152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 NetworkManager[48920]: <info>  [1763803101.0461] manager: (tap8f7cdf45-20): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.089 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[768239d8-affa-4fef-b010-5b79f9830172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.093 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a50c0e-e607-48eb-a026-b6a8f5499eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 NetworkManager[48920]: <info>  [1763803101.1234] device (tap8f7cdf45-20): carrier: link connected
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.130 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[69733b7b-39f9-484b-bdd0-7f5e7e7a46b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82e7452f-6a8f-4231-bd78-ed0b4c7b2c95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595186, 'reachable_time': 30358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310938, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.177 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a7d4c9-77d6-444c-8c5f-5912dff93464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595186, 'tstamp': 595186}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310939, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1d0d4e-3199-4e0a-9488-efa23a4eddb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595186, 'reachable_time': 30358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310940, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5505ab5-2430-4581-98b4-e88b7d0e3a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7539f4-2010-4ad0-9b54-9bc1f6fe57c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f7cdf45-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:21 compute-0 NetworkManager[48920]: <info>  [1763803101.3222] manager: (tap8f7cdf45-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 22 09:18:21 compute-0 kernel: tap8f7cdf45-20: entered promiscuous mode
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.324 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f7cdf45-20, col_values=(('external_ids', {'iface-id': '9041b29d-074d-4855-9e30-a4e5a849535d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:21 compute-0 ovn_controller[152872]: 2025-11-22T09:18:21Z|00474|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.328 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9836f-55ef-4d7e-b954-c6d94ed1eb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.330 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.331 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'env', 'PROCESS_TAG=haproxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.428 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for d583bf52-8135-4fca-a3f4-cf6efd88f497 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.429 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803101.4282403, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.429 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Resumed (Lifecycle Event)
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.432 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.442 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance rebooted successfully.
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.443 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.450 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.457 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.486 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.487 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803101.4316216, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.487 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Started (Lifecycle Event)
Nov 22 09:18:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.507 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.509 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:21 compute-0 nova_compute[253661]: 2025-11-22 09:18:21.514 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2653312510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:21 compute-0 podman[311014]: 2025-11-22 09:18:21.753161138 +0000 UTC m=+0.057187750 container create 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:18:21 compute-0 systemd[1]: Started libpod-conmon-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope.
Nov 22 09:18:21 compute-0 podman[311014]: 2025-11-22 09:18:21.725709981 +0000 UTC m=+0.029736613 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f73e94104a9b40a1c2c8be8af5cf7f587f3e059d0fe257d63eefedbd0fc6b01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:21 compute-0 podman[311014]: 2025-11-22 09:18:21.842888468 +0000 UTC m=+0.146915100 container init 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:18:21 compute-0 podman[311014]: 2025-11-22 09:18:21.84870727 +0000 UTC m=+0.152733892 container start 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:18:21 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : New worker (311035) forked
Nov 22 09:18:21 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : Loading success.
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.455 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.455 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.456 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.456 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.459 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.459 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.460 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.460 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.
Nov 22 09:18:22 compute-0 ceph-mon[75021]: pgmap v1619: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:22 compute-0 kernel: tap085e3bcc-2e (unregistering): left promiscuous mode
Nov 22 09:18:22 compute-0 NetworkManager[48920]: <info>  [1763803102.8346] device (tap085e3bcc-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:22 compute-0 ovn_controller[152872]: 2025-11-22T09:18:22Z|00475|binding|INFO|Releasing lport 085e3bcc-2e77-4c2e-8298-872aac04e65e from this chassis (sb_readonly=0)
Nov 22 09:18:22 compute-0 ovn_controller[152872]: 2025-11-22T09:18:22Z|00476|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e down in Southbound
Nov 22 09:18:22 compute-0 ovn_controller[152872]: 2025-11-22T09:18:22Z|00477|binding|INFO|Removing iface tap085e3bcc-2e ovn-installed in OVS
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.857 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a1:a9 10.100.0.10'], port_security=['fa:16:3e:52:a1:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=085e3bcc-2e77-4c2e-8298-872aac04e65e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.858 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 085e3bcc-2e77-4c2e-8298-872aac04e65e in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:18:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.860 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:18:22 compute-0 nova_compute[253661]: 2025-11-22 09:18:22.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c838948-4f8e-42ab-b757-adbfe6c2ad2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.874 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 09:18:22 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 22 09:18:22 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 14.835s CPU time.
Nov 22 09:18:22 compute-0 systemd-machined[215941]: Machine qemu-54-instance-00000031 terminated.
Nov 22 09:18:23 compute-0 ovn_controller[152872]: 2025-11-22T09:18:23Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:18:23 compute-0 ovn_controller[152872]: 2025-11-22T09:18:23Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:18:23 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : haproxy version is 2.8.14-c23fe91
Nov 22 09:18:23 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : path to executable is /usr/sbin/haproxy
Nov 22 09:18:23 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [WARNING]  (309750) : Exiting Master process...
Nov 22 09:18:23 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [ALERT]    (309750) : Current worker (309752) exited with code 143 (Terminated)
Nov 22 09:18:23 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [WARNING]  (309750) : All workers exited. Exiting... (0)
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:23 compute-0 systemd[1]: libpod-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope: Deactivated successfully.
Nov 22 09:18:23 compute-0 podman[311067]: 2025-11-22 09:18:23.2590915 +0000 UTC m=+0.278789956 container died 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:18:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.412 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance shutdown successfully after 24 seconds.
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.420 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.421 253665 DEBUG nova.objects.instance [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'numa_topology' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:23 compute-0 nova_compute[253661]: 2025-11-22 09:18:23.773 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Beginning cold snapshot process
Nov 22 09:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424-userdata-shm.mount: Deactivated successfully.
Nov 22 09:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4b6fe6d0e31e82f90921939e8d033bb084c60d0f39f2ba0237780fbbf52430-merged.mount: Deactivated successfully.
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.163 253665 DEBUG nova.virt.libvirt.imagebackend [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.332 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] creating snapshot(a4d1fb214b444dfa90abb7feb6651e29) on rbd image(ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:18:24 compute-0 podman[311067]: 2025-11-22 09:18:24.511283336 +0000 UTC m=+1.530981782 container cleanup 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:18:24 compute-0 systemd[1]: libpod-conmon-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope: Deactivated successfully.
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.524 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.525 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.525 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 WARNING nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.528 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.528 253665 WARNING nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:18:24 compute-0 ceph-mon[75021]: pgmap v1620: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.813 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.813 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.815 253665 INFO nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Terminating instance
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.816 253665 DEBUG nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:18:24 compute-0 podman[311158]: 2025-11-22 09:18:24.940221008 +0000 UTC m=+0.400938532 container remove 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:18:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.946 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a18505f-cadb-4ffa-a798-619978f64366]: (4, ('Sat Nov 22 09:18:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424)\n2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424\nSat Nov 22 09:18:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424)\n2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.949 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[981c5da3-702f-444f-8927-5ad87bd459d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.950 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:24 compute-0 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:24 compute-0 kernel: tap68713fec-01 (unregistering): left promiscuous mode
Nov 22 09:18:24 compute-0 NetworkManager[48920]: <info>  [1763803104.9614] device (tap68713fec-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0762110-3c67-416b-b685-7048d260c212]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:24 compute-0 ovn_controller[152872]: 2025-11-22T09:18:24Z|00478|binding|INFO|Releasing lport 68713fec-01b1-463b-861c-b96beeb4381a from this chassis (sb_readonly=0)
Nov 22 09:18:24 compute-0 ovn_controller[152872]: 2025-11-22T09:18:24Z|00479|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a down in Southbound
Nov 22 09:18:24 compute-0 ovn_controller[152872]: 2025-11-22T09:18:24Z|00480|binding|INFO|Removing iface tap68713fec-01 ovn-installed in OVS
Nov 22 09:18:24 compute-0 nova_compute[253661]: 2025-11-22 09:18:24.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.995 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9992b44e-c74f-45bf-bd60-a9385adbacce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e40154ec-b6f8-42e5-99dc-892297c418f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 22 09:18:25 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 3.878s CPU time.
Nov 22 09:18:25 compute-0 systemd-machined[215941]: Machine qemu-57-instance-00000033 terminated.
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.028 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b0f857-15a3-46db-beaf-7a1d2605d3e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592087, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311181, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.035 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.036 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9a73d2d9-f98d-4f8d-94b8-03e2c3fba420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.037 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e unbound from our chassis
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.039 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca2d63d-5110-45bd-89ba-e34c1a0c0c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.041 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace which is not needed anymore
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.061 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance destroyed successfully.
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.061 253665 DEBUG nova.objects.instance [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'resources' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.073 253665 DEBUG nova.virt.libvirt.vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:21Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.074 253665 DEBUG nova.network.os_vif_util [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.075 253665 DEBUG nova.network.os_vif_util [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.077 253665 DEBUG os_vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.081 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68713fec-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.092 253665 INFO os_vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.109 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : haproxy version is 2.8.14-c23fe91
Nov 22 09:18:25 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : path to executable is /usr/sbin/haproxy
Nov 22 09:18:25 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [WARNING]  (311033) : Exiting Master process...
Nov 22 09:18:25 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [ALERT]    (311033) : Current worker (311035) exited with code 143 (Terminated)
Nov 22 09:18:25 compute-0 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [WARNING]  (311033) : All workers exited. Exiting... (0)
Nov 22 09:18:25 compute-0 systemd[1]: libpod-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope: Deactivated successfully.
Nov 22 09:18:25 compute-0 podman[311232]: 2025-11-22 09:18:25.206628551 +0000 UTC m=+0.047569786 container died 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4-userdata-shm.mount: Deactivated successfully.
Nov 22 09:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f73e94104a9b40a1c2c8be8af5cf7f587f3e059d0fe257d63eefedbd0fc6b01-merged.mount: Deactivated successfully.
Nov 22 09:18:25 compute-0 podman[311232]: 2025-11-22 09:18:25.269777537 +0000 UTC m=+0.110718762 container cleanup 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:18:25 compute-0 systemd[1]: libpod-conmon-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope: Deactivated successfully.
Nov 22 09:18:25 compute-0 podman[311264]: 2025-11-22 09:18:25.367161702 +0000 UTC m=+0.069634303 container remove 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:18:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 225 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.378 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[45183ce7-cabe-4927-8cf1-835776a88740]: (4, ('Sat Nov 22 09:18:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4)\n4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4\nSat Nov 22 09:18:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4)\n4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a04c33a3-b260-489d-b70b-08bed836df2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.382 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 kernel: tap8f7cdf45-20: left promiscuous mode
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f86190ba-6b0c-41c0-a561-daa7e664039a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.411 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41a88a03-9f04-4021-8f95-d6970b78afd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87101c45-a070-471c-8bb1-a9bbcfee2047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.426 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.427 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.427 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24074353-f1ec-46ab-a390-1b74d87a4b90]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595176, 'reachable_time': 28739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311283, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f7cdf45\x2d2d9c\x2d4e24\x2d9818\x2d3c9ecbf1b21e.mount: Deactivated successfully.
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.445 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:18:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.445 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[25cac3eb-f47f-4c98-948e-fe61d029e154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:25 compute-0 podman[311281]: 2025-11-22 09:18:25.513015906 +0000 UTC m=+0.069387036 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:18:25 compute-0 podman[311284]: 2025-11-22 09:18:25.520124539 +0000 UTC m=+0.071015616 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 09:18:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 22 09:18:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 22 09:18:25 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 22 09:18:25 compute-0 nova_compute[253661]: 2025-11-22 09:18:25.860 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] cloning vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk@a4d1fb214b444dfa90abb7feb6651e29 to images/c410abb5-ca6a-4ea8-bbe5-04c76de12b91 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.031 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] flattening images/c410abb5-ca6a-4ea8-bbe5-04c76de12b91 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.154 253665 INFO nova.virt.libvirt.driver [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deleting instance files /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497_del
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.154 253665 INFO nova.virt.libvirt.driver [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deletion of /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497_del complete
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.220 253665 INFO nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 1.40 seconds to destroy the instance on the hypervisor.
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.221 253665 DEBUG oslo.service.loopingcall [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.221 253665 DEBUG nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.222 253665 DEBUG nova.network.neutron [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.686 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] removing snapshot(a4d1fb214b444dfa90abb7feb6651e29) on rbd image(ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:18:26 compute-0 ceph-mon[75021]: pgmap v1621: 305 pgs: 305 active+clean; 225 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Nov 22 09:18:26 compute-0 ceph-mon[75021]: osdmap e235: 3 total, 3 up, 3 in
Nov 22 09:18:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324048381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.781 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 22 09:18:26 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.853 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] creating snapshot(snap) on rbd image(c410abb5-ca6a-4ea8-bbe5-04c76de12b91) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.919 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:26 compute-0 nova_compute[253661]: 2025-11-22 09:18:26.919 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.082 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.083 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3908MB free_disk=59.8883171081543GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.083 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.084 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ee68ed8e-d5b3-4069-ac90-f7e94430ed0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d583bf52-8135-4fca-a3f4-cf6efd88f497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:18:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 245 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 319 op/s
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.433 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/324048381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:27 compute-0 ceph-mon[75021]: osdmap e236: 3 total, 3 up, 3 in
Nov 22 09:18:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 22 09:18:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 22 09:18:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.860 253665 DEBUG nova.network.neutron [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.875 253665 INFO nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 1.65 seconds to deallocate network for instance.
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.920 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566281401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.950 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.957 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.965 253665 DEBUG nova.compute.manager [req-39436e8e-906e-4f27-8ddd-8f1a82a5de3e req-4af51820-619b-4a45-b851-aee93ef96c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-deleted-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.970 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.990 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.991 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:27 compute-0 nova_compute[253661]: 2025-11-22 09:18:27.991 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.068 253665 DEBUG oslo_concurrency.processutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715458066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.551 253665 DEBUG oslo_concurrency.processutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.558 253665 DEBUG nova.compute.provider_tree [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.573 253665 DEBUG nova.scheduler.client.report [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.596 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.625 253665 INFO nova.scheduler.client.report [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Deleted allocations for instance d583bf52-8135-4fca-a3f4-cf6efd88f497
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.699 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:28 compute-0 ceph-mon[75021]: pgmap v1624: 305 pgs: 305 active+clean; 245 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 319 op/s
Nov 22 09:18:28 compute-0 ceph-mon[75021]: osdmap e237: 3 total, 3 up, 3 in
Nov 22 09:18:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1566281401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/715458066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.989 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:28 compute-0 nova_compute[253661]: 2025-11-22 09:18:28.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:18:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 453 op/s
Nov 22 09:18:29 compute-0 podman[311481]: 2025-11-22 09:18:29.39531159 +0000 UTC m=+0.082784792 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.653 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.654 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.654 253665 INFO nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Rebooting instance
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.667 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.668 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.668 253665 DEBUG nova.network.neutron [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.687 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Snapshot image upload complete
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.687 253665 DEBUG nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.737 253665 INFO nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shelve offloading
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.744 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.744 253665 DEBUG nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:29 compute-0 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG nova.network.neutron [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:30 compute-0 nova_compute[253661]: 2025-11-22 09:18:30.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:18:31 compute-0 ceph-mon[75021]: pgmap v1626: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 453 op/s
Nov 22 09:18:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.8 MiB/s wr, 344 op/s
Nov 22 09:18:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.954 253665 DEBUG nova.network.neutron [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.957 253665 DEBUG nova.network.neutron [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.976 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.978 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:31 compute-0 nova_compute[253661]: 2025-11-22 09:18:31.978 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:32 compute-0 ceph-mon[75021]: pgmap v1627: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.8 MiB/s wr, 344 op/s
Nov 22 09:18:32 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:18:32 compute-0 NetworkManager[48920]: <info>  [1763803112.7714] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 ovn_controller[152872]: 2025-11-22T09:18:32Z|00481|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:18:32 compute-0 ovn_controller[152872]: 2025-11-22T09:18:32Z|00482|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.783 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 ovn_controller[152872]: 2025-11-22T09:18:32Z|00483|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.789 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.790 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:18:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.792 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:18:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6725983d-fd18-4f19-8b93-8ea625f16bf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.796 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:18:32 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 14.927s CPU time.
Nov 22 09:18:32 compute-0 systemd-machined[215941]: Machine qemu-55-instance-00000032 terminated.
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.906 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.907 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.921 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.922 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.922 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.923 253665 DEBUG os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.936 253665 INFO os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.946 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.950 253665 WARNING nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.957 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.958 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.962 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.962 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.965 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.965 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:32 compute-0 ovn_controller[152872]: 2025-11-22T09:18:32Z|00484|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:32 compute-0 nova_compute[253661]: 2025-11-22 09:18:32.984 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.233 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.233 253665 DEBUG nova.objects.instance [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:33 compute-0 ovn_controller[152872]: 2025-11-22T09:18:33Z|00485|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.247 253665 DEBUG nova.virt.libvirt.vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member',shelved_at='2025-11-22T09:18:29.687740',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='c410abb5-ca6a-4ea8-bbe5-04c76de12b91'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:23Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.247 253665 DEBUG nova.network.os_vif_util [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.248 253665 DEBUG nova.network.os_vif_util [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.248 253665 DEBUG os_vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap085e3bcc-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.256 253665 INFO os_vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e')
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : haproxy version is 2.8.14-c23fe91
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : path to executable is /usr/sbin/haproxy
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : Exiting Master process...
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : Exiting Master process...
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [ALERT]    (310276) : Current worker (310278) exited with code 143 (Terminated)
Nov 22 09:18:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : All workers exited. Exiting... (0)
Nov 22 09:18:33 compute-0 systemd[1]: libpod-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope: Deactivated successfully.
Nov 22 09:18:33 compute-0 podman[311536]: 2025-11-22 09:18:33.327303032 +0000 UTC m=+0.428670777 container died 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:18:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.7 MiB/s wr, 297 op/s
Nov 22 09:18:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041587310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.508 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:33 compute-0 nova_compute[253661]: 2025-11-22 09:18:33.540 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de-userdata-shm.mount: Deactivated successfully.
Nov 22 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5234ae273265799e05869e908957591393f90ceac631303829d1658fcdf1825-merged.mount: Deactivated successfully.
Nov 22 09:18:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4041587310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:33 compute-0 podman[311536]: 2025-11-22 09:18:33.877362057 +0000 UTC m=+0.978729782 container cleanup 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:18:33 compute-0 systemd[1]: libpod-conmon-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope: Deactivated successfully.
Nov 22 09:18:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201504531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.010 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.013 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.013 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.015 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.017 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.024660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114024746, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2184, "num_deletes": 256, "total_data_size": 3394928, "memory_usage": 3458816, "flush_reason": "Manual Compaction"}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.030 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <name>instance-00000032</name>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:32</nova:creationTime>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 09:18:34 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:38:8e"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <target dev="tapa288a5e5-7b"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:34 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:34 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:34 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:34 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:34 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.031 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.031 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.032 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.033 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.033 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.034 253665 DEBUG os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.035 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.039 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.039 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.040 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.0431] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.044 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.049 253665 INFO os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114075585, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3313731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30857, "largest_seqno": 33040, "table_properties": {"data_size": 3303792, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21045, "raw_average_key_size": 20, "raw_value_size": 3283705, "raw_average_value_size": 3235, "num_data_blocks": 277, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802916, "oldest_key_time": 1763802916, "file_creation_time": 1763803114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 50966 microseconds, and 9434 cpu microseconds.
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.075635) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3313731 bytes OK
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.075667) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114344) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114393) EVENT_LOG_v1 {"time_micros": 1763803114114381, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114420) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3385624, prev total WAL file size 3385624, number of live WAL files 2.
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.115693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3236KB)], [68(7053KB)]
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114115820, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10536564, "oldest_snapshot_seqno": -1}
Nov 22 09:18:34 compute-0 podman[311656]: 2025-11-22 09:18:34.142061269 +0000 UTC m=+0.240788372 container remove 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8023a022-8841-4744-a0a1-ddf933707d1c]: (4, ('Sat Nov 22 09:18:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de)\n17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de\nSat Nov 22 09:18:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de)\n17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe44a0-70a2-4136-80db-38736d85d306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.1730] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Nov 22 09:18:34 compute-0 systemd-udevd[311514]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.182 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6bce74d3-f27f-4501-9342-a17ed3752e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_controller[152872]: 2025-11-22T09:18:34Z|00486|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:18:34 compute-0 ovn_controller[152872]: 2025-11-22T09:18:34Z|00487|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.1921] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.1933] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.196 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d97deaac-6300-427a-8398-749068545771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.2000] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.2007] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e987a5d-ee05-44fe-912f-4273ef350e58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.203 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5810 keys, 8876187 bytes, temperature: kUnknown
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114205487, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8876187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8836163, "index_size": 24426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 145821, "raw_average_key_size": 25, "raw_value_size": 8730446, "raw_average_value_size": 1502, "num_data_blocks": 993, "num_entries": 5810, "num_filter_entries": 5810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.205994) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8876187 bytes
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.209359) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.1 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6335, records dropped: 525 output_compression: NoCompression
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.209401) EVENT_LOG_v1 {"time_micros": 1763803114209384, "job": 38, "event": "compaction_finished", "compaction_time_micros": 89957, "compaction_time_cpu_micros": 24225, "output_level": 6, "num_output_files": 1, "total_output_size": 8876187, "num_input_records": 6335, "num_output_records": 5810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114210693, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114212053, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.115547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:18:34 compute-0 systemd-machined[215941]: New machine qemu-58-instance-00000032.
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bde3761d-8a15-4164-8c82-bb71f72cb7dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593627, 'reachable_time': 28985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311688, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.226 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.226 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e640f1f7-531e-4bff-a579-240e541d44b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.227 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.228 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.242 253665 DEBUG nova.compute.manager [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG nova.compute.manager [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing instance network info cache due to event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.244 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[854e36ab-3164-47b4-a9e0-d036bd8a3d0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.245 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.246 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b10f95a-4da5-48c8-b8f8-a4bda5103b75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[509d590a-ea77-4414-96d5-b660d529b117]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.262 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[00627682-b433-43c6-b530-8c30ca496b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000032.
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.287 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ce46d-4995-4e0c-ac8f-4440d357fb1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.323 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[133acd3a-3980-405c-ac5c-a5a4b2a69a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.330 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[788a90a5-f767-41e0-933d-242d9ff2e5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.3336] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.376 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[576020b2-5ad8-4c38-b67f-5b7585b24b12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.379 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[573a387c-d600-46df-9690-3f8e27c83aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.4019] device (tapebc42408-70): carrier: link connected
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.407 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9b7941-5014-46f6-bc4a-8ed84e74b278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68be5a7b-cdfd-48f3-a22a-4e544514e729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596513, 'reachable_time': 27218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311719, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_controller[152872]: 2025-11-22T09:18:34Z|00488|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:18:34 compute-0 ovn_controller[152872]: 2025-11-22T09:18:34Z|00489|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4781e2b1-372e-4fbf-8abd-42571e1f6449]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596513, 'tstamp': 596513}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311720, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1605adbf-8ceb-4790-bea8-5cc074d1c88c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596513, 'reachable_time': 27218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311721, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[594abb1e-ac37-4851-8ad5-01d725ca8b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c605a2e-e9b7-4e53-9d90-956577b07e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.593 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.593 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.594 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 NetworkManager[48920]: <info>  [1763803114.5966] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 22 09:18:34 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.598 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:34 compute-0 ovn_controller[152872]: 2025-11-22T09:18:34Z|00490|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.616 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f0f05f-ede2-4777-8bf1-6f74ae7eba87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.618 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.618 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803114.954769, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.958 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.962 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.
Nov 22 09:18:34 compute-0 nova_compute[253661]: 2025-11-22 09:18:34.962 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:34 compute-0 ceph-mon[75021]: pgmap v1628: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.7 MiB/s wr, 297 op/s
Nov 22 09:18:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1201504531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.006 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.012 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.042 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.045 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803114.9576468, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.045 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.065 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:35 compute-0 podman[311795]: 2025-11-22 09:18:34.977493849 +0000 UTC m=+0.027538601 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.222 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.222 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.234 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.302 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.302 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.309 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.309 253665 INFO nova.compute.claims [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 80 op/s
Nov 22 09:18:35 compute-0 podman[311795]: 2025-11-22 09:18:35.380593923 +0000 UTC m=+0.430638665 container create 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.430 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:35 compute-0 systemd[1]: Started libpod-conmon-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope.
Nov 22 09:18:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999d845ec8bd02cd92c7e0cd63eb74e8c346a62a6f4728db53982c6a161b0456/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:35 compute-0 podman[311795]: 2025-11-22 09:18:35.659042579 +0000 UTC m=+0.709087321 container init 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:18:35 compute-0 podman[311795]: 2025-11-22 09:18:35.665859505 +0000 UTC m=+0.715904247 container start 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:18:35 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : New worker (311836) forked
Nov 22 09:18:35 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : Loading success.
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.720 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated VIF entry in instance network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.721 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": null, "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.729 253665 INFO nova.compute.manager [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Get console output
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.738 253665 INFO oslo.privsep.daemon [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpw8tso7nc/privsep.sock']
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.740 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583672798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.984 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:35 compute-0 nova_compute[253661]: 2025-11-22 09:18:35.991 253665 DEBUG nova.compute.provider_tree [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.005 253665 DEBUG nova.scheduler.client.report [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.024 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.025 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.071 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.071 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.089 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.104 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.186 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.187 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.188 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating image(s)
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.211 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.238 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.351 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.356 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3583672798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.449 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.450 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.451 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.451 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.513 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.519 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dec3a0c0-4e66-47fb-845c-42748f871bd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.570 253665 DEBUG nova.policy [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22a23d70ca814c9597ead334e32c08a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d1054fa34554ffa8a188984d2db6a60', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.799 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:36 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:18:36 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 22 09:18:37 compute-0 nova_compute[253661]: 2025-11-22 09:18:37.026 253665 INFO oslo.privsep.daemon [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Spawned new privsep daemon via rootwrap
Nov 22 09:18:37 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.865 311943 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 09:18:37 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.868 311943 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 09:18:37 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.870 311943 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 09:18:37 compute-0 nova_compute[253661]: 2025-11-22 09:18:36.870 311943 INFO oslo.privsep.daemon [-] privsep daemon running as pid 311943
Nov 22 09:18:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 247 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 109 op/s
Nov 22 09:18:37 compute-0 ceph-mon[75021]: pgmap v1629: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 80 op/s
Nov 22 09:18:37 compute-0 ceph-mon[75021]: osdmap e238: 3 total, 3 up, 3 in
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.087 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803103.0860217, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.088 253665 INFO nova.compute.manager [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Stopped (Lifecycle Event)
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.104 253665 DEBUG nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.109 253665 DEBUG nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.125 253665 INFO nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.383 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dec3a0c0-4e66-47fb-845c-42748f871bd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.864s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.449 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] resizing rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:38 compute-0 nova_compute[253661]: 2025-11-22 09:18:38.852 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Successfully created port: c10e771b-271b-4855-9004-fe8ee858ec5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:18:39 compute-0 ceph-mon[75021]: pgmap v1631: 305 pgs: 305 active+clean; 247 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 109 op/s
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.043 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.257 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:18:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.503 253665 DEBUG nova.objects.instance [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'migration_context' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.516 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.517 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Ensure instance console log exists: /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.560 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deleting instance files /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_del
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.561 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deletion of /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_del complete
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.657 253665 INFO nova.scheduler.client.report [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance ee68ed8e-d5b3-4069-ac90-f7e94430ed0d
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.715 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.716 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:39 compute-0 nova_compute[253661]: 2025-11-22 09:18:39.826 253665 DEBUG oslo_concurrency.processutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.059 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803105.0584335, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.060 253665 INFO nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Stopped (Lifecycle Event)
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.083 253665 DEBUG nova.compute.manager [None req-d601ede0-b2a9-4ee3-baaf-a222543b3f22 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.237 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Successfully updated port: c10e771b-271b-4855-9004-fe8ee858ec5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.253 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.254 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.254 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906110231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.321 253665 DEBUG oslo_concurrency.processutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.326 253665 DEBUG nova.compute.provider_tree [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.341 253665 DEBUG nova.scheduler.client.report [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.385 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 41.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.821 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.963 253665 DEBUG nova.compute.manager [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.963 253665 DEBUG nova.compute.manager [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing instance network info cache due to event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:40 compute-0 nova_compute[253661]: 2025-11-22 09:18:40.964 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:41 compute-0 ceph-mon[75021]: pgmap v1632: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 09:18:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3906110231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 09:18:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 22 09:18:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 22 09:18:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 22 09:18:42 compute-0 ceph-mon[75021]: pgmap v1633: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 09:18:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.405 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.424 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance network_info: |[{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.428 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start _get_guest_xml network_info=[{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.434 253665 WARNING nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.458 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.459 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.466 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.468 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.468 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.471 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.471 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.475 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:43 compute-0 ceph-mon[75021]: osdmap e239: 3 total, 3 up, 3 in
Nov 22 09:18:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2131222639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.973 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:43 compute-0 nova_compute[253661]: 2025-11-22 09:18:43.998 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.003 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.047 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192174460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.498 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.500 253665 DEBUG nova.virt.libvirt.vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.500 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.501 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.503 253665 DEBUG nova.objects.instance [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'pci_devices' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.517 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <uuid>dec3a0c0-4e66-47fb-845c-42748f871bd3</uuid>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <name>instance-00000034</name>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-1266648346</nova:name>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:43</nova:creationTime>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:user uuid="22a23d70ca814c9597ead334e32c08a1">tempest-ServersTestJSON-1156873673-project-member</nova:user>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:project uuid="9d1054fa34554ffa8a188984d2db6a60">tempest-ServersTestJSON-1156873673</nova:project>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <nova:port uuid="c10e771b-271b-4855-9004-fe8ee858ec5d">
Nov 22 09:18:44 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="serial">dec3a0c0-4e66-47fb-845c-42748f871bd3</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="uuid">dec3a0c0-4e66-47fb-845c-42748f871bd3</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/dec3a0c0-4e66-47fb-845c-42748f871bd3_disk">
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config">
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:44 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f1:f2:e5"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <target dev="tapc10e771b-27"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/console.log" append="off"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:44 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:44 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:44 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:44 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:44 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.518 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Preparing to wait for external event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.520 253665 DEBUG nova.virt.libvirt.vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.520 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.521 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.521 253665 DEBUG os_vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.522 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.523 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.526 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc10e771b-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc10e771b-27, col_values=(('external_ids', {'iface-id': 'c10e771b-271b-4855-9004-fe8ee858ec5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f1:f2:e5', 'vm-uuid': 'dec3a0c0-4e66-47fb-845c-42748f871bd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:44 compute-0 NetworkManager[48920]: <info>  [1763803124.5309] manager: (tapc10e771b-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.538 253665 INFO os_vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27')
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No VIF found with MAC fa:16:3e:f1:f2:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.590 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Using config drive
Nov 22 09:18:44 compute-0 nova_compute[253661]: 2025-11-22 09:18:44.610 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:44 compute-0 ceph-mon[75021]: pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 09:18:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2131222639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1192174460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.082 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating config drive at /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.089 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprg4lfp3s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.245 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.246 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.246 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.247 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprg4lfp3s" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.274 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.279 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.333 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.335 253665 DEBUG nova.objects.instance [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.365 253665 DEBUG nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:18:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 136 op/s
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.440 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updated VIF entry in instance network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.441 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.453 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.496 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.497 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deleting local config drive /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config because it was imported into RBD.
Nov 22 09:18:45 compute-0 kernel: tapc10e771b-27: entered promiscuous mode
Nov 22 09:18:45 compute-0 NetworkManager[48920]: <info>  [1763803125.5611] manager: (tapc10e771b-27): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 22 09:18:45 compute-0 systemd-udevd[312176]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:45 compute-0 NetworkManager[48920]: <info>  [1763803125.6048] device (tapc10e771b-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:45 compute-0 NetworkManager[48920]: <info>  [1763803125.6061] device (tapc10e771b-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.609 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.617 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:f2:e5 10.100.0.3'], port_security=['fa:16:3e:f1:f2:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dec3a0c0-4e66-47fb-845c-42748f871bd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af1599cd-9805-40cb-9d20-ed7982b07412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d1054fa34554ffa8a188984d2db6a60', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac0f6fad-418e-4cf8-9b02-babdac3fb88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64f6ab83-a798-4bd9-aa90-a1cb3d63c1c0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c10e771b-271b-4855-9004-fe8ee858ec5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:45 compute-0 ovn_controller[152872]: 2025-11-22T09:18:45Z|00491|binding|INFO|Claiming lport c10e771b-271b-4855-9004-fe8ee858ec5d for this chassis.
Nov 22 09:18:45 compute-0 ovn_controller[152872]: 2025-11-22T09:18:45Z|00492|binding|INFO|c10e771b-271b-4855-9004-fe8ee858ec5d: Claiming fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.619 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c10e771b-271b-4855-9004-fe8ee858ec5d in datapath af1599cd-9805-40cb-9d20-ed7982b07412 bound to our chassis
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.621 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af1599cd-9805-40cb-9d20-ed7982b07412
Nov 22 09:18:45 compute-0 ovn_controller[152872]: 2025-11-22T09:18:45Z|00493|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d ovn-installed in OVS
Nov 22 09:18:45 compute-0 ovn_controller[152872]: 2025-11-22T09:18:45Z|00494|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d up in Southbound
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.638 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[993a8fa3-6c6f-4ed1-835a-2a2c61805cf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.639 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf1599cd-91 in ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:45 compute-0 nova_compute[253661]: 2025-11-22 09:18:45.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.642 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf1599cd-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.642 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d81abd7-6401-4b39-b809-6b6da519e8e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c90cb6d5-841a-4e33-96f7-703f03522518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.656 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb4b16d-2719-41c8-bab0-bfedb90ca60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 systemd-machined[215941]: New machine qemu-59-instance-00000034.
Nov 22 09:18:45 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.686 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf9c9f7-b2e1-4dce-af8a-fe16b0f6b19d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.728 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[750602b7-52ac-4def-b3ce-026edff813c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 NetworkManager[48920]: <info>  [1763803125.7358] manager: (tapaf1599cd-90): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8bc325-d697-41ae-953e-4d83889bd910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.780 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd7d2d8-f3d1-4a76-9e10-ada6e780df3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.784 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e878785a-51b8-4731-8f32-1749d5ebf42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 NetworkManager[48920]: <info>  [1763803125.8148] device (tapaf1599cd-90): carrier: link connected
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.820 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[577ed49b-59de-4579-9b10-21f98c6d988b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.841 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e422ec6f-cdfd-4972-81c1-9512277d1b40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf1599cd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:da:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597655, 'reachable_time': 30270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312213, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.865 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40a26ab5-0836-4836-a6b5-15b9875fbd66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:dad5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597655, 'tstamp': 597655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312214, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f098414-e96a-49a4-9a3d-cb2c887f1ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf1599cd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:da:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597655, 'reachable_time': 30270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312215, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[761bc251-9071-4e33-8d01-ff44ed25802a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64d8b908-eccc-4ff7-8238-c738db5cfffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf1599cd-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf1599cd-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:46 compute-0 NetworkManager[48920]: <info>  [1763803126.0127] manager: (tapaf1599cd-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 22 09:18:46 compute-0 kernel: tapaf1599cd-90: entered promiscuous mode
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.029 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf1599cd-90, col_values=(('external_ids', {'iface-id': 'b4f427ac-62da-49fd-b335-ef7e4dcae695'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:46 compute-0 ovn_controller[152872]: 2025-11-22T09:18:46Z|00495|binding|INFO|Releasing lport b4f427ac-62da-49fd-b335-ef7e4dcae695 from this chassis (sb_readonly=0)
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.032 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8cb93f-3269-49c4-aa28-61e586cc44fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.033 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-af1599cd-9805-40cb-9d20-ed7982b07412
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID af1599cd-9805-40cb-9d20-ed7982b07412
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.034 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'env', 'PROCESS_TAG=haproxy-af1599cd-9805-40cb-9d20-ed7982b07412', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af1599cd-9805-40cb-9d20-ed7982b07412.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.165 253665 DEBUG nova.compute.manager [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.168 253665 DEBUG nova.compute.manager [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Processing event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.323 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.325 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3243744, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.325 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Started (Lifecycle Event)
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.328 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.331 253665 INFO nova.virt.libvirt.driver [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance spawned successfully.
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.331 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.348 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.354 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.357 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.358 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.358 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.396 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.397 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3244557, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.397 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Paused (Lifecycle Event)
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.426 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3268192, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Resumed (Lifecycle Event)
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.453 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.456 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.465 253665 INFO nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 10.28 seconds to spawn the instance on the hypervisor.
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.465 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.489 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.525 253665 INFO nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 11.25 seconds to build instance.
Nov 22 09:18:46 compute-0 podman[312290]: 2025-11-22 09:18:46.435185191 +0000 UTC m=+0.028076643 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:46 compute-0 nova_compute[253661]: 2025-11-22 09:18:46.540 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:46 compute-0 podman[312290]: 2025-11-22 09:18:46.621042067 +0000 UTC m=+0.213933499 container create d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:18:46 compute-0 systemd[1]: Started libpod-conmon-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope.
Nov 22 09:18:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c46800e6c5ca462086d92c20678c674e5c07106b89d77525644fa067c6c4bcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:46 compute-0 podman[312290]: 2025-11-22 09:18:46.747295445 +0000 UTC m=+0.340186897 container init d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:18:46 compute-0 podman[312290]: 2025-11-22 09:18:46.753423504 +0000 UTC m=+0.346314936 container start d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:18:46 compute-0 ceph-mon[75021]: pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 136 op/s
Nov 22 09:18:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:46 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : New worker (312311) forked
Nov 22 09:18:46 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : Loading success.
Nov 22 09:18:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 210 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Nov 22 09:18:47 compute-0 ovn_controller[152872]: 2025-11-22T09:18:47Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.376 253665 DEBUG nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.376 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:48 compute-0 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 WARNING nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received unexpected event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with vm_state active and task_state None.
Nov 22 09:18:48 compute-0 ceph-mon[75021]: pgmap v1637: 305 pgs: 305 active+clean; 210 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Nov 22 09:18:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.510 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.511 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.529 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.618 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.619 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.628 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.629 253665 INFO nova.compute.claims [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.794 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:49.891 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:49.892 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:18:49 compute-0 nova_compute[253661]: 2025-11-22 09:18:49.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:50 compute-0 sudo[312340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:50 compute-0 sudo[312340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:50 compute-0 sudo[312340]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:50 compute-0 sudo[312365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:18:50 compute-0 sudo[312365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:50 compute-0 sudo[312365]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300004298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.264 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.270 253665 DEBUG nova.compute.provider_tree [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.286 253665 DEBUG nova.scheduler.client.report [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.304 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.305 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:50 compute-0 sudo[312392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:50 compute-0 sudo[312392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:50 compute-0 sudo[312392]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.353 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.354 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:50 compute-0 sudo[312417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:18:50 compute-0 sudo[312417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.376 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.392 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.502 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.504 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.505 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating image(s)
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.532 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.565 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.592 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.597 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.650 253665 DEBUG nova.policy [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.688 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.690 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.691 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.691 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.721 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.727 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:50 compute-0 ceph-mon[75021]: pgmap v1638: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 09:18:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/300004298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:50 compute-0 sudo[312417]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:18:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:18:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:18:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.950 253665 DEBUG nova.compute.manager [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG nova.compute.manager [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing instance network info cache due to event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:50 compute-0 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:18:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:18:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d1471e07-2751-439c-8b48-e83808a9788e does not exist
Nov 22 09:18:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0575880f-278d-4a0d-a466-6f311d7fdc7a does not exist
Nov 22 09:18:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8822cc69-3995-4e70-ac0b-7b6528e53fdd does not exist
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:18:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:18:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:18:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.123 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:51 compute-0 sudo[312566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:51 compute-0 sudo[312566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:51 compute-0 sudo[312566]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.197 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:51 compute-0 sudo[312606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:18:51 compute-0 sudo[312606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:51 compute-0 sudo[312606]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:51 compute-0 sudo[312659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:51 compute-0 sudo[312659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:51 compute-0 sudo[312659]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:51 compute-0 sudo[312695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:18:51 compute-0 sudo[312695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.407 253665 DEBUG nova.objects.instance [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.431 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Ensure instance console log exists: /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.668784769 +0000 UTC m=+0.038863455 container create c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:18:51 compute-0 systemd[1]: Started libpod-conmon-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope.
Nov 22 09:18:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.650573817 +0000 UTC m=+0.020652523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.775071512 +0000 UTC m=+0.145150218 container init c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.785632439 +0000 UTC m=+0.155711115 container start c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.790552608 +0000 UTC m=+0.160631324 container attach c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:18:51 compute-0 keen_keller[312795]: 167 167
Nov 22 09:18:51 compute-0 systemd[1]: libpod-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope: Deactivated successfully.
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.793623172 +0000 UTC m=+0.163701868 container died c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:18:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 22 09:18:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:18:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0af812a6933bc8243815a941d50c66c341d6b54995f6cee3aa38957180d2185-merged.mount: Deactivated successfully.
Nov 22 09:18:51 compute-0 podman[312778]: 2025-11-22 09:18:51.881474197 +0000 UTC m=+0.251552883 container remove c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.888 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Successfully created port: 8d030734-0e50-4fca-a432-cc2d1c2c9dea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:18:51 compute-0 systemd[1]: libpod-conmon-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope: Deactivated successfully.
Nov 22 09:18:51 compute-0 nova_compute[253661]: 2025-11-22 09:18:51.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:52 compute-0 podman[312818]: 2025-11-22 09:18:52.0861542 +0000 UTC m=+0.053865299 container create e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:18:52 compute-0 systemd[1]: Started libpod-conmon-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope.
Nov 22 09:18:52 compute-0 podman[312818]: 2025-11-22 09:18:52.061253395 +0000 UTC m=+0.028964294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:52 compute-0 podman[312818]: 2025-11-22 09:18:52.189671176 +0000 UTC m=+0.157382075 container init e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:18:52
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', '.rgw.root', 'images']
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:18:52 compute-0 podman[312818]: 2025-11-22 09:18:52.200688004 +0000 UTC m=+0.168398883 container start e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:18:52 compute-0 podman[312818]: 2025-11-22 09:18:52.204765843 +0000 UTC m=+0.172476722 container attach e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:18:52 compute-0 ceph-mon[75021]: pgmap v1639: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 09:18:52 compute-0 ceph-mon[75021]: osdmap e240: 3 total, 3 up, 3 in
Nov 22 09:18:53 compute-0 nova_compute[253661]: 2025-11-22 09:18:53.258 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updated VIF entry in instance network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:53 compute-0 nova_compute[253661]: 2025-11-22 09:18:53.258 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:53 compute-0 nova_compute[253661]: 2025-11-22 09:18:53.279 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:53 compute-0 quirky_booth[312835]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:18:53 compute-0 quirky_booth[312835]: --> relative data size: 1.0
Nov 22 09:18:53 compute-0 quirky_booth[312835]: --> All data devices are unavailable
Nov 22 09:18:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 180 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 663 KiB/s wr, 184 op/s
Nov 22 09:18:53 compute-0 systemd[1]: libpod-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Deactivated successfully.
Nov 22 09:18:53 compute-0 podman[312818]: 2025-11-22 09:18:53.389556061 +0000 UTC m=+1.357266970 container died e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:18:53 compute-0 systemd[1]: libpod-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Consumed 1.087s CPU time.
Nov 22 09:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da-merged.mount: Deactivated successfully.
Nov 22 09:18:53 compute-0 podman[312818]: 2025-11-22 09:18:53.633023067 +0000 UTC m=+1.600733946 container remove e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:18:53 compute-0 systemd[1]: libpod-conmon-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Deactivated successfully.
Nov 22 09:18:53 compute-0 sudo[312695]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:53 compute-0 sudo[312876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:53 compute-0 sudo[312876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:53 compute-0 sudo[312876]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:53 compute-0 sudo[312901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:18:53 compute-0 sudo[312901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:53 compute-0 sudo[312901]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:53 compute-0 sudo[312926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:53 compute-0 sudo[312926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:53 compute-0 sudo[312926]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:53 compute-0 sudo[312951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:18:53 compute-0 sudo[312951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.239 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Successfully updated port: 8d030734-0e50-4fca-a432-cc2d1c2c9dea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.243136652 +0000 UTC m=+0.044420641 container create 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.252 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.253 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:54 compute-0 systemd[1]: Started libpod-conmon-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope.
Nov 22 09:18:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.221165038 +0000 UTC m=+0.022449047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.332125864 +0000 UTC m=+0.133409883 container init 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.338640042 +0000 UTC m=+0.139924041 container start 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 09:18:54 compute-0 naughty_franklin[313033]: 167 167
Nov 22 09:18:54 compute-0 systemd[1]: libpod-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope: Deactivated successfully.
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.35209765 +0000 UTC m=+0.153381649 container attach 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.353090013 +0000 UTC m=+0.154374022 container died 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.364 253665 DEBUG nova.compute.manager [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-changed-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.366 253665 DEBUG nova.compute.manager [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Refreshing instance network info cache due to event network-changed-8d030734-0e50-4fca-a432-cc2d1c2c9dea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.366 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c9e6704ca143a8c678a8d289cf4759964765c68e4ea3989247cfc095cd2c3a0-merged.mount: Deactivated successfully.
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.415 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:18:54 compute-0 podman[313016]: 2025-11-22 09:18:54.436005398 +0000 UTC m=+0.237289387 container remove 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:18:54 compute-0 systemd[1]: libpod-conmon-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope: Deactivated successfully.
Nov 22 09:18:54 compute-0 nova_compute[253661]: 2025-11-22 09:18:54.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:54 compute-0 podman[313058]: 2025-11-22 09:18:54.613152183 +0000 UTC m=+0.027377017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:54 compute-0 podman[313058]: 2025-11-22 09:18:54.800221928 +0000 UTC m=+0.214446752 container create 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:18:54 compute-0 ceph-mon[75021]: pgmap v1641: 305 pgs: 305 active+clean; 180 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 663 KiB/s wr, 184 op/s
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:18:54 compute-0 systemd[1]: Started libpod-conmon-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope.
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:18:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:18:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:54 compute-0 podman[313058]: 2025-11-22 09:18:54.988271037 +0000 UTC m=+0.402495861 container init 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:18:54 compute-0 podman[313058]: 2025-11-22 09:18:54.996800595 +0000 UTC m=+0.411025439 container start 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:18:55 compute-0 podman[313058]: 2025-11-22 09:18:55.008533859 +0000 UTC m=+0.422758813 container attach 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.362 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.404 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.405 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance network_info: |[{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.406 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.406 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Refreshing network info cache for port 8d030734-0e50-4fca-a432-cc2d1c2c9dea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.410 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start _get_guest_xml network_info=[{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.416 253665 WARNING nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.426 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.427 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.432 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.433 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.433 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.440 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.490 253665 DEBUG nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.800 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.802 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.820 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.839 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.840 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.862 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:55 compute-0 cool_hypatia[313075]: {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     "0": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "devices": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "/dev/loop3"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             ],
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_name": "ceph_lv0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_size": "21470642176",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "name": "ceph_lv0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "tags": {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_name": "ceph",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.crush_device_class": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.encrypted": "0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_id": "0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.vdo": "0"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             },
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "vg_name": "ceph_vg0"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         }
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     ],
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     "1": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "devices": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "/dev/loop4"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             ],
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_name": "ceph_lv1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_size": "21470642176",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "name": "ceph_lv1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "tags": {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_name": "ceph",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.crush_device_class": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.encrypted": "0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_id": "1",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.vdo": "0"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             },
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "vg_name": "ceph_vg1"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         }
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     ],
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     "2": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "devices": [
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "/dev/loop5"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             ],
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_name": "ceph_lv2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_size": "21470642176",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "name": "ceph_lv2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "tags": {
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.cluster_name": "ceph",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.crush_device_class": "",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.encrypted": "0",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osd_id": "2",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:                 "ceph.vdo": "0"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             },
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "type": "block",
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:             "vg_name": "ceph_vg2"
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:         }
Nov 22 09:18:55 compute-0 cool_hypatia[313075]:     ]
Nov 22 09:18:55 compute-0 cool_hypatia[313075]: }
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.878 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.878 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.893 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.894 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:55 compute-0 systemd[1]: libpod-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope: Deactivated successfully.
Nov 22 09:18:55 compute-0 podman[313058]: 2025-11-22 09:18:55.897697564 +0000 UTC m=+1.311922378 container died 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.901 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.901 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.904 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:18:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870768261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4-merged.mount: Deactivated successfully.
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.942 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.960 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:55 compute-0 podman[313058]: 2025-11-22 09:18:55.965522363 +0000 UTC m=+1.379747177 container remove 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 09:18:55 compute-0 systemd[1]: libpod-conmon-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope: Deactivated successfully.
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.993 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:55 compute-0 nova_compute[253661]: 2025-11-22 09:18:55.996 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:56 compute-0 sudo[312951]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:56 compute-0 podman[313112]: 2025-11-22 09:18:56.014823761 +0000 UTC m=+0.088584383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:18:56 compute-0 podman[313105]: 2025-11-22 09:18:56.047529355 +0000 UTC m=+0.124320851 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.059 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:56 compute-0 sudo[313174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:56 compute-0 sudo[313174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:56 compute-0 sudo[313174]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:56 compute-0 sudo[313200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:18:56 compute-0 sudo[313200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:56 compute-0 sudo[313200]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.180 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:56 compute-0 sudo[313244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:56 compute-0 sudo[313244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:56 compute-0 sudo[313244]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:56 compute-0 sudo[313270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:18:56 compute-0 sudo[313270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:18:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128944253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.440 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updated VIF entry in instance network info cache for port 8d030734-0e50-4fca-a432-cc2d1c2c9dea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.441 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.457 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.470 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.472 253665 DEBUG nova.virt.libvirt.vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-4
87469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:50Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.473 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.476 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.478 253665 DEBUG nova.objects.instance [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.493 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <uuid>2f0d9dce-1900-41c4-9b69-7e46f34dde81</uuid>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <name>instance-00000035</name>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersTestJSON-server-1545163364</nova:name>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:18:55</nova:creationTime>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <nova:port uuid="8d030734-0e50-4fca-a432-cc2d1c2c9dea">
Nov 22 09:18:56 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="serial">2f0d9dce-1900-41c4-9b69-7e46f34dde81</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="uuid">2f0d9dce-1900-41c4-9b69-7e46f34dde81</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk">
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config">
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:18:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:8b:66:f7"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <target dev="tap8d030734-0e"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/console.log" append="off"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:18:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:18:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:18:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:18:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:18:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Preparing to wait for external event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.495 253665 DEBUG nova.virt.libvirt.vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:50Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.496 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.496 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.497 253665 DEBUG os_vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.501 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.512 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d030734-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.512 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d030734-0e, col_values=(('external_ids', {'iface-id': '8d030734-0e50-4fca-a432-cc2d1c2c9dea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:66:f7', 'vm-uuid': '2f0d9dce-1900-41c4-9b69-7e46f34dde81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:56 compute-0 NetworkManager[48920]: <info>  [1763803136.5155] manager: (tap8d030734-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.526 253665 INFO os_vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e')
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:8b:66:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.596 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Using config drive
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.620 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296007028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.673 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.681 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.693 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:56 compute-0 podman[313358]: 2025-11-22 09:18:56.613708333 +0000 UTC m=+0.025123421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.713 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.714 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.716 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.727 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.728 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:56 compute-0 podman[313358]: 2025-11-22 09:18:56.731642388 +0000 UTC m=+0.143057456 container create c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:18:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:18:56 compute-0 systemd[1]: Started libpod-conmon-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope.
Nov 22 09:18:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:56.894 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.917 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.918 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.935 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:56 compute-0 nova_compute[253661]: 2025-11-22 09:18:56.948 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:56 compute-0 ceph-mon[75021]: pgmap v1642: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Nov 22 09:18:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2870768261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4128944253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:18:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2296007028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:56 compute-0 podman[313358]: 2025-11-22 09:18:56.979913591 +0000 UTC m=+0.391328679 container init c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:18:56 compute-0 podman[313358]: 2025-11-22 09:18:56.987994337 +0000 UTC m=+0.399409405 container start c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:18:56 compute-0 confident_morse[313393]: 167 167
Nov 22 09:18:56 compute-0 systemd[1]: libpod-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope: Deactivated successfully.
Nov 22 09:18:56 compute-0 conmon[313393]: conmon c9d82b0649d181c741f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope/container/memory.events
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.029 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.030 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.030 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating image(s)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.065 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:57 compute-0 podman[313358]: 2025-11-22 09:18:57.081009278 +0000 UTC m=+0.492424376 container attach c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:18:57 compute-0 podman[313358]: 2025-11-22 09:18:57.081547001 +0000 UTC m=+0.492962089 container died c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.093 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.114 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.117 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.167 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.212 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.213 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.214 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.239 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.245 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-466c37357c3b2bcca86c1007c60932cd1431a4328f3f0957c3c471c86fc71f50-merged.mount: Deactivated successfully.
Nov 22 09:18:57 compute-0 podman[313358]: 2025-11-22 09:18:57.304828485 +0000 UTC m=+0.716243553 container remove c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:18:57 compute-0 systemd[1]: libpod-conmon-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope: Deactivated successfully.
Nov 22 09:18:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.422 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating config drive at /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.427 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4n7pran3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:57 compute-0 podman[313532]: 2025-11-22 09:18:57.557992387 +0000 UTC m=+0.082812323 container create f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.566 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4n7pran3" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.596 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:57 compute-0 podman[313532]: 2025-11-22 09:18:57.505759808 +0000 UTC m=+0.030579794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.602 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:57 compute-0 systemd[1]: Started libpod-conmon-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope.
Nov 22 09:18:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505920369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.677 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:57 compute-0 podman[313532]: 2025-11-22 09:18:57.724442932 +0000 UTC m=+0.249262868 container init f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.725 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:57 compute-0 podman[313532]: 2025-11-22 09:18:57.735188803 +0000 UTC m=+0.260008739 container start f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:18:57 compute-0 podman[313532]: 2025-11-22 09:18:57.748473915 +0000 UTC m=+0.273293851 container attach f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.771 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.776 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.851 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.872 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.873 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.876 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.884 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.885 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:18:57 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:18:57 compute-0 NetworkManager[48920]: <info>  [1763803137.9353] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.938 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.939 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:57 compute-0 ovn_controller[152872]: 2025-11-22T09:18:57Z|00496|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:57 compute-0 ovn_controller[152872]: 2025-11-22T09:18:57Z|00497|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:18:57 compute-0 ovn_controller[152872]: 2025-11-22T09:18:57Z|00498|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.958 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:57 compute-0 nova_compute[253661]: 2025-11-22 09:18:57.967 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.969 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.970 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:18:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.972 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:18:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.974 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ddba8c-2ad8-4df5-9617-0687271f9cb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.976 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:18:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1505920369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:57 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:18:57 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000032.scope: Consumed 13.967s CPU time.
Nov 22 09:18:57 compute-0 systemd-machined[215941]: Machine qemu-58-instance-00000032 terminated.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.025 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.026 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.029 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deleting local config drive /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config because it was imported into RBD.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.047 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.068 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.068 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Ensure instance console log exists: /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:58 compute-0 NetworkManager[48920]: <info>  [1763803138.0911] manager: (tap8d030734-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Nov 22 09:18:58 compute-0 kernel: tap8d030734-0e: entered promiscuous mode
Nov 22 09:18:58 compute-0 systemd-udevd[313656]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:18:58 compute-0 ovn_controller[152872]: 2025-11-22T09:18:58Z|00499|binding|INFO|Claiming lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea for this chassis.
Nov 22 09:18:58 compute-0 ovn_controller[152872]: 2025-11-22T09:18:58Z|00500|binding|INFO|8d030734-0e50-4fca-a432-cc2d1c2c9dea: Claiming fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.101 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.108 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:66:f7 10.100.0.10'], port_security=['fa:16:3e:8b:66:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2f0d9dce-1900-41c4-9b69-7e46f34dde81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d030734-0e50-4fca-a432-cc2d1c2c9dea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:18:58 compute-0 NetworkManager[48920]: <info>  [1763803138.1103] device (tap8d030734-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:18:58 compute-0 NetworkManager[48920]: <info>  [1763803138.1114] device (tap8d030734-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.121 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.123 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.123 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating image(s)
Nov 22 09:18:58 compute-0 ovn_controller[152872]: 2025-11-22T09:18:58Z|00501|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea ovn-installed in OVS
Nov 22 09:18:58 compute-0 ovn_controller[152872]: 2025-11-22T09:18:58Z|00502|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea up in Southbound
Nov 22 09:18:58 compute-0 systemd-machined[215941]: New machine qemu-60-instance-00000035.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.150 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:58 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Nov 22 09:18:58 compute-0 NetworkManager[48920]: <info>  [1763803138.1631] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.195 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.254 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:58 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : haproxy version is 2.8.14-c23fe91
Nov 22 09:18:58 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : path to executable is /usr/sbin/haproxy
Nov 22 09:18:58 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [WARNING]  (311834) : Exiting Master process...
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.259 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:58 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [ALERT]    (311834) : Current worker (311836) exited with code 143 (Terminated)
Nov 22 09:18:58 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [WARNING]  (311834) : All workers exited. Exiting... (0)
Nov 22 09:18:58 compute-0 systemd[1]: libpod-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope: Deactivated successfully.
Nov 22 09:18:58 compute-0 podman[313702]: 2025-11-22 09:18:58.270610813 +0000 UTC m=+0.176107801 container died 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.310 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Successfully created port: 88cfebb7-b545-4137-8094-3fa68a13f42b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.313 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.356 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.356 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.357 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.357 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.381 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.384 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a5932d-6547-4c01-9c71-0907c65247a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.418 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb-userdata-shm.mount: Deactivated successfully.
Nov 22 09:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-999d845ec8bd02cd92c7e0cd63eb74e8c346a62a6f4728db53982c6a161b0456-merged.mount: Deactivated successfully.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.482 253665 DEBUG nova.compute.manager [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.483 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.484 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.484 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.485 253665 DEBUG nova.compute.manager [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Processing event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.528 253665 INFO nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance shutdown successfully after 13 seconds.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.535 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.535 253665 DEBUG nova.objects.instance [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.541 253665 DEBUG nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.541 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 WARNING nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state powering-off.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.546 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.602 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:58 compute-0 podman[313702]: 2025-11-22 09:18:58.636857742 +0000 UTC m=+0.542354730 container cleanup 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:18:58 compute-0 systemd[1]: libpod-conmon-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope: Deactivated successfully.
Nov 22 09:18:58 compute-0 podman[313893]: 2025-11-22 09:18:58.776943476 +0000 UTC m=+0.104408778 container remove 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b90f6c1-a5fe-463f-b50f-24082167ae74]: (4, ('Sat Nov 22 09:18:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb)\n3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb\nSat Nov 22 09:18:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb)\n3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc38c6f0-df94-4148-be72-f729aa824071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.789 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.793 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.817 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f53e97a-a061-49f0-8541-69a189482bde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.835 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e4fdb-c152-420f-b222-f2201a613308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0ca317-5999-492b-a57b-eff472fb05e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.862 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11bf2405-f3d9-4c0c-9e34-22c4459104c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596505, 'reachable_time': 26711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313956, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.868 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:18:58 compute-0 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 09:18:58 compute-0 gracious_shockley[313574]: {
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_id": 1,
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "type": "bluestore"
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     },
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_id": 0,
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "type": "bluestore"
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     },
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_id": 2,
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:         "type": "bluestore"
Nov 22 09:18:58 compute-0 gracious_shockley[313574]:     }
Nov 22 09:18:58 compute-0 gracious_shockley[313574]: }
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.868 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[30faf1ef-6170-4868-b35d-ed8e114a3764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.873 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d030734-0e50-4fca-a432-cc2d1c2c9dea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.874 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.886 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.8864884, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.887 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Started (Lifecycle Event)
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[155405cf-84d8-466d-8ab3-f886f483f7a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.890 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.890 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.895 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.897 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7455d4d9-cd11-4393-8bb6-7ccdf5a3359c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.898 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[850fd33b-37ce-4c1e-92c3-dbdb5ee07ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.899 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance spawned successfully.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.899 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.904 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.912 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[36bac71d-da25-4006-8b2c-305a8664ec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 systemd[1]: libpod-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Deactivated successfully.
Nov 22 09:18:58 compute-0 systemd[1]: libpod-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Consumed 1.108s CPU time.
Nov 22 09:18:58 compute-0 podman[313532]: 2025-11-22 09:18:58.92155905 +0000 UTC m=+1.446379006 container died f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.924 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.933 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.934 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.934 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.935 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.935 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.936 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7d8c28-8713-4e24-82a3-f3f49166fe30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.944 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.945 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.889366, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Paused (Lifecycle Event)
Nov 22 09:18:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:18:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604372826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.981 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.990 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7c328baa-5c59-4c27-a3f2-8e508e5b8d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.997 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a853816-7985-4f66-bcc8-fbff13437645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:58 compute-0 NetworkManager[48920]: <info>  [1763803138.9991] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.998 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.8954682, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:18:58 compute-0 nova_compute[253661]: 2025-11-22 09:18:58.999 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Resumed (Lifecycle Event)
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.008 253665 INFO nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 8.51 seconds to spawn the instance on the hypervisor.
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.009 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1-merged.mount: Deactivated successfully.
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.025 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.038 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.041 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.045 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.048 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[599d8a5d-d113-4112-a728-474265a63ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.052 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[696fa02a-9cf4-43d9-8738-133ae90a599a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 ceph-mon[75021]: pgmap v1643: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 22 09:18:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3604372826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.074 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:18:59 compute-0 NetworkManager[48920]: <info>  [1763803139.0767] device (tapd93e3720-b0): carrier: link connected
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd646734-8466-4551-9280-04395a669a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.082 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.107 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83da51f2-9a06-426b-95de-715593823679]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598981, 'reachable_time': 18964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314002, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.111 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.112 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.116 253665 INFO nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 9.53 seconds to build instance.
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d198bb59-be08-4456-8450-42d48e9c23a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598981, 'tstamp': 598981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314003, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 podman[313532]: 2025-11-22 09:18:59.135095599 +0000 UTC m=+1.659915545 container remove f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.147 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:59 compute-0 systemd[1]: libpod-conmon-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Deactivated successfully.
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b53f76-f0e0-4294-9655-4b98015790f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598981, 'reachable_time': 18964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314004, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 sudo[313270]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.169 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.170 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:18:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:18:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:18:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.194 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.203 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a5932d-6547-4c01-9c71-0907c65247a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:18:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c4cb6b3d-54cc-4daa-8069-dd830fd9885f does not exist
Nov 22 09:18:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev cb2f250d-18c4-4c34-bbd0-6d089fb7d7d9 does not exist
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.211 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ce6fcc-468d-4ba4-a978-ca86cf26ebee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.246 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:18:59 compute-0 sudo[314007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:18:59 compute-0 sudo[314007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:59 compute-0 sudo[314007]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.315 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c13f8f-c7cb-4395-9789-676430f0906a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:59 compute-0 NetworkManager[48920]: <info>  [1763803139.3221] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Nov 22 09:18:59 compute-0 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:18:59 compute-0 ovn_controller[152872]: 2025-11-22T09:18:59Z|00503|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.353 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.355 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf0bc01-8527-43c8-9f1d-6850fbfd3b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.358 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:18:59 compute-0 sudo[314069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:18:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.359 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:18:59 compute-0 sudo[314069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:18:59 compute-0 sudo[314069]: pam_unix(sudo:session): session closed for user root
Nov 22 09:18:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.403 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.404 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.404 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating image(s)
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.430 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.464 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.495 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.503 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.567 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.580 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.628 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG nova.network.neutron [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'info_cache' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.631 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.632 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.632 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.633 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.655 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.661 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 64142c1c-95e0-4db4-b743-bb94c85a208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.707 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.726 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.727 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Ensure instance console log exists: /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.727 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.728 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:18:59 compute-0 nova_compute[253661]: 2025-11-22 09:18:59.728 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:18:59 compute-0 podman[314250]: 2025-11-22 09:18:59.816792013 +0000 UTC m=+0.063674049 container create 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:18:59 compute-0 systemd[1]: Started libpod-conmon-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope.
Nov 22 09:18:59 compute-0 podman[314250]: 2025-11-22 09:18:59.778163514 +0000 UTC m=+0.025045550 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:18:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e78995b3caad89fb07d376941b58acfb371f548ff3ea9aaf066112a811b999c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:18:59 compute-0 podman[314250]: 2025-11-22 09:18:59.951095995 +0000 UTC m=+0.197978041 container init 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:18:59 compute-0 podman[314266]: 2025-11-22 09:18:59.957536682 +0000 UTC m=+0.102694426 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:18:59 compute-0 podman[314250]: 2025-11-22 09:18:59.958721941 +0000 UTC m=+0.205603977 container start 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:18:59 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : New worker (314301) forked
Nov 22 09:18:59 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : Loading success.
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.053 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 64142c1c-95e0-4db4-b743-bb94c85a208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.114 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:19:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.247 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Successfully created port: 7612be10-c22f-4d60-89f7-232e865b6524 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.252 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Successfully updated port: 88cfebb7-b545-4137-8094-3fa68a13f42b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.261 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.271 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.272 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.272 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Ensure instance console log exists: /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.275 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.275 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.464 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.893 253665 DEBUG nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.895 253665 DEBUG nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.895 253665 WARNING nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state active and task_state None.
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.952 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.953 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.953 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.954 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.954 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 WARNING nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state stopped and task_state powering-on.
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-changed-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Refreshing instance network info cache due to event network-changed-88cfebb7-b545-4137-8094-3fa68a13f42b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:00 compute-0 nova_compute[253661]: 2025-11-22 09:19:00.956 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.056 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.056 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.057 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.059 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Successfully created port: 11f926cd-f731-4de9-861e-5842f91f48df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.064 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.065 253665 DEBUG nova.objects.instance [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'flavor' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.086 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:19:01 compute-0 ceph-mon[75021]: pgmap v1644: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.287 253665 DEBUG nova.network.neutron [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:01 compute-0 ovn_controller[152872]: 2025-11-22T09:19:01Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 09:19:01 compute-0 ovn_controller[152872]: 2025-11-22T09:19:01Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.318 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.344 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.344 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.356 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.367 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.370 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.371 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.371 253665 DEBUG os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.374 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.381 253665 INFO os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:19:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.389 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.394 253665 WARNING nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.398 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.399 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.402 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.402 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.406 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.406 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.418 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.780 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Successfully updated port: 7612be10-c22f-4d60-89f7-232e865b6524 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.793 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.795 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.796 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.796 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.824 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance network_info: |[{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Refreshing network info cache for port 88cfebb7-b545-4137-8094-3fa68a13f42b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.829 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start _get_guest_xml network_info=[{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.834 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.840 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.840 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.847 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.847 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.850 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.850 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.853 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3084654943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.939 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:01 compute-0 nova_compute[253661]: 2025-11-22 09:19:01.971 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.219 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:02 compute-0 ceph-mon[75021]: pgmap v1645: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 09:19:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3084654943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083887834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.366 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.419 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439856252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.427 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.486 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.488 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.488 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.489 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.491 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.505 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <name>instance-00000032</name>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:01</nova:creationTime>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 09:19:02 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:38:8e"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="tapa288a5e5-7b"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:02 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:02 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.507 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.507 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.509 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.5174] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.523 253665 INFO os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.6243] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Nov 22 09:19:02 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 ovn_controller[152872]: 2025-11-22T09:19:02Z|00504|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:19:02 compute-0 ovn_controller[152872]: 2025-11-22T09:19:02Z|00505|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.641 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.643 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.645 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:02 compute-0 ovn_controller[152872]: 2025-11-22T09:19:02Z|00506|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:19:02 compute-0 ovn_controller[152872]: 2025-11-22T09:19:02Z|00507|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:19:02 compute-0 systemd-udevd[314518]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.662 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73a1d481-710c-4ab7-b190-2635f3ed966e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.663 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.666 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017137695757634954 of space, bias 1.0, pg target 0.5141308727290487 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe0a4bb-b5c8-41c6-b43c-f9e236a184e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:19:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.668 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf30bf4-a84b-463a-a962-17bfdc946616]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.6725] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.6735] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:02 compute-0 systemd-machined[215941]: New machine qemu-61-instance-00000032.
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.682 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4b29ee8a-8f8e-4cf4-b699-dec17c456df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000032.
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fca989-5518-4458-978e-7277a5cd745c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.744 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Successfully updated port: 11f926cd-f731-4de9-861e-5842f91f48df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.746 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[438b3c6a-80bd-4b2a-8d51-5e55a2ab6329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59c79439-664b-43ad-92c3-73dbc9dfc19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.7535] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Nov 22 09:19:02 compute-0 systemd-udevd[314523]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.770 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.770 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.771 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.805 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c634a370-2a94-44ae-98ec-81f4eba28962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.810 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[15655f4a-e260-41a0-9bf1-96e8c945c46a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.8388] device (tapebc42408-70): carrier: link connected
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.848 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c26ecdc7-7712-4412-9c52-36f47456628b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.870 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6385263-5cad-47aa-93ea-93074fa02358]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599357, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314552, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.896 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f63db4e5-a38a-4853-a078-4bd2f262e873]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599357, 'tstamp': 599357}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314553, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.916 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46249882-8d0b-43da-94b6-130ca86ca14b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599357, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314554, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126397494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.949 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7021e375-7427-4b24-9fcf-05972c71064e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.956 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.958 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:56Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.959 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.960 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.961 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.974 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <uuid>045614f9-cfb4-4a52-996e-e880cbdf7dcd</uuid>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <name>instance-00000036</name>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-1</nova:name>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:01</nova:creationTime>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <nova:port uuid="88cfebb7-b545-4137-8094-3fa68a13f42b">
Nov 22 09:19:02 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="serial">045614f9-cfb4-4a52-996e-e880cbdf7dcd</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="uuid">045614f9-cfb4-4a52-996e-e880cbdf7dcd</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:02 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:54:8a:3b"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <target dev="tap88cfebb7-b5"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/console.log" append="off"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:02 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:02 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:02 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:02 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:02 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.979 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Preparing to wait for external event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.981 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:56Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.981 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.982 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.983 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.984 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.984 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.986 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88cfebb7-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.987 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88cfebb7-b5, col_values=(('external_ids', {'iface-id': '88cfebb7-b545-4137-8094-3fa68a13f42b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:8a:3b', 'vm-uuid': '045614f9-cfb4-4a52-996e-e880cbdf7dcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 NetworkManager[48920]: <info>  [1763803142.9895] manager: (tap88cfebb7-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:02 compute-0 nova_compute[253661]: 2025-11-22 09:19:02.995 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5')
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b12c5fc-4e27-42f5-910f-734b92f71483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:03 compute-0 NetworkManager[48920]: <info>  [1763803143.0673] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 22 09:19:03 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.069 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.074 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 ovn_controller[152872]: 2025-11-22T09:19:03Z|00508|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.078 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.078 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.079 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:54:8a:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.079 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Using config drive
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.101 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.102 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab0408a-a822-4555-a5b8-3cc8a275c00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.103 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.105 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.102 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.228 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3083887834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/439856252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/126397494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 308 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 782 KiB/s rd, 5.8 MiB/s wr, 132 op/s
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.390 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating config drive at /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.396 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnfd5fy7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.450 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.451 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803143.3879097, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.451 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.462 253665 DEBUG nova.compute.manager [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.464 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-changed-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.464 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Refreshing instance network info cache due to event network-changed-7612be10-c22f-4d60-89f7-232e865b6524. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.465 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.471 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.471 253665 DEBUG nova.compute.manager [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.476 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.488 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.514 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.514 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803143.425389, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.515 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.547 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.550 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:03 compute-0 podman[314655]: 2025-11-22 09:19:03.557217199 +0000 UTC m=+0.057373775 container create 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.569 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnfd5fy7" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:03 compute-0 systemd[1]: Started libpod-conmon-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.597 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.603 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1467195d1769cb3d2de1fc7c53fe46b5d8e844ec9d0e75fe4e8f0d6486282a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:19:03 compute-0 podman[314655]: 2025-11-22 09:19:03.53174693 +0000 UTC m=+0.031903526 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:19:03 compute-0 podman[314655]: 2025-11-22 09:19:03.643110285 +0000 UTC m=+0.143266871 container init 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:19:03 compute-0 podman[314655]: 2025-11-22 09:19:03.648862815 +0000 UTC m=+0.149019391 container start 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:19:03 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : New worker (314695) forked
Nov 22 09:19:03 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : Loading success.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.766 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.783 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.784 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance network_info: |[{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.784 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.785 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Refreshing network info cache for port 7612be10-c22f-4d60-89f7-232e865b6524 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.788 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start _get_guest_xml network_info=[{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.795 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.801 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.802 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.803 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.804 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deleting local config drive /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config because it was imported into RBD.
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.813 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.814 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.814 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.815 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.815 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.818 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.821 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:03 compute-0 NetworkManager[48920]: <info>  [1763803143.8723] manager: (tap88cfebb7-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Nov 22 09:19:03 compute-0 kernel: tap88cfebb7-b5: entered promiscuous mode
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.871 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updated VIF entry in instance network info cache for port 88cfebb7-b545-4137-8094-3fa68a13f42b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.872 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:03 compute-0 ovn_controller[152872]: 2025-11-22T09:19:03Z|00509|binding|INFO|Claiming lport 88cfebb7-b545-4137-8094-3fa68a13f42b for this chassis.
Nov 22 09:19:03 compute-0 ovn_controller[152872]: 2025-11-22T09:19:03Z|00510|binding|INFO|88cfebb7-b545-4137-8094-3fa68a13f42b: Claiming fa:16:3e:54:8a:3b 10.100.0.10
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.888 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8a:3b 10.100.0.10'], port_security=['fa:16:3e:54:8a:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '045614f9-cfb4-4a52-996e-e880cbdf7dcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88cfebb7-b545-4137-8094-3fa68a13f42b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:03 compute-0 NetworkManager[48920]: <info>  [1763803143.8922] device (tap88cfebb7-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:03 compute-0 NetworkManager[48920]: <info>  [1763803143.8933] device (tap88cfebb7-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88cfebb7-b545-4137-8094-3fa68a13f42b in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.899 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 ovn_controller[152872]: 2025-11-22T09:19:03Z|00511|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b ovn-installed in OVS
Nov 22 09:19:03 compute-0 ovn_controller[152872]: 2025-11-22T09:19:03Z|00512|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b up in Southbound
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.903 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:03 compute-0 nova_compute[253661]: 2025-11-22 09:19:03.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.916 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27d70246-7024-48bb-8412-3892cf21564c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.919 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6001c81-61 in ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.922 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6001c81-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.922 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e795bcbc-05ed-40ce-811f-4f83bfb8195d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[178bd60f-9f38-4b09-9d46-48cd6b8d2caa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 systemd-machined[215941]: New machine qemu-62-instance-00000036.
Nov 22 09:19:03 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.941 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8a70bac4-58b4-4323-b376-5f98b53b3602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f731f8-3331-4b68-9340-a22248a3f143]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.013 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[80416119-4b3c-4202-b6d1-dd3bbc977569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 NetworkManager[48920]: <info>  [1763803144.0210] manager: (tapf6001c81-60): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdb5f32-1030-4a7a-b9ac-6b2d7ba65e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.079 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5514d509-89a1-40d8-9f18-d610ef56982f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.085 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6addf0cc-27c8-4ca5-803d-3ad294cf25be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 NetworkManager[48920]: <info>  [1763803144.1186] device (tapf6001c81-60): carrier: link connected
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.127 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ce2832-ed26-4606-a81f-a2575532ed99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f749a7c-596c-43da-a050-d0052b0e9800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314772, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8fef96-d4d7-4c5a-84e6-f05f8bf93351]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:70c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599485, 'tstamp': 599485}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314773, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af915c29-7475-4a4b-85a1-85fe038f4510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314774, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[223f51f2-64b3-403e-8db1-bed57e125a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.273 253665 DEBUG nova.compute.manager [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.273 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.277 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.281 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.282 253665 DEBUG nova.compute.manager [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Processing event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:19:04 compute-0 ceph-mon[75021]: pgmap v1646: 305 pgs: 305 active+clean; 308 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 782 KiB/s rd, 5.8 MiB/s wr, 132 op/s
Nov 22 09:19:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962065208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdcd05c-1d0f-4632-ae3f-2a558d216181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.320 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:04 compute-0 NetworkManager[48920]: <info>  [1763803144.3234] manager: (tapf6001c81-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Nov 22 09:19:04 compute-0 kernel: tapf6001c81-60: entered promiscuous mode
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.326 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 ovn_controller[152872]: 2025-11-22T09:19:04Z|00513|binding|INFO|Releasing lport a0af7d9b-5116-431a-ad00-3df7641dc72f from this chassis (sb_readonly=0)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.341 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.351 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40dd5594-7d72-4b3a-ae32-740c6b29797b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.354 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.356 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'env', 'PROCESS_TAG=haproxy-f6001c81-6c53-4678-b8e8-39c35706be23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6001c81-6c53-4678-b8e8-39c35706be23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.386 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.393 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.453 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.491 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.492 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance network_info: |[{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.495 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start _get_guest_xml network_info=[{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.502 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.511 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.512 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.521 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:04 compute-0 podman[314882]: 2025-11-22 09:19:04.832003553 +0000 UTC m=+0.074242125 container create ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:19:04 compute-0 systemd[1]: Started libpod-conmon-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope.
Nov 22 09:19:04 compute-0 podman[314882]: 2025-11-22 09:19:04.794294178 +0000 UTC m=+0.036532760 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:19:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:19:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708818679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aea65242036820cb6858ded2c2732f7a57b937b65a8c1c3d87746de9de9af3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.909 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9060185, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.909 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Started (Lifecycle Event)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.911 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.915 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:04 compute-0 podman[314882]: 2025-11-22 09:19:04.919555071 +0000 UTC m=+0.161793663 container init ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.919 253665 INFO nova.virt.libvirt.driver [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance spawned successfully.
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.920 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:04 compute-0 podman[314882]: 2025-11-22 09:19:04.9277477 +0000 UTC m=+0.169986272 container start ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.927 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.949 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:04 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : New worker (314928) forked
Nov 22 09:19:04 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : Loading success.
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.959 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.960 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.960 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.961 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.961 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.962 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.966 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9062157, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Paused (Lifecycle Event)
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.969 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempes
t-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:58Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.969 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.970 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.971 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.989 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <uuid>b4a5932d-6547-4c01-9c71-0907c65247a1</uuid>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <name>instance-00000037</name>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-2</nova:name>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:03</nova:creationTime>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <nova:port uuid="7612be10-c22f-4d60-89f7-232e865b6524">
Nov 22 09:19:04 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="serial">b4a5932d-6547-4c01-9c71-0907c65247a1</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="uuid">b4a5932d-6547-4c01-9c71-0907c65247a1</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b4a5932d-6547-4c01-9c71-0907c65247a1_disk">
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config">
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b3:cc:cd"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <target dev="tap7612be10-c2"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/console.log" append="off"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:04 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:04 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:04 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:04 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:04 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Preparing to wait for external event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:58Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.992 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.992 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.994 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.997 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.998 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7612be10-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:04 compute-0 nova_compute[253661]: 2025-11-22 09:19:04.999 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7612be10-c2, col_values=(('external_ids', {'iface-id': '7612be10-c22f-4d60-89f7-232e865b6524', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:cc:cd', 'vm-uuid': 'b4a5932d-6547-4c01-9c71-0907c65247a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 NetworkManager[48920]: <info>  [1763803145.0025] manager: (tap7612be10-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.010 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2')
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.011 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9147544, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.011 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Resumed (Lifecycle Event)
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.015 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 7.99 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.015 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.027 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.034 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140546305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.054 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.061 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.089 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.096 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.212 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 9.35 seconds to build instance.
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.222 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:b3:cc:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Using config drive
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.249 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.262 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/962065208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1708818679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3140546305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.353 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updated VIF entry in instance network info cache for port 7612be10-c22f-4d60-89f7-232e865b6524. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.353 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.365 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.366 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-changed-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.366 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Refreshing instance network info cache due to event network-changed-11f926cd-f731-4de9-861e-5842f91f48df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Refreshing network info cache for port 11f926cd-f731-4de9-861e-5842f91f48df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.8 MiB/s wr, 252 op/s
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.561 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating config drive at /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.568 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkh4whl1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554906053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.630 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.633 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:59Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.634 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.635 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.636 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.649 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <uuid>64142c1c-95e0-4db4-b743-bb94c85a208f</uuid>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <name>instance-00000038</name>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-3</nova:name>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:04</nova:creationTime>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <nova:port uuid="11f926cd-f731-4de9-861e-5842f91f48df">
Nov 22 09:19:05 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="serial">64142c1c-95e0-4db4-b743-bb94c85a208f</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="uuid">64142c1c-95e0-4db4-b743-bb94c85a208f</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/64142c1c-95e0-4db4-b743-bb94c85a208f_disk">
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config">
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:05 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:41:54:ea"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <target dev="tap11f926cd-f7"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/console.log" append="off"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:05 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:05 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:05 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:05 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:05 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Preparing to wait for external event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.651 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.652 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:59Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.652 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.653 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.654 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.655 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.656 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 WARNING nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.667 253665 WARNING nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11f926cd-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11f926cd-f7, col_values=(('external_ids', {'iface-id': '11f926cd-f731-4de9-861e-5842f91f48df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:54:ea', 'vm-uuid': '64142c1c-95e0-4db4-b743-bb94c85a208f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 NetworkManager[48920]: <info>  [1763803145.6717] manager: (tap11f926cd-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.680 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7')
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:41:54:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.726 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Using config drive
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.753 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.761 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkh4whl1d" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.795 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:05 compute-0 nova_compute[253661]: 2025-11-22 09:19:05.803 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.063 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.064 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deleting local config drive /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config because it was imported into RBD.
Nov 22 09:19:06 compute-0 kernel: tap7612be10-c2: entered promiscuous mode
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.1157] manager: (tap7612be10-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00514|binding|INFO|Claiming lport 7612be10-c22f-4d60-89f7-232e865b6524 for this chassis.
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00515|binding|INFO|7612be10-c22f-4d60-89f7-232e865b6524: Claiming fa:16:3e:b3:cc:cd 10.100.0.9
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.129 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:cc:cd 10.100.0.9'], port_security=['fa:16:3e:b3:cc:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4a5932d-6547-4c01-9c71-0907c65247a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7612be10-c22f-4d60-89f7-232e865b6524) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.131 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7612be10-c22f-4d60-89f7-232e865b6524 in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.133 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00516|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 ovn-installed in OVS
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00517|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 up in Southbound
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a582a09-88e2-414f-977f-6f05339ed75a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 systemd-machined[215941]: New machine qemu-63-instance-00000037.
Nov 22 09:19:06 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.193 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[921b2b7a-5099-4385-906a-ef6319a3513f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 systemd-udevd[315077]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.197 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a2833e-ac9a-4f11-b1e9-6fed9c50bd05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.2115] device (tap7612be10-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.2124] device (tap7612be10-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.246 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4181948a-ce90-4b63-afeb-58a0fb598e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.263 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb8d359-19bc-4c35-9c98-4ba8b3592e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315087, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7fe806-d86e-45ad-bd31-163ea3a4b10f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315089, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315089, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.279 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:06 compute-0 ceph-mon[75021]: pgmap v1647: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.8 MiB/s wr, 252 op/s
Nov 22 09:19:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2554906053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.328 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.379 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating config drive at /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.385 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuz91_n2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.435 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.435 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 WARNING nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state active and task_state None.
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Processing event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.538 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuz91_n2f" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.569 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.576 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.645 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updated VIF entry in instance network info cache for port 11f926cd-f731-4de9-861e-5842f91f48df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.646 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.660 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.715 253665 INFO nova.compute.manager [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Pausing
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.716 253665 DEBUG nova.objects.instance [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.742 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803146.7418752, 636b1046-fff8-4a45-8a14-04010b2f282e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.742 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Paused (Lifecycle Event)
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.745 253665 DEBUG nova.compute.manager [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.749 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.750 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deleting local config drive /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config because it was imported into RBD.
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.766 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:06 compute-0 kernel: tap11f926cd-f7: entered promiscuous mode
Nov 22 09:19:06 compute-0 systemd-udevd[315081]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.8148] manager: (tap11f926cd-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00518|binding|INFO|Claiming lport 11f926cd-f731-4de9-861e-5842f91f48df for this chassis.
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00519|binding|INFO|11f926cd-f731-4de9-861e-5842f91f48df: Claiming fa:16:3e:41:54:ea 10.100.0.5
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.8286] device (tap11f926cd-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:06 compute-0 NetworkManager[48920]: <info>  [1763803146.8294] device (tap11f926cd-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.832 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:54:ea 10.100.0.5'], port_security=['fa:16:3e:41:54:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64142c1c-95e0-4db4-b743-bb94c85a208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=11f926cd-f731-4de9-861e-5842f91f48df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.833 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 11f926cd-f731-4de9-861e-5842f91f48df in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.836 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00520|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df ovn-installed in OVS
Nov 22 09:19:06 compute-0 ovn_controller[152872]: 2025-11-22T09:19:06Z|00521|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df up in Southbound
Nov 22 09:19:06 compute-0 nova_compute[253661]: 2025-11-22 09:19:06.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.853 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94865e7a-374c-4b2e-a0fe-bd76e2108be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 systemd-machined[215941]: New machine qemu-64-instance-00000038.
Nov 22 09:19:06 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.895 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4e30dd-93e0-4e27-858b-2d4d4c0b52b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.899 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[06f50ee8-7e02-4ef0-8949-d5db41c24ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.950 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[df13161b-c96a-4932-b6ee-72a1a2573814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e08c85-6c3c-4696-8117-e86ada17d9e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315154, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec5491-7315-4b5b-a95f-c7dc65ea7bcd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315156, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315156, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.003 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.011 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 279 op/s
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.512 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.512294, 64142c1c-95e0-4db4-b743-bb94c85a208f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.513 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Started (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.530 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.544 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5127084, 64142c1c-95e0-4db4-b743-bb94c85a208f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Paused (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.561 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.565 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.580 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.594 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.595 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5936224, b4a5932d-6547-4c01-9c71-0907c65247a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.595 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Started (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.600 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.605 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance spawned successfully.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.605 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.610 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.612 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5938897, b4a5932d-6547-4c01-9c71-0907c65247a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Paused (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.646 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.652 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.652 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.653 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.660 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5993876, b4a5932d-6547-4c01-9c71-0907c65247a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.661 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Resumed (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.684 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.713 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.726 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 9.60 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.726 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Processing event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 WARNING nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received unexpected event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with vm_state building and task_state spawning.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.740 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.751 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.7440078, 64142c1c-95e0-4db4-b743-bb94c85a208f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.751 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Resumed (Lifecycle Event)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.763 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.795 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.809 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 11.89 seconds to build instance.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.811 253665 INFO nova.virt.libvirt.driver [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance spawned successfully.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.812 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.814 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.825 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.826 253665 INFO nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Terminating instance
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.826 253665 DEBUG nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.834 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.838 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.840 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.840 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.842 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:07 compute-0 kernel: tapc10e771b-27 (unregistering): left promiscuous mode
Nov 22 09:19:07 compute-0 NetworkManager[48920]: <info>  [1763803147.8849] device (tapc10e771b-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:07 compute-0 ovn_controller[152872]: 2025-11-22T09:19:07Z|00522|binding|INFO|Releasing lport c10e771b-271b-4855-9004-fe8ee858ec5d from this chassis (sb_readonly=0)
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:07 compute-0 ovn_controller[152872]: 2025-11-22T09:19:07Z|00523|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d down in Southbound
Nov 22 09:19:07 compute-0 ovn_controller[152872]: 2025-11-22T09:19:07Z|00524|binding|INFO|Removing iface tapc10e771b-27 ovn-installed in OVS
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:f2:e5 10.100.0.3'], port_security=['fa:16:3e:f1:f2:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dec3a0c0-4e66-47fb-845c-42748f871bd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af1599cd-9805-40cb-9d20-ed7982b07412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d1054fa34554ffa8a188984d2db6a60', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac0f6fad-418e-4cf8-9b02-babdac3fb88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64f6ab83-a798-4bd9-aa90-a1cb3d63c1c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c10e771b-271b-4855-9004-fe8ee858ec5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c10e771b-271b-4855-9004-fe8ee858ec5d in datapath af1599cd-9805-40cb-9d20-ed7982b07412 unbound from our chassis
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.910 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 8.51 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.911 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.912 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af1599cd-9805-40cb-9d20-ed7982b07412, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2224df67-6de1-4c88-b4d1-48ef44947d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.917 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 namespace which is not needed anymore
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:07 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 22 09:19:07 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 14.150s CPU time.
Nov 22 09:19:07 compute-0 systemd-machined[215941]: Machine qemu-59-instance-00000034 terminated.
Nov 22 09:19:07 compute-0 nova_compute[253661]: 2025-11-22 09:19:07.982 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 11.94 seconds to build instance.
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.004 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:08 compute-0 NetworkManager[48920]: <info>  [1763803148.0484] manager: (tapc10e771b-27): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.052 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : Exiting Master process...
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : Exiting Master process...
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [ALERT]    (312309) : Current worker (312311) exited with code 143 (Terminated)
Nov 22 09:19:08 compute-0 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : All workers exited. Exiting... (0)
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.067 253665 INFO nova.virt.libvirt.driver [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance destroyed successfully.
Nov 22 09:19:08 compute-0 systemd[1]: libpod-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope: Deactivated successfully.
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.069 253665 DEBUG nova.objects.instance [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'resources' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:08 compute-0 podman[315260]: 2025-11-22 09:19:08.077742081 +0000 UTC m=+0.061673300 container died d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.081 253665 DEBUG nova.virt.libvirt.vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.082 253665 DEBUG nova.network.os_vif_util [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.083 253665 DEBUG nova.network.os_vif_util [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.083 253665 DEBUG os_vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.085 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc10e771b-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.090 253665 INFO os_vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27')
Nov 22 09:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c46800e6c5ca462086d92c20678c674e5c07106b89d77525644fa067c6c4bcd-merged.mount: Deactivated successfully.
Nov 22 09:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:08 compute-0 podman[315260]: 2025-11-22 09:19:08.13781281 +0000 UTC m=+0.121744019 container cleanup d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:19:08 compute-0 systemd[1]: libpod-conmon-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope: Deactivated successfully.
Nov 22 09:19:08 compute-0 podman[315310]: 2025-11-22 09:19:08.221660587 +0000 UTC m=+0.054630729 container remove d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d604c4e1-69cf-45af-9506-aadb75dedd7d]: (4, ('Sat Nov 22 09:19:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 (d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324)\nd237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324\nSat Nov 22 09:19:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 (d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324)\nd237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.234 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17604fcb-25f5-4bbe-ba6f-86ce65d0d5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.235 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf1599cd-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 kernel: tapaf1599cd-90: left promiscuous mode
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.261 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ddb2f0-3091-4d7b-839f-344258415d32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d2f952-d0df-4282-81c7-c3d29c8c84b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.278 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01f67cf9-06c3-462d-8e83-8ef825c153ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df7d101e-fe84-467b-ae89-1969c2f6c63c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597645, 'reachable_time': 22521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315326, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.303 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.304 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[68d09468-7b28-463d-b7fc-56208a60db57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:08 compute-0 systemd[1]: run-netns-ovnmeta\x2daf1599cd\x2d9805\x2d40cb\x2d9d20\x2ded7982b07412.mount: Deactivated successfully.
Nov 22 09:19:08 compute-0 ceph-mon[75021]: pgmap v1648: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 279 op/s
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.575 253665 INFO nova.virt.libvirt.driver [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deleting instance files /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3_del
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.576 253665 INFO nova.virt.libvirt.driver [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deletion of /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3_del complete
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.621 253665 INFO nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.622 253665 DEBUG oslo.service.loopingcall [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.623 253665 DEBUG nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.623 253665 DEBUG nova.network.neutron [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.696 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.697 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.697 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 WARNING nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received unexpected event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with vm_state active and task_state None.
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.701 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.701 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:08 compute-0 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 WARNING nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received unexpected event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with vm_state active and task_state deleting.
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.098 253665 INFO nova.compute.manager [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Unpausing
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.099 253665 DEBUG nova.objects.instance [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.134 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803149.1339312, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.134 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:19:09 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.143 253665 DEBUG nova.virt.libvirt.guest [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.143 253665 DEBUG nova.compute.manager [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.153 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.156 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.189 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.190 253665 INFO nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Terminating instance
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.190 253665 DEBUG nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 22 09:19:09 compute-0 kernel: tap88cfebb7-b5 (unregistering): left promiscuous mode
Nov 22 09:19:09 compute-0 NetworkManager[48920]: <info>  [1763803149.2360] device (tap88cfebb7-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:09 compute-0 ovn_controller[152872]: 2025-11-22T09:19:09Z|00525|binding|INFO|Releasing lport 88cfebb7-b545-4137-8094-3fa68a13f42b from this chassis (sb_readonly=0)
Nov 22 09:19:09 compute-0 ovn_controller[152872]: 2025-11-22T09:19:09Z|00526|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b down in Southbound
Nov 22 09:19:09 compute-0 ovn_controller[152872]: 2025-11-22T09:19:09Z|00527|binding|INFO|Removing iface tap88cfebb7-b5 ovn-installed in OVS
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8a:3b 10.100.0.10'], port_security=['fa:16:3e:54:8a:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '045614f9-cfb4-4a52-996e-e880cbdf7dcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88cfebb7-b545-4137-8094-3fa68a13f42b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.258 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88cfebb7-b545-4137-8094-3fa68a13f42b in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.264 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:09 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.288 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb67f76-635a-41ac-9752-9a2fdfbc7637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 5.088s CPU time.
Nov 22 09:19:09 compute-0 systemd-machined[215941]: Machine qemu-62-instance-00000036 terminated.
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.330 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[efa6b128-0b71-4c8e-b62f-5bfef6110837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.334 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84f13511-5753-4219-9bd0-807804064b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9495b50b-4eff-4ee4-b467-42871806381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.5 MiB/s wr, 396 op/s
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b21bf68-b360-4ed2-8fea-ec8c14c028f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 36494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315338, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.429 253665 DEBUG nova.network.neutron [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.432 253665 INFO nova.virt.libvirt.driver [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance destroyed successfully.
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.433 253665 DEBUG nova.objects.instance [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.449 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f87fea75-90ea-4e80-870f-6e495fb8c587]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315348, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315348, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.452 253665 DEBUG nova.virt.libvirt.vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:05Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.453 253665 DEBUG nova.network.os_vif_util [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.454 253665 DEBUG nova.network.os_vif_util [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.455 253665 DEBUG os_vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.457 253665 INFO nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 0.83 seconds to deallocate network for instance.
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88cfebb7-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.467 253665 INFO os_vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5')
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.472 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.472 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.473 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.473 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.517 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.518 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.678 253665 DEBUG oslo_concurrency.processutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.970 253665 INFO nova.virt.libvirt.driver [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deleting instance files /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd_del
Nov 22 09:19:09 compute-0 nova_compute[253661]: 2025-11-22 09:19:09.971 253665 INFO nova.virt.libvirt.driver [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deletion of /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd_del complete
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.053 253665 INFO nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.053 253665 DEBUG oslo.service.loopingcall [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.054 253665 DEBUG nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.054 253665 DEBUG nova.network.neutron [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070905071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.231 253665 DEBUG oslo_concurrency.processutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.237 253665 DEBUG nova.compute.provider_tree [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.251 253665 DEBUG nova.scheduler.client.report [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.276 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.304 253665 INFO nova.scheduler.client.report [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Deleted allocations for instance dec3a0c0-4e66-47fb-845c-42748f871bd3
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.389 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:10 compute-0 ceph-mon[75021]: pgmap v1649: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.5 MiB/s wr, 396 op/s
Nov 22 09:19:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4070905071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.763 253665 DEBUG nova.network.neutron [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.789 253665 INFO nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 0.73 seconds to deallocate network for instance.
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.839 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.840 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-deleted-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 WARNING nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state deleted and task_state None.
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 WARNING nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state deleted and task_state None.
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.943 253665 DEBUG oslo_concurrency.processutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:10 compute-0 nova_compute[253661]: 2025-11-22 09:19:10.987 253665 DEBUG nova.compute.manager [req-ed7bfede-62e6-480b-8614-aa7b702c562b req-cd6e4fb3-5385-4ae3-9411-3c037c90df89 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-deleted-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.250 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:19:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 361 op/s
Nov 22 09:19:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414933995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.410 253665 DEBUG oslo_concurrency.processutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.415 253665 DEBUG nova.compute.provider_tree [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.433 253665 DEBUG nova.scheduler.client.report [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.453 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1414933995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.489 253665 INFO nova.scheduler.client.report [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance 045614f9-cfb4-4a52-996e-e880cbdf7dcd
Nov 22 09:19:11 compute-0 nova_compute[253661]: 2025-11-22 09:19:11.560 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:12 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 22 09:19:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:19:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:19:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:19:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:19:12 compute-0 ceph-mon[75021]: pgmap v1650: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 361 op/s
Nov 22 09:19:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:19:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:19:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 299 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.0 MiB/s wr, 486 op/s
Nov 22 09:19:13 compute-0 ovn_controller[152872]: 2025-11-22T09:19:13Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 09:19:13 compute-0 ovn_controller[152872]: 2025-11-22T09:19:13Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 09:19:14 compute-0 nova_compute[253661]: 2025-11-22 09:19:14.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:14 compute-0 ceph-mon[75021]: pgmap v1651: 305 pgs: 305 active+clean; 299 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.0 MiB/s wr, 486 op/s
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.381 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.382 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.382 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.383 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.383 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.384 253665 INFO nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Terminating instance
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.385 253665 DEBUG nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 285 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 4.8 MiB/s wr, 500 op/s
Nov 22 09:19:15 compute-0 kernel: tap7612be10-c2 (unregistering): left promiscuous mode
Nov 22 09:19:15 compute-0 NetworkManager[48920]: <info>  [1763803155.4257] device (tap7612be10-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00528|binding|INFO|Releasing lport 7612be10-c22f-4d60-89f7-232e865b6524 from this chassis (sb_readonly=0)
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00529|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 down in Southbound
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00530|binding|INFO|Removing iface tap7612be10-c2 ovn-installed in OVS
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.443 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:cc:cd 10.100.0.9'], port_security=['fa:16:3e:b3:cc:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4a5932d-6547-4c01-9c71-0907c65247a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7612be10-c22f-4d60-89f7-232e865b6524) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7612be10-c22f-4d60-89f7-232e865b6524 in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.445 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.475 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a08bc5f5-3c16-45e2-aadd-1011a6634572]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 22 09:19:15 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Consumed 9.091s CPU time.
Nov 22 09:19:15 compute-0 systemd-machined[215941]: Machine qemu-63-instance-00000037 terminated.
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.512 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.512 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.513 253665 INFO nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Terminating instance
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.514 253665 DEBUG nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.523 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63b5f52d-10de-46fa-86ae-d987e455fb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.526 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3adb0aae-6c63-4e51-9e9f-c6b382f94f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.553 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[aac76173-fe44-49ac-bdbe-bc395f0713c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 kernel: tap11f926cd-f7 (unregistering): left promiscuous mode
Nov 22 09:19:15 compute-0 NetworkManager[48920]: <info>  [1763803155.5629] device (tap11f926cd-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[52f13c03-8998-40c8-bafb-a750963cd18b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 36494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315429, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00531|binding|INFO|Releasing lport 11f926cd-f731-4de9-861e-5842f91f48df from this chassis (sb_readonly=0)
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00532|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df down in Southbound
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00533|binding|INFO|Removing iface tap11f926cd-f7 ovn-installed in OVS
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.585 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:54:ea 10.100.0.5'], port_security=['fa:16:3e:41:54:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64142c1c-95e0-4db4-b743-bb94c85a208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=11f926cd-f731-4de9-861e-5842f91f48df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8f4367-7d1a-4d00-96b3-3469518e9270]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315434, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315434, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.598 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 22 09:19:15 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 8.215s CPU time.
Nov 22 09:19:15 compute-0 systemd-machined[215941]: Machine qemu-64-instance-00000038 terminated.
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.618 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 11f926cd-f731-4de9-861e-5842f91f48df in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.620 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6001c81-6c53-4678-b8e8-39c35706be23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[798ab1ca-0188-4085-9295-709ea2a88134]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.622 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 namespace which is not needed anymore
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.627 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance destroyed successfully.
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.627 253665 DEBUG nova.objects.instance [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.638 253665 DEBUG nova.virt.libvirt.vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-22T09:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:07Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.638 253665 DEBUG nova.network.os_vif_util [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.639 253665 DEBUG nova.network.os_vif_util [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.640 253665 DEBUG os_vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.642 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7612be10-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.649 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.649 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.651 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.654 253665 INFO os_vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2')
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.742 253665 INFO nova.virt.libvirt.driver [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance destroyed successfully.
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.743 253665 DEBUG nova.objects.instance [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.757 253665 DEBUG nova.virt.libvirt.vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-22T09:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:07Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.758 253665 DEBUG nova.network.os_vif_util [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.759 253665 DEBUG nova.network.os_vif_util [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.759 253665 DEBUG os_vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.761 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.761 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11f926cd-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.767 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.769 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.769 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.770 253665 INFO os_vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7')
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00534|binding|INFO|Releasing lport a0af7d9b-5116-431a-ad00-3df7641dc72f from this chassis (sb_readonly=0)
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00535|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:19:15 compute-0 ovn_controller[152872]: 2025-11-22T09:19:15Z|00536|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:19:15 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:15 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:15 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [WARNING]  (314926) : Exiting Master process...
Nov 22 09:19:15 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [ALERT]    (314926) : Current worker (314928) exited with code 143 (Terminated)
Nov 22 09:19:15 compute-0 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [WARNING]  (314926) : All workers exited. Exiting... (0)
Nov 22 09:19:15 compute-0 systemd[1]: libpod-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope: Deactivated successfully.
Nov 22 09:19:15 compute-0 podman[315484]: 2025-11-22 09:19:15.795971619 +0000 UTC m=+0.062283914 container died ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aea65242036820cb6858ded2c2732f7a57b937b65a8c1c3d87746de9de9af3b-merged.mount: Deactivated successfully.
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 podman[315484]: 2025-11-22 09:19:15.865107069 +0000 UTC m=+0.131419364 container cleanup ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:15 compute-0 systemd[1]: libpod-conmon-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope: Deactivated successfully.
Nov 22 09:19:15 compute-0 podman[315543]: 2025-11-22 09:19:15.948893445 +0000 UTC m=+0.057788115 container remove ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae1a867-6cf1-4bc5-9628-ffef9f867218]: (4, ('Sat Nov 22 09:19:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 (ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9)\nee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9\nSat Nov 22 09:19:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 (ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9)\nee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.957 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a70f84-2d5c-47c2-a44c-574eb3b289e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.958 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 kernel: tapf6001c81-60: left promiscuous mode
Nov 22 09:19:15 compute-0 nova_compute[253661]: 2025-11-22 09:19:15.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9e2a0a-1a8d-40eb-9877-2d3a52328e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c503a568-a95b-46d6-9936-a43ad13b13c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[873f4760-909b-4827-b6bc-5bc8dd075b81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08278100-ec31-4231-a982-0decbfc53a53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599474, 'reachable_time': 27445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315559, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:16 compute-0 systemd[1]: run-netns-ovnmeta\x2df6001c81\x2d6c53\x2d4678\x2db8e8\x2d39c35706be23.mount: Deactivated successfully.
Nov 22 09:19:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.019 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.019 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[714b9bdd-4bb1-48f9-8778-63de7d790e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.194 253665 INFO nova.virt.libvirt.driver [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deleting instance files /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1_del
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.195 253665 INFO nova.virt.libvirt.driver [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deletion of /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1_del complete
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.241 253665 INFO nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.242 253665 DEBUG oslo.service.loopingcall [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.243 253665 DEBUG nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.243 253665 DEBUG nova.network.neutron [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.291 253665 INFO nova.virt.libvirt.driver [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deleting instance files /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f_del
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.292 253665 INFO nova.virt.libvirt.driver [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deletion of /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f_del complete
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.351 253665 INFO nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG oslo.service.loopingcall [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:16 compute-0 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG nova.network.neutron [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:16 compute-0 ceph-mon[75021]: pgmap v1652: 305 pgs: 305 active+clean; 285 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 4.8 MiB/s wr, 500 op/s
Nov 22 09:19:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 257 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.1 MiB/s wr, 407 op/s
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.248 253665 DEBUG nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.249 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.250 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.250 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.251 253665 DEBUG nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.251 253665 WARNING nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received unexpected event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with vm_state active and task_state deleting.
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.324 253665 DEBUG nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.324 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 WARNING nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received unexpected event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with vm_state active and task_state deleting.
Nov 22 09:19:18 compute-0 ceph-mon[75021]: pgmap v1653: 305 pgs: 305 active+clean; 257 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.1 MiB/s wr, 407 op/s
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.884 253665 DEBUG nova.network.neutron [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.901 253665 INFO nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 2.55 seconds to deallocate network for instance.
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.943 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:18 compute-0 nova_compute[253661]: 2025-11-22 09:19:18.944 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.026 253665 DEBUG nova.network.neutron [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.040 253665 INFO nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 2.80 seconds to deallocate network for instance.
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.057 253665 DEBUG oslo_concurrency.processutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.097 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 413 op/s
Nov 22 09:19:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399072842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.573 253665 DEBUG oslo_concurrency.processutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.580 253665 DEBUG nova.compute.provider_tree [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.594 253665 DEBUG nova.scheduler.client.report [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.617 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.620 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.677 253665 INFO nova.scheduler.client.report [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance 64142c1c-95e0-4db4-b743-bb94c85a208f
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.724 253665 DEBUG oslo_concurrency.processutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:19 compute-0 nova_compute[253661]: 2025-11-22 09:19:19.760 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825166211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.187 253665 DEBUG oslo_concurrency.processutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.193 253665 DEBUG nova.compute.provider_tree [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.206 253665 DEBUG nova.scheduler.client.report [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.222 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.248 253665 INFO nova.scheduler.client.report [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance b4a5932d-6547-4c01-9c71-0907c65247a1
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.337 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:20 compute-0 ovn_controller[152872]: 2025-11-22T09:19:20Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.429 253665 DEBUG nova.compute.manager [req-26f570e4-da80-4871-b3a9-9be8d4d4d623 req-1b838af3-fbaa-479d-a88c-5df653826789 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-deleted-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.430 253665 DEBUG nova.compute.manager [req-3469098a-c9ad-401a-a91d-4401cd10105d req-f95f8c14-e27a-48bb-835e-c5e4c7174e3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-deleted-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:20 compute-0 ceph-mon[75021]: pgmap v1654: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 413 op/s
Nov 22 09:19:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3399072842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1825166211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:20 compute-0 nova_compute[253661]: 2025-11-22 09:19:20.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:21 compute-0 nova_compute[253661]: 2025-11-22 09:19:21.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Nov 22 09:19:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:22 compute-0 nova_compute[253661]: 2025-11-22 09:19:22.297 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:19:22 compute-0 ceph-mon[75021]: pgmap v1655: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:23 compute-0 nova_compute[253661]: 2025-11-22 09:19:23.062 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803148.0608265, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:23 compute-0 nova_compute[253661]: 2025-11-22 09:19:23.062 253665 INFO nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Stopped (Lifecycle Event)
Nov 22 09:19:23 compute-0 nova_compute[253661]: 2025-11-22 09:19:23.081 253665 DEBUG nova.compute.manager [None req-a39fc479-51cd-47bf-a80b-b2c957638f21 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 321 op/s
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.424 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803149.422696, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.424 253665 INFO nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Stopped (Lifecycle Event)
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.435 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.436 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.436 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.452 253665 DEBUG nova.compute.manager [None req-ebec4ebf-1989-4978-a43c-7e797b5ab535 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:24 compute-0 ovn_controller[152872]: 2025-11-22T09:19:24Z|00537|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:19:24 compute-0 ovn_controller[152872]: 2025-11-22T09:19:24Z|00538|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:19:24 compute-0 ceph-mon[75021]: pgmap v1656: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 321 op/s
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 kernel: tap8d030734-0e (unregistering): left promiscuous mode
Nov 22 09:19:24 compute-0 NetworkManager[48920]: <info>  [1763803164.6047] device (tap8d030734-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 ovn_controller[152872]: 2025-11-22T09:19:24Z|00539|binding|INFO|Releasing lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea from this chassis (sb_readonly=0)
Nov 22 09:19:24 compute-0 ovn_controller[152872]: 2025-11-22T09:19:24Z|00540|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea down in Southbound
Nov 22 09:19:24 compute-0 ovn_controller[152872]: 2025-11-22T09:19:24Z|00541|binding|INFO|Removing iface tap8d030734-0e ovn-installed in OVS
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.625 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:66:f7 10.100.0.10'], port_security=['fa:16:3e:8b:66:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2f0d9dce-1900-41c4-9b69-7e46f34dde81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d030734-0e50-4fca-a432-cc2d1c2c9dea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.627 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d030734-0e50-4fca-a432-cc2d1c2c9dea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.628 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.630 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[781d85fe-750b-410b-af5f-be0899059fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.631 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 22 09:19:24 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 14.643s CPU time.
Nov 22 09:19:24 compute-0 systemd-machined[215941]: Machine qemu-60-instance-00000035 terminated.
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : Exiting Master process...
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : Exiting Master process...
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [ALERT]    (314299) : Current worker (314301) exited with code 143 (Terminated)
Nov 22 09:19:24 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : All workers exited. Exiting... (0)
Nov 22 09:19:24 compute-0 systemd[1]: libpod-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope: Deactivated successfully.
Nov 22 09:19:24 compute-0 podman[315627]: 2025-11-22 09:19:24.806018768 +0000 UTC m=+0.052861555 container died 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e78995b3caad89fb07d376941b58acfb371f548ff3ea9aaf066112a811b999c-merged.mount: Deactivated successfully.
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 podman[315627]: 2025-11-22 09:19:24.856414323 +0000 UTC m=+0.103257070 container cleanup 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:19:24 compute-0 systemd[1]: libpod-conmon-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope: Deactivated successfully.
Nov 22 09:19:24 compute-0 podman[315667]: 2025-11-22 09:19:24.932910182 +0000 UTC m=+0.048417398 container remove 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.941 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a978dffd-255e-4af8-a31c-a524a74d6aa2]: (4, ('Sat Nov 22 09:19:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19)\n0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19\nSat Nov 22 09:19:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19)\n0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[89796764-3892-4924-afc5-5fbb15981bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.945 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 09:19:24 compute-0 nova_compute[253661]: 2025-11-22 09:19:24.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.970 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a873c037-e934-4e4f-aea8-de465e703e51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.993 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eca22bb1-21af-4abc-8b52-e5451b056aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec252189-6ad4-4595-84e3-46cfc282d99b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acd85d4e-66d5-4c3e-9e33-aa2a3e9bebe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598972, 'reachable_time': 20149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315687, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 09:19:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.016 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.016 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[277d640e-63ea-4da3-89d6-59a3b92ca09d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 DEBUG nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 WARNING nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state active and task_state powering-off.
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.314 253665 INFO nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance shutdown successfully after 24 seconds.
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.319 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance destroyed successfully.
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.319 253665 DEBUG nova.objects.instance [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'numa_topology' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.330 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.371 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 215 op/s
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.969 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.989 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:19:25 compute-0 nova_compute[253661]: 2025-11-22 09:19:25.990 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.266 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.307 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:26 compute-0 podman[315688]: 2025-11-22 09:19:26.383688813 +0000 UTC m=+0.061088056 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 09:19:26 compute-0 podman[315690]: 2025-11-22 09:19:26.416327446 +0000 UTC m=+0.090842849 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:19:26 compute-0 ceph-mon[75021]: pgmap v1657: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 215 op/s
Nov 22 09:19:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984988463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.801816) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166801867, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 259, "total_data_size": 873398, "memory_usage": 887832, "flush_reason": "Manual Compaction"}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166811209, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 864208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33041, "largest_seqno": 33817, "table_properties": {"data_size": 860269, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9443, "raw_average_key_size": 19, "raw_value_size": 852003, "raw_average_value_size": 1771, "num_data_blocks": 73, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803114, "oldest_key_time": 1763803114, "file_creation_time": 1763803166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9432 microseconds, and 4483 cpu microseconds.
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.811250) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 864208 bytes OK
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.811275) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812909) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812926) EVENT_LOG_v1 {"time_micros": 1763803166812919, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 869368, prev total WAL file size 869368, number of live WAL files 2.
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.813620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(843KB)], [71(8668KB)]
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166813700, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9740395, "oldest_snapshot_seqno": -1}
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.860 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.860 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5756 keys, 9631814 bytes, temperature: kUnknown
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166886559, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9631814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9590700, "index_size": 25644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 145743, "raw_average_key_size": 25, "raw_value_size": 9484453, "raw_average_value_size": 1647, "num_data_blocks": 1042, "num_entries": 5756, "num_filter_entries": 5756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.886893) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9631814 bytes
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.888551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 132.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(22.4) write-amplify(11.1) OK, records in: 6291, records dropped: 535 output_compression: NoCompression
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.888570) EVENT_LOG_v1 {"time_micros": 1763803166888561, "job": 40, "event": "compaction_finished", "compaction_time_micros": 72970, "compaction_time_cpu_micros": 33065, "output_level": 6, "num_output_files": 1, "total_output_size": 9631814, "num_input_records": 6291, "num_output_records": 5756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166888845, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166890688, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.813524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.977 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.979 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.980 253665 INFO nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Terminating instance
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.981 253665 DEBUG nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.986 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance destroyed successfully.
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.987 253665 DEBUG nova.objects.instance [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.998 253665 DEBUG nova.virt.libvirt.vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:25Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:26 compute-0 nova_compute[253661]: 2025-11-22 09:19:26.999 253665 DEBUG nova.network.os_vif_util [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.000 253665 DEBUG nova.network.os_vif_util [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.000 253665 DEBUG os_vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.003 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d030734-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.009 253665 INFO os_vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e')
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.084 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3844MB free_disk=59.89710998535156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2f0d9dce-1900-41c4-9b69-7e46f34dde81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.243 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 594 KiB/s wr, 126 op/s
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.452 253665 DEBUG nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.453 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 WARNING nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state stopped and task_state deleting.
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.523 253665 INFO nova.virt.libvirt.driver [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deleting instance files /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81_del
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.524 253665 INFO nova.virt.libvirt.driver [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deletion of /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81_del complete
Nov 22 09:19:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1984988463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.595 253665 INFO nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 0.61 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG oslo.service.loopingcall [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG nova.network.neutron [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422326734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.723 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.730 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.744 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.790 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:19:27 compute-0 nova_compute[253661]: 2025-11-22 09:19:27.790 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.964 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.436 253665 DEBUG nova.network.neutron [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.451 253665 INFO nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 0.85 seconds to deallocate network for instance.
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.502 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.503 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:28 compute-0 ceph-mon[75021]: pgmap v1658: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 594 KiB/s wr, 126 op/s
Nov 22 09:19:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3422326734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.576 253665 DEBUG oslo_concurrency.processutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.790 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.791 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.792 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.834 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.835 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.835 253665 INFO nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Rebooting instance
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.849 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.849 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:28 compute-0 nova_compute[253661]: 2025-11-22 09:19:28.850 253665 DEBUG nova.network.neutron [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364095659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.103 253665 DEBUG oslo_concurrency.processutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.109 253665 DEBUG nova.compute.provider_tree [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.129 253665 DEBUG nova.scheduler.client.report [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.151 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.222 253665 INFO nova.scheduler.client.report [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 2f0d9dce-1900-41c4-9b69-7e46f34dde81
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.343 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 574 KiB/s rd, 95 KiB/s wr, 105 op/s
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.517 253665 DEBUG nova.compute.manager [req-e4191d2d-dc55-4ba9-8a27-471b43dca9e7 req-c7632a6c-1846-42f4-9c60-7220840053c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-deleted-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1364095659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.614 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.615 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.639 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.709 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.710 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.718 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.718 253665 INFO nova.compute.claims [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:19:29 compute-0 nova_compute[253661]: 2025-11-22 09:19:29.836 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.195 253665 DEBUG nova.network.neutron [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.209 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.212 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:19:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394375110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.321 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.328 253665 DEBUG nova.compute.provider_tree [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.343 253665 DEBUG nova.scheduler.client.report [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.385 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.386 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:19:30 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:19:30 compute-0 podman[315831]: 2025-11-22 09:19:30.408100119 +0000 UTC m=+0.094558638 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:19:30 compute-0 NetworkManager[48920]: <info>  [1763803170.4135] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:30 compute-0 ovn_controller[152872]: 2025-11-22T09:19:30Z|00542|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:19:30 compute-0 ovn_controller[152872]: 2025-11-22T09:19:30Z|00543|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:19:30 compute-0 ovn_controller[152872]: 2025-11-22T09:19:30Z|00544|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.443 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.445 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.446 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21334c29-b559-4501-b498-2db17d5541da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:19:30 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:19:30 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Consumed 14.616s CPU time.
Nov 22 09:19:30 compute-0 systemd-machined[215941]: Machine qemu-61-instance-00000032 terminated.
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.476 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.502 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.547 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:19:30 compute-0 ceph-mon[75021]: pgmap v1659: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 574 KiB/s rd, 95 KiB/s wr, 105 op/s
Nov 22 09:19:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3394375110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:30 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:30 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:30 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [WARNING]  (314693) : Exiting Master process...
Nov 22 09:19:30 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [ALERT]    (314693) : Current worker (314695) exited with code 143 (Terminated)
Nov 22 09:19:30 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [WARNING]  (314693) : All workers exited. Exiting... (0)
Nov 22 09:19:30 compute-0 systemd[1]: libpod-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope: Deactivated successfully.
Nov 22 09:19:30 compute-0 podman[315884]: 2025-11-22 09:19:30.603073457 +0000 UTC m=+0.052954468 container died 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.608 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.608 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803155.620987, b4a5932d-6547-4c01-9c71-0907c65247a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.622 253665 INFO nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Stopped (Lifecycle Event)
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.625 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.625 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.627 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.627 253665 DEBUG os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.629 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.646 253665 DEBUG nova.compute.manager [None req-743c4d9e-bfae-463c-9751-4b1771437bf2 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.683 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1467195d1769cb3d2de1fc7c53fe46b5d8e844ec9d0e75fe4e8f0d6486282a0-merged.mount: Deactivated successfully.
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.685 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.686 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating image(s)
Nov 22 09:19:30 compute-0 podman[315884]: 2025-11-22 09:19:30.691226038 +0000 UTC m=+0.141107039 container cleanup 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.710 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:30 compute-0 systemd[1]: libpod-conmon-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope: Deactivated successfully.
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.744 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.775 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:30 compute-0 podman[315936]: 2025-11-22 09:19:30.77606014 +0000 UTC m=+0.054414353 container remove 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.783 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeb4b18-a884-460d-9667-402d8768cbe9]: (4, ('Sat Nov 22 09:19:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79)\n96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79\nSat Nov 22 09:19:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79)\n96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.788 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c512257-7d56-41d1-9686-9e4cba08f98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.790 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:30 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b716c5-eb11-44df-845d-4b9dc6d7ad2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94fcc511-46d3-416e-b4cb-788714e893e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.831 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd87446-d9a2-4af4-bf10-cd8f2a439f8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.831 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803155.7391326, 64142c1c-95e0-4db4-b743-bb94c85a208f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.832 253665 INFO nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Stopped (Lifecycle Event)
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.837 253665 INFO os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.846 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.851 253665 DEBUG nova.compute.manager [None req-96879638-cbb2-44b6-9172-bcf0d327e678 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.854 253665 WARNING nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.854 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82425e75-4126-44e2-94c0-bbad3c8bcffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599347, 'reachable_time': 39752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315991, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.859 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.860 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.859 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d62379-52d3-4bdc-9e51-3065f9cee6a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.861 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.865 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.865 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.869 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.880 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.880 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.881 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.881 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.903 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.907 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:30 compute-0 nova_compute[253661]: 2025-11-22 09:19:30.964 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.292 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.369 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] resizing rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 51 KiB/s wr, 57 op/s
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.475 253665 DEBUG nova.objects.instance [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'migration_context' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.487 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.488 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Ensure instance console log exists: /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.488 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.489 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.489 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.490 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.494 253665 WARNING nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.499 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.499 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.502 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.502 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.503 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.503 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.506 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.509 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985709953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3985709953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.590 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.630 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.690 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.691 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.712 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.796 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.797 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.809 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.809 253665 INFO nova.compute.claims [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:19:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616682928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:31 compute-0 nova_compute[253661]: 2025-11-22 09:19:31.970 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.010 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.032 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.035 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989007554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.117 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.119 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.120 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.121 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.122 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.144 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <name>instance-00000032</name>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:30</nova:creationTime>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 09:19:32 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:38:8e"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <target dev="tapa288a5e5-7b"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:32 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.146 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.146 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.147 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.147 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.148 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.148 253665 DEBUG os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.150 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.1571] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.165 253665 INFO os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:19:32 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:19:32 compute-0 systemd-udevd[315864]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.2683] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Nov 22 09:19:32 compute-0 ovn_controller[152872]: 2025-11-22T09:19:32Z|00545|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:19:32 compute-0 ovn_controller[152872]: 2025-11-22T09:19:32Z|00546|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.2834] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.2846] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:32 compute-0 ovn_controller[152872]: 2025-11-22T09:19:32Z|00547|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 ovn_controller[152872]: 2025-11-22T09:19:32Z|00548|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.305 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.307 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.309 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8365356-4b5d-4ce5-8e4b-8d1623236150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.330 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.332 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.332 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa3c4ca-bf94-4812-9649-fa270e2ba21c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.333 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea59460f-20b2-49e5-9f49-ccec20ddc569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 systemd-machined[215941]: New machine qemu-65-instance-00000032.
Nov 22 09:19:32 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000032.
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.353 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1a2c49-372a-492d-b327-8d7261ee75a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[704e5cad-104c-4dbe-9e50-598a160ae8e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.415 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0dd5031-af71-414f-89d7-6eba38b463e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.4252] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d22b42bc-5ec8-4875-a6a6-79fc6bb5f22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839126014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.468 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[50b5f28d-b67e-4770-a183-577b5f2e31db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.472 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3acea2-fafe-40b7-b47b-b1bcd3d4032f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928460180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.498 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.5068] device (tapebc42408-70): carrier: link connected
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abcf9551-b239-4f3f-9a8f-bf781ca116e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.514 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.515 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.524 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.526 253665 DEBUG nova.objects.instance [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.528 253665 DEBUG nova.compute.provider_tree [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.536 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4454b320-4d14-42e8-940a-d0fd367175c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316294, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.543 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.548 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <uuid>18709ea6-4d81-4329-8bbc-2d62e5344ef5</uuid>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <name>instance-00000039</name>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersAaction247Test-server-673972960</nova:name>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:31</nova:creationTime>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:user uuid="7a7411d90d324909b49644bc8eef8e0f">tempest-ServersAaction247Test-1280298349-project-member</nova:user>
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <nova:project uuid="ef6ee0eea0834ec7853091ff44562661">tempest-ServersAaction247Test-1280298349</nova:project>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="serial">18709ea6-4d81-4329-8bbc-2d62e5344ef5</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="uuid">18709ea6-4d81-4329-8bbc-2d62e5344ef5</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/console.log" append="off"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:32 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:32 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.554 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a78d45ee-843e-4dff-ba51-2552775b29a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602324, 'tstamp': 602324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316295, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.559 253665 DEBUG nova.scheduler.client.report [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cad1a03-e531-488e-92e2-343fa53aab42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316296, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ceph-mon[75021]: pgmap v1660: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 51 KiB/s wr, 57 op/s
Nov 22 09:19:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3616682928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/989007554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/839126014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3928460180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa433f32-2306-4da9-9c9d-1d486b2232f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.611 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.612 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.618 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.618 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.619 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Using config drive
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.645 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[172d980e-e8ea-41a1-af9a-5e0fc4e150b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.674 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:19:32 compute-0 NetworkManager[48920]: <info>  [1763803172.6774] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:32 compute-0 ovn_controller[152872]: 2025-11-22T09:19:32Z|00549|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.689 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.689 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.702 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[162b28d7-18cf-4900-88a8-4e7076f8e414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.705 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.707 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.709 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.709 253665 INFO nova.compute.claims [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.731 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.777 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.907 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.908 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.909 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating image(s)
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.936 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.963 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:32 compute-0 nova_compute[253661]: 2025-11-22 09:19:32.996 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.003 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.056 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.107 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.109 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.110 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.111 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.141 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.147 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:33 compute-0 podman[316442]: 2025-11-22 09:19:33.153833936 +0000 UTC m=+0.078555300 container create 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.198 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating config drive at /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config
Nov 22 09:19:33 compute-0 podman[316442]: 2025-11-22 09:19:33.107171802 +0000 UTC m=+0.031893076 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.203 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy8c3aqfo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:33 compute-0 systemd[1]: Started libpod-conmon-719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9.scope.
Nov 22 09:19:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e741de3208f7891e80bf1534293c26fdb1a7bf352ddd9a294c705d9b5426cd1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:19:33 compute-0 podman[316442]: 2025-11-22 09:19:33.258230613 +0000 UTC m=+0.182951897 container init 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.257 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.259 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803173.1139796, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.262 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:33 compute-0 podman[316442]: 2025-11-22 09:19:33.265413947 +0000 UTC m=+0.190135211 container start 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.267 253665 DEBUG nova.policy [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5352d2182544454aab03bd4a74160247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.281 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.282 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.286 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : New worker (316526) forked
Nov 22 09:19:33 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : Loading success.
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.299 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.327 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803173.114166, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.330 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.355 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.371 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy8c3aqfo" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 1.1 MiB/s wr, 93 op/s
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.416 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.428 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.531 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444495597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1444495597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.613 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.616 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.616 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deleting local config drive /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config because it was imported into RBD.
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.630 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.681 253665 DEBUG nova.compute.provider_tree [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.694 253665 DEBUG nova.scheduler.client.report [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:33 compute-0 systemd-machined[215941]: New machine qemu-66-instance-00000039.
Nov 22 09:19:33 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000039.
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.758 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.767 253665 DEBUG nova.objects.instance [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.781 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.781 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ensure instance console log exists: /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.807 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.809 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.824 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.840 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.920 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.922 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.922 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating image(s)
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.944 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:33 compute-0 nova_compute[253661]: 2025-11-22 09:19:33.970 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.002 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.009 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.106 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.107 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.108 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.109 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.138 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.143 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.186 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803174.1211925, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.188 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Resumed (Lifecycle Event)
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.192 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.192 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.197 253665 INFO nova.virt.libvirt.driver [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance spawned successfully.
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.198 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.208 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Successfully created port: 2a28300e-6b6b-4513-831f-e30f3694fbcd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.212 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.218 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.224 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.224 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.226 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.226 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.240 253665 DEBUG nova.policy [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803174.1371229, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Started (Lifecycle Event)
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.269 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.274 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.281 253665 INFO nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 3.60 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.282 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.294 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.344 253665 INFO nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 4.65 seconds to build instance.
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.363 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.446 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.501 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.594 253665 DEBUG nova.objects.instance [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:34 compute-0 ceph-mon[75021]: pgmap v1661: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 1.1 MiB/s wr, 93 op/s
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.607 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Ensure instance console log exists: /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.609 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.685 253665 DEBUG nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.687 253665 DEBUG nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:34 compute-0 nova_compute[253661]: 2025-11-22 09:19:34.687 253665 WARNING nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.076 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.112 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Successfully updated port: 2a28300e-6b6b-4513-831f-e30f3694fbcd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.116 253665 INFO nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] instance snapshotting
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.117 253665 DEBUG nova.objects.instance [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'flavor' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.133 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.136 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquired lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.138 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.191 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Successfully created port: dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.205 253665 DEBUG nova.compute.manager [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-changed-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.205 253665 DEBUG nova.compute.manager [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Refreshing instance network info cache due to event network-changed-2a28300e-6b6b-4513-831f-e30f3694fbcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.206 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.224 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.224 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.225 253665 INFO nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Terminating instance
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquired lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.337 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 173 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.423 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.446 253665 INFO nova.virt.libvirt.driver [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Beginning live snapshot process
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.494 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.695 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.708 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Releasing lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.708 253665 DEBUG nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:35 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 22 09:19:35 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Consumed 1.970s CPU time.
Nov 22 09:19:35 compute-0 systemd-machined[215941]: Machine qemu-66-instance-00000039 terminated.
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.933 253665 INFO nova.virt.libvirt.driver [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance destroyed successfully.
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.937 253665 DEBUG nova.objects.instance [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'resources' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:35 compute-0 nova_compute[253661]: 2025-11-22 09:19:35.978 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.124 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Successfully updated port: dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.226 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.244 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Releasing lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.245 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance network_info: |[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.245 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.246 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Refreshing network info cache for port 2a28300e-6b6b-4513-831f-e30f3694fbcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.249 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start _get_guest_xml network_info=[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.254 253665 WARNING nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.267 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.268 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.270 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.278 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.279 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.279 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.280 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.280 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.283 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.283 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.286 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.389 253665 INFO nova.virt.libvirt.driver [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deleting instance files /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5_del
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.391 253665 INFO nova.virt.libvirt.driver [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deletion of /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5_del complete
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.442 253665 INFO nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG oslo.service.loopingcall [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.547 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.559 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.572 253665 INFO nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 0.13 seconds to deallocate network for instance.
Nov 22 09:19:36 compute-0 ceph-mon[75021]: pgmap v1662: 305 pgs: 305 active+clean; 173 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.621 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.622 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568750702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.713 253665 DEBUG oslo_concurrency.processutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.792 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.793 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.794 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.795 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.796 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.797 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.797 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.798 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.799 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.800 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.800 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.801 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.802 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.802 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.803 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.805 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-changed-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.807 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Refreshing instance network info cache due to event network-changed-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.807 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.809 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.845 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:36 compute-0 nova_compute[253661]: 2025-11-22 09:19:36.851 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.192 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.215 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.217 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance network_info: |[{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.218 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.219 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Refreshing network info cache for port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.226 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start _get_guest_xml network_info=[{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083126307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.235 253665 WARNING nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.249 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.251 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.257 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.258 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.259 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.260 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.261 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.262 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.263 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.263 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.264 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.265 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.266 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.266 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.267 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.267 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.275 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905178655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.325 253665 DEBUG oslo_concurrency.processutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.329 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.337 253665 DEBUG nova.virt.libvirt.vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:32Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.338 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.340 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.342 253665 DEBUG nova.objects.instance [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.351 253665 DEBUG nova.compute.provider_tree [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.360 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <uuid>6fc1c0e4-3bd1-44c5-a722-9a30961fc545</uuid>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <name>instance-0000003a</name>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1392829761</nova:name>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:36</nova:creationTime>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <nova:port uuid="2a28300e-6b6b-4513-831f-e30f3694fbcd">
Nov 22 09:19:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="serial">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="uuid">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk">
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config">
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:7c:af:ec"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <target dev="tap2a28300e-6b"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log" append="off"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.369 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Preparing to wait for external event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.370 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.370 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.371 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.372 253665 DEBUG nova.virt.libvirt.vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:32Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.373 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.374 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.375 253665 DEBUG os_vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.383 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.386 253665 DEBUG nova.scheduler.client.report [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.394 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a28300e-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.395 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a28300e-6b, col_values=(('external_ids', {'iface-id': '2a28300e-6b6b-4513-831f-e30f3694fbcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:af:ec', 'vm-uuid': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:37 compute-0 NetworkManager[48920]: <info>  [1763803177.3989] manager: (tap2a28300e-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 22 09:19:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 177 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 148 op/s
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.413 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.419 253665 INFO os_vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.451 253665 INFO nova.scheduler.client.report [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Deleted allocations for instance 18709ea6-4d81-4329-8bbc-2d62e5344ef5
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:7c:af:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.493 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Using config drive
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.513 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.520 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updated VIF entry in instance network info cache for port 2a28300e-6b6b-4513-831f-e30f3694fbcd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.521 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.523 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.546 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/568750702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3083126307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/905178655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.757 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating config drive at /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.763 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmeawbfsl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642242874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.812 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.838 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.842 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.915 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmeawbfsl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.938 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:37 compute-0 nova_compute[253661]: 2025-11-22 09:19:37.941 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.115 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.116 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting local config drive /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config because it was imported into RBD.
Nov 22 09:19:38 compute-0 kernel: tap2a28300e-6b: entered promiscuous mode
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.1741] manager: (tap2a28300e-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 22 09:19:38 compute-0 ovn_controller[152872]: 2025-11-22T09:19:38Z|00550|binding|INFO|Claiming lport 2a28300e-6b6b-4513-831f-e30f3694fbcd for this chassis.
Nov 22 09:19:38 compute-0 ovn_controller[152872]: 2025-11-22T09:19:38Z|00551|binding|INFO|2a28300e-6b6b-4513-831f-e30f3694fbcd: Claiming fa:16:3e:7c:af:ec 10.100.0.12
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.189 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.191 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.195 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_controller[152872]: 2025-11-22T09:19:38Z|00552|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd ovn-installed in OVS
Nov 22 09:19:38 compute-0 ovn_controller[152872]: 2025-11-22T09:19:38Z|00553|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd up in Southbound
Nov 22 09:19:38 compute-0 systemd-udevd[317108]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aea7b08a-d03d-450f-a1b8-360c560c157c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.216 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:38 compute-0 systemd-machined[215941]: New machine qemu-67-instance-0000003a.
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.221 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6e80b8-e8f7-4bd4-9a2e-08dcb9994cb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.224 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3b9901-30e5-4d63-b360-cc3e100d64ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.2312] device (tap2a28300e-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.2322] device (tap2a28300e-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.246 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a98bc613-d36e-4907-b40f-5c70d5d648d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7948337-f483-4b16-9026-6ac9ebc7fef0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.311 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[09a278db-4dc2-4fcf-84a2-d96809225c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 systemd-udevd[317112]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.320 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb52144-d941-47f0-8c16-0eaacedad424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.3216] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Nov 22 09:19:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761882775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.354 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a367c4-dcc4-4916-aae8-79e41989c857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.357 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[134cdd8f-79f4-4dce-9926-197d31693133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.371 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.372 253665 DEBUG nova.virt.libvirt.vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-4
87469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:33Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.373 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.374 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.375 253665 DEBUG nova.objects.instance [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.389 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <uuid>5a489088-2d5b-49b6-8280-e2d86fa4fbf3</uuid>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <name>instance-0000003b</name>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:name>tempest-DeleteServersTestJSON-server-2092993764</nova:name>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:37</nova:creationTime>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <nova:port uuid="dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38">
Nov 22 09:19:38 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="serial">5a489088-2d5b-49b6-8280-e2d86fa4fbf3</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="uuid">5a489088-2d5b-49b6-8280-e2d86fa4fbf3</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk">
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config">
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:61:20:2a"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <target dev="tapdae1e68a-69"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/console.log" append="off"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:38 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:38 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:38 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:38 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:38 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.394 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Preparing to wait for external event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.3966] device (tap01d1bce2-e0): carrier: link connected
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.396 253665 DEBUG nova.virt.libvirt.vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServers
TestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:33Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.396 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.397 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.397 253665 DEBUG os_vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.398 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.399 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.401 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdae1e68a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdae1e68a-69, col_values=(('external_ids', {'iface-id': 'dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:20:2a', 'vm-uuid': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.403 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[972d0c8c-5869-4c62-9b88-25723dbec5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56c888ba-0212-468b-896c-9817affaf82d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602913, 'reachable_time': 31761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317143, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.4467] manager: (tapdae1e68a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efb73dee-0a03-4c6c-ad6f-4ae57660f073]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602913, 'tstamp': 602913}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317144, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.453 253665 INFO os_vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69')
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.474 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13b59599-a7bb-4dd9-9482-8e8b50291ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602913, 'reachable_time': 31761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317146, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30b3996b-f2af-4cda-a66a-d3e87463122d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:61:20:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.524 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Using config drive
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.554 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.615 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63161003-2911-4ac1-b6c5-986200de48d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.619 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 NetworkManager[48920]: <info>  [1763803178.6218] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.628 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_controller[152872]: 2025-11-22T09:19:38Z|00554|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.655 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.660 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5697c4f-3454-40fd-81d3-d44e62cd2b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.662 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.662 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:19:38 compute-0 ceph-mon[75021]: pgmap v1663: 305 pgs: 305 active+clean; 177 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 148 op/s
Nov 22 09:19:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1642242874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/761882775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.971 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updated VIF entry in instance network info cache for port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.972 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.987 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:38 compute-0 nova_compute[253661]: 2025-11-22 09:19:38.998 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating config drive at /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.004 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sr2tzsj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:39 compute-0 podman[317199]: 2025-11-22 09:19:39.109007397 +0000 UTC m=+0.055100540 container create 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.117 253665 DEBUG nova.compute.manager [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.119 253665 DEBUG nova.compute.manager [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Processing event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:19:39 compute-0 systemd[1]: Started libpod-conmon-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.152 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sr2tzsj" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:19:39 compute-0 podman[317199]: 2025-11-22 09:19:39.081645362 +0000 UTC m=+0.027738495 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1241bc236f075e40e2ab81b6d24373f4bf0bae4ad4e81373a81f5c0654f230/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.186 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:39 compute-0 podman[317199]: 2025-11-22 09:19:39.192297351 +0000 UTC m=+0.138390514 container init 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.192 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:39 compute-0 podman[317199]: 2025-11-22 09:19:39.199861464 +0000 UTC m=+0.145954597 container start 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:19:39 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : New worker (317241) forked
Nov 22 09:19:39 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : Loading success.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.366 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.367 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deleting local config drive /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config because it was imported into RBD.
Nov 22 09:19:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 286 op/s
Nov 22 09:19:39 compute-0 kernel: tapdae1e68a-69: entered promiscuous mode
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.4312] manager: (tapdae1e68a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Nov 22 09:19:39 compute-0 systemd-udevd[317137]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:39 compute-0 ovn_controller[152872]: 2025-11-22T09:19:39Z|00555|binding|INFO|Claiming lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for this chassis.
Nov 22 09:19:39 compute-0 ovn_controller[152872]: 2025-11-22T09:19:39Z|00556|binding|INFO|dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38: Claiming fa:16:3e:61:20:2a 10.100.0.3
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:20:2a 10.100.0.3'], port_security=['fa:16:3e:61:20:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.443 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.4487] device (tapdae1e68a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.4500] device (tapdae1e68a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:39 compute-0 ovn_controller[152872]: 2025-11-22T09:19:39Z|00557|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 ovn-installed in OVS
Nov 22 09:19:39 compute-0 ovn_controller[152872]: 2025-11-22T09:19:39Z|00558|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 up in Southbound
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[732f88b6-a03b-466a-bdf5-53953da0e9c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.462 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.465 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.465 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d10a0a3f-7bc8-4ea8-82a9-3b737809bf5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.467 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa698dc-ce24-40f1-9244-32fc2a5f8a51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.480 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e2effba5-a4d5-4e3e-ab17-0aaacc40ce94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 systemd-machined[215941]: New machine qemu-68-instance-0000003b.
Nov 22 09:19:39 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003b.
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.500 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce80eb5-d7b2-4c35-96a8-277dfe310173]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.516 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5152586, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.516 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Started (Lifecycle Event)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.521 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.533 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.544 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance spawned successfully.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.545 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.549 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.548 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44f4c2a5-8910-4ae5-8ef8-da5cd6b94524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.5623] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8d5c56-c6c5-43a6-b84c-0cdb3b3373d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.572 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.573 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5202582, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.573 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Paused (Lifecycle Event)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.590 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.592 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.592 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.596 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.600 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5287998, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.601 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Resumed (Lifecycle Event)
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.604 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8b8de-7b42-4bb4-947c-e9c367084ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d747be05-06ce-4604-8c11-241b3649757a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.625 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.627 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.6323] device (tapd93e3720-b0): carrier: link connected
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.639 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[10d3fb62-165a-4ba6-aaba-8305481b2e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.646 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.653 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f29c8a-610c-4b91-a95b-7a97a4ae573e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603037, 'reachable_time': 32531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317343, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.659 253665 INFO nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 6.75 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.659 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.669 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f659b8b0-d87a-4583-9770-ce1467fe5cce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603037, 'tstamp': 603037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317344, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.695 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[838e92de-e128-4b15-875c-4d63a32809df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603037, 'reachable_time': 32531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317345, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.711 253665 INFO nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 7.93 seconds to build instance.
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.726 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.739 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88e8400d-453f-4386-b46d-fcbbbeca3e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.825 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39ead8b7-4d51-48e8-87d5-f466c8632878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.826 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.826 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.827 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:39 compute-0 NetworkManager[48920]: <info>  [1763803179.8291] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Nov 22 09:19:39 compute-0 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.837 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:39 compute-0 ovn_controller[152872]: 2025-11-22T09:19:39Z|00559|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.850 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803164.8495457, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.851 253665 INFO nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Stopped (Lifecycle Event)
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.851 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.852 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5b488f-2a01-42c0-918a-c210771e33ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.853 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.854 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.867 253665 DEBUG nova.compute.manager [None req-172b08c9-0d42-4898-b55e-9472a2edb81f - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.904 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.9036124, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.904 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Started (Lifecycle Event)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.921 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.925 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.906118, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.926 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Paused (Lifecycle Event)
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.941 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.945 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:39 compute-0 nova_compute[253661]: 2025-11-22 09:19:39.963 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:40 compute-0 podman[317416]: 2025-11-22 09:19:40.288272861 +0000 UTC m=+0.054859423 container create 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:19:40 compute-0 systemd[1]: Started libpod-conmon-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope.
Nov 22 09:19:40 compute-0 podman[317416]: 2025-11-22 09:19:40.262361251 +0000 UTC m=+0.028947833 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:19:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334c5af23dba16e3d478964a076fed62cfc26886b13bd29e456185b8274415e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:19:40 compute-0 podman[317416]: 2025-11-22 09:19:40.378904114 +0000 UTC m=+0.145490696 container init 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:19:40 compute-0 podman[317416]: 2025-11-22 09:19:40.385550205 +0000 UTC m=+0.152136767 container start 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:19:40 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : New worker (317434) forked
Nov 22 09:19:40 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : Loading success.
Nov 22 09:19:40 compute-0 ceph-mon[75021]: pgmap v1664: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 286 op/s
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.945 253665 DEBUG nova.compute.manager [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.945 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG nova.compute.manager [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Processing event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.947 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.956 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803180.951288, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.956 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Resumed (Lifecycle Event)
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.958 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.962 253665 INFO nova.virt.libvirt.driver [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance spawned successfully.
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.962 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.974 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.979 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.982 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.983 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.983 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.984 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.984 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:40 compute-0 nova_compute[253661]: 2025-11-22 09:19:40.985 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.006 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.037 253665 INFO nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 7.12 seconds to spawn the instance on the hypervisor.
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.038 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.094 253665 INFO nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 8.43 seconds to build instance.
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.110 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.213 253665 DEBUG nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.215 253665 DEBUG nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:41 compute-0 nova_compute[253661]: 2025-11-22 09:19:41.215 253665 WARNING nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state None.
Nov 22 09:19:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 276 op/s
Nov 22 09:19:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.480 253665 INFO nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Rebuilding instance
Nov 22 09:19:42 compute-0 ceph-mon[75021]: pgmap v1665: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 276 op/s
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.719 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.735 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.775 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_requests' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.785 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.794 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.809 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.832 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:19:42 compute-0 nova_compute[253661]: 2025-11-22 09:19:42.838 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.277 253665 DEBUG nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.279 253665 DEBUG nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.279 253665 WARNING nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state active and task_state None.
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.380 253665 DEBUG nova.objects.instance [None req-400b75c6-a44d-48fd-ae34-077406dbd5e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.401 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803183.4008892, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.401 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Paused (Lifecycle Event)
Nov 22 09:19:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 5.4 MiB/s wr, 374 op/s
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.416 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.428 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.451 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:19:43 compute-0 kernel: tapdae1e68a-69 (unregistering): left promiscuous mode
Nov 22 09:19:43 compute-0 NetworkManager[48920]: <info>  [1763803183.8516] device (tapdae1e68a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:43 compute-0 ovn_controller[152872]: 2025-11-22T09:19:43Z|00560|binding|INFO|Releasing lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 from this chassis (sb_readonly=0)
Nov 22 09:19:43 compute-0 ovn_controller[152872]: 2025-11-22T09:19:43Z|00561|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 down in Southbound
Nov 22 09:19:43 compute-0 ovn_controller[152872]: 2025-11-22T09:19:43Z|00562|binding|INFO|Removing iface tapdae1e68a-69 ovn-installed in OVS
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.885 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:20:2a 10.100.0.3'], port_security=['fa:16:3e:61:20:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.887 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 09:19:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.889 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5104e2-d9b3-4150-98be-279825dfc605]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 09:19:43 compute-0 nova_compute[253661]: 2025-11-22 09:19:43.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:43 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 22 09:19:43 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Consumed 2.967s CPU time.
Nov 22 09:19:43 compute-0 systemd-machined[215941]: Machine qemu-68-instance-0000003b terminated.
Nov 22 09:19:44 compute-0 nova_compute[253661]: 2025-11-22 09:19:44.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:44 compute-0 nova_compute[253661]: 2025-11-22 09:19:44.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:44 compute-0 nova_compute[253661]: 2025-11-22 09:19:44.029 253665 DEBUG nova.compute.manager [None req-400b75c6-a44d-48fd-ae34-077406dbd5e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : Exiting Master process...
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : Exiting Master process...
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [ALERT]    (317432) : Current worker (317434) exited with code 143 (Terminated)
Nov 22 09:19:44 compute-0 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : All workers exited. Exiting... (0)
Nov 22 09:19:44 compute-0 systemd[1]: libpod-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope: Deactivated successfully.
Nov 22 09:19:44 compute-0 podman[317470]: 2025-11-22 09:19:44.060791737 +0000 UTC m=+0.059206149 container died 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5334c5af23dba16e3d478964a076fed62cfc26886b13bd29e456185b8274415e-merged.mount: Deactivated successfully.
Nov 22 09:19:44 compute-0 podman[317470]: 2025-11-22 09:19:44.136280221 +0000 UTC m=+0.134694633 container cleanup 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:44 compute-0 systemd[1]: libpod-conmon-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope: Deactivated successfully.
Nov 22 09:19:44 compute-0 podman[317505]: 2025-11-22 09:19:44.211045859 +0000 UTC m=+0.043281273 container remove 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2356e2c7-087f-4314-82ce-75082ede1ab9]: (4, ('Sat Nov 22 09:19:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8)\n48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8\nSat Nov 22 09:19:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8)\n48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e99416d-57ca-4e1d-8cb5-808e552483ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.220 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:44 compute-0 nova_compute[253661]: 2025-11-22 09:19:44.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:44 compute-0 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 09:19:44 compute-0 nova_compute[253661]: 2025-11-22 09:19:44.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a349315-8a09-4cd6-957d-887b91b2948b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a39320f6-e476-46e8-b760-8fb4cd70c487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38cae474-abcc-49dc-8317-aba9b659602e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ee148e-7851-48cd-990c-cb16ab7e4521]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603026, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317524, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.296 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.296 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[af8d26eb-977d-4121-9657-4a4d8ce2cdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:44 compute-0 ceph-mon[75021]: pgmap v1666: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 5.4 MiB/s wr, 374 op/s
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.360 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.361 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.362 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.363 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.363 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 WARNING nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state suspended and task_state None.
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.365 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.365 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.366 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:45 compute-0 nova_compute[253661]: 2025-11-22 09:19:45.366 253665 WARNING nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state suspended and task_state None.
Nov 22 09:19:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 380 op/s
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.222 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.223 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.226 253665 INFO nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Terminating instance
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.227 253665 DEBUG nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.231 253665 INFO nova.virt.libvirt.driver [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance destroyed successfully.
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.232 253665 DEBUG nova.objects.instance [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.244 253665 DEBUG nova.virt.libvirt.vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:44Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.245 253665 DEBUG nova.network.os_vif_util [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.246 253665 DEBUG nova.network.os_vif_util [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.246 253665 DEBUG os_vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.248 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdae1e68a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.253 253665 INFO os_vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69')
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.795 253665 INFO nova.virt.libvirt.driver [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deleting instance files /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_del
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.797 253665 INFO nova.virt.libvirt.driver [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deletion of /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_del complete
Nov 22 09:19:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:46 compute-0 ceph-mon[75021]: pgmap v1667: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 380 op/s
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.864 253665 INFO nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG oslo.service.loopingcall [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:19:46 compute-0 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG nova.network.neutron [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:19:47 compute-0 ovn_controller[152872]: 2025-11-22T09:19:47Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:19:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 206 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.3 MiB/s wr, 377 op/s
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.531 253665 DEBUG nova.network.neutron [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.550 253665 INFO nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 0.68 seconds to deallocate network for instance.
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.599 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.599 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.639 253665 DEBUG nova.compute.manager [req-5fec05f0-e2a9-47b9-aafd-4362c88f32d7 req-d0367400-344d-4c63-9be2-8020703e400f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-deleted-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:47 compute-0 nova_compute[253661]: 2025-11-22 09:19:47.701 253665 DEBUG oslo_concurrency.processutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271679337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.197 253665 DEBUG oslo_concurrency.processutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.202 253665 DEBUG nova.compute.provider_tree [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.219 253665 DEBUG nova.scheduler.client.report [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.243 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.271 253665 INFO nova.scheduler.client.report [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 5a489088-2d5b-49b6-8280-e2d86fa4fbf3
Nov 22 09:19:48 compute-0 nova_compute[253661]: 2025-11-22 09:19:48.325 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:48 compute-0 ceph-mon[75021]: pgmap v1668: 305 pgs: 305 active+clean; 206 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.3 MiB/s wr, 377 op/s
Nov 22 09:19:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3271679337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 352 op/s
Nov 22 09:19:49 compute-0 nova_compute[253661]: 2025-11-22 09:19:49.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:50 compute-0 nova_compute[253661]: 2025-11-22 09:19:50.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:50 compute-0 ceph-mon[75021]: pgmap v1669: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 352 op/s
Nov 22 09:19:50 compute-0 nova_compute[253661]: 2025-11-22 09:19:50.931 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803175.9306934, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:50 compute-0 nova_compute[253661]: 2025-11-22 09:19:50.931 253665 INFO nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Stopped (Lifecycle Event)
Nov 22 09:19:50 compute-0 nova_compute[253661]: 2025-11-22 09:19:50.956 253665 DEBUG nova.compute.manager [None req-cb6e4164-755c-4fe4-ab0a-44a34e2cb1e4 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:51 compute-0 nova_compute[253661]: 2025-11-22 09:19:51.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 35 KiB/s wr, 212 op/s
Nov 22 09:19:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:19:52
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta']
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.456 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.456 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.470 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.529 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.529 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.539 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.539 253665 INFO nova.compute.claims [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.660 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:19:52 compute-0 ceph-mon[75021]: pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 35 KiB/s wr, 212 op/s
Nov 22 09:19:52 compute-0 nova_compute[253661]: 2025-11-22 09:19:52.895 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:19:52 compute-0 ovn_controller[152872]: 2025-11-22T09:19:52Z|00563|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:19:52 compute-0 ovn_controller[152872]: 2025-11-22T09:19:52Z|00564|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.021 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:19:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724326761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.140 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.146 253665 DEBUG nova.compute.provider_tree [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.161 253665 DEBUG nova.scheduler.client.report [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.197 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.199 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.244 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.246 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.271 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.290 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.377 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.379 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.379 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating image(s)
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.401 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 188 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.9 MiB/s wr, 242 op/s
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.427 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.450 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.453 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.489 253665 DEBUG nova.policy [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c92c50f03874da0a9bd18e66157708e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd04e58a339948e6b219ee858ce56620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.527 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.528 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.530 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.530 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.554 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.557 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:53.732 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:53.734 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:19:53 compute-0 nova_compute[253661]: 2025-11-22 09:19:53.733 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2724326761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.149 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.202 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] resizing rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.227 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Successfully created port: 95d3860d-a485-46b6-8875-35bb61ae7e9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.289 253665 DEBUG nova.objects.instance [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'migration_context' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.300 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Ensure instance console log exists: /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:54 compute-0 nova_compute[253661]: 2025-11-22 09:19:54.302 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:19:54 compute-0 ceph-mon[75021]: pgmap v1671: 305 pgs: 305 active+clean; 188 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.9 MiB/s wr, 242 op/s
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:19:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.215 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Successfully updated port: 95d3860d-a485-46b6-8875-35bb61ae7e9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquired lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:19:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 196 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.481 253665 DEBUG nova.compute.manager [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-changed-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.481 253665 DEBUG nova.compute.manager [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Refreshing instance network info cache due to event network-changed-95d3860d-a485-46b6-8875-35bb61ae7e9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.482 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.521 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:19:55 compute-0 kernel: tap2a28300e-6b (unregistering): left promiscuous mode
Nov 22 09:19:55 compute-0 NetworkManager[48920]: <info>  [1763803195.6907] device (tap2a28300e-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:19:55 compute-0 ovn_controller[152872]: 2025-11-22T09:19:55Z|00565|binding|INFO|Releasing lport 2a28300e-6b6b-4513-831f-e30f3694fbcd from this chassis (sb_readonly=0)
Nov 22 09:19:55 compute-0 ovn_controller[152872]: 2025-11-22T09:19:55Z|00566|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd down in Southbound
Nov 22 09:19:55 compute-0 ovn_controller[152872]: 2025-11-22T09:19:55Z|00567|binding|INFO|Removing iface tap2a28300e-6b ovn-installed in OVS
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.706 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.707 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis
Nov 22 09:19:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.709 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:19:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[259809f2-fdec-422b-ad18-23a85e962bba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.717 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 22 09:19:55 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 14.399s CPU time.
Nov 22 09:19:55 compute-0 systemd-machined[215941]: Machine qemu-67-instance-0000003a terminated.
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : haproxy version is 2.8.14-c23fe91
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : path to executable is /usr/sbin/haproxy
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [WARNING]  (317238) : Exiting Master process...
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [WARNING]  (317238) : Exiting Master process...
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [ALERT]    (317238) : Current worker (317241) exited with code 143 (Terminated)
Nov 22 09:19:55 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [WARNING]  (317238) : All workers exited. Exiting... (0)
Nov 22 09:19:55 compute-0 systemd[1]: libpod-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope: Deactivated successfully.
Nov 22 09:19:55 compute-0 podman[317779]: 2025-11-22 09:19:55.863668238 +0000 UTC m=+0.074671825 container died 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 DEBUG nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 WARNING nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuilding.
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.941 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance shutdown successfully after 13 seconds.
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.949 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.
Nov 22 09:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03-userdata-shm.mount: Deactivated successfully.
Nov 22 09:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1241bc236f075e40e2ab81b6d24373f4bf0bae4ad4e81373a81f5c0654f230-merged.mount: Deactivated successfully.
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.960 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.961 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:42Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.961 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.962 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.962 253665 DEBUG os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.964 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a28300e-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.967 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:55 compute-0 nova_compute[253661]: 2025-11-22 09:19:55.969 253665 INFO os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')
Nov 22 09:19:56 compute-0 podman[317779]: 2025-11-22 09:19:56.113093448 +0000 UTC m=+0.324097035 container cleanup 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:19:56 compute-0 systemd[1]: libpod-conmon-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope: Deactivated successfully.
Nov 22 09:19:56 compute-0 podman[317836]: 2025-11-22 09:19:56.363725568 +0000 UTC m=+0.227809887 container remove 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.370 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6caee83b-abdd-454e-ba61-2ab0f7d2db54]: (4, ('Sat Nov 22 09:19:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03)\n13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03\nSat Nov 22 09:19:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03)\n13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4cfe2c-8edb-49cc-bd8f-0e121962e71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.374 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:56 compute-0 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2231c-7b66-43c2-aef6-d4fc4f9b4ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.398 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[227cbc3e-e43e-48f6-aa36-8eeb08e7b214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc10925-e45f-4ecf-92d4-a498ad283f17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.417 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Releasing lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.418 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance network_info: |[{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.418 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.419 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Refreshing network info cache for port 95d3860d-a485-46b6-8875-35bb61ae7e9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.421 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start _get_guest_xml network_info=[{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[673d13bb-910b-4557-b9a8-3e699588a615]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602904, 'reachable_time': 32759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317851, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.428 253665 WARNING nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.428 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:19:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.429 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c4e080-f1ab-4b04-b744-5ea42e5feb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.434 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.435 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.440 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.444 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.444 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.447 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:56 compute-0 podman[317849]: 2025-11-22 09:19:56.514451041 +0000 UTC m=+0.090380788 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 09:19:56 compute-0 podman[317857]: 2025-11-22 09:19:56.515990748 +0000 UTC m=+0.060570103 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:19:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:19:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377928327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.878 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.905 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:56 compute-0 nova_compute[253661]: 2025-11-22 09:19:56.909 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:57 compute-0 ceph-mon[75021]: pgmap v1672: 305 pgs: 305 active+clean; 196 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 09:19:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2377928327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.293 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting instance files /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.294 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deletion of /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del complete
Nov 22 09:19:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392090696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 221 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.418 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.419 253665 DEBUG nova.virt.libvirt.vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-ServerAddresse
sTestJSON-1270862588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:53Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.419 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.420 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.421 253665 DEBUG nova.objects.instance [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.424 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.424 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating image(s)
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.442 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.461 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.480 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.484 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.522 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <uuid>87fbaa81-3eae-4dac-9613-700a29ab0daf</uuid>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <name>instance-0000003c</name>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerAddressesTestJSON-server-1641561282</nova:name>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:56</nova:creationTime>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:user uuid="2c92c50f03874da0a9bd18e66157708e">tempest-ServerAddressesTestJSON-1270862588-project-member</nova:user>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:project uuid="dd04e58a339948e6b219ee858ce56620">tempest-ServerAddressesTestJSON-1270862588</nova:project>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <nova:port uuid="95d3860d-a485-46b6-8875-35bb61ae7e9d">
Nov 22 09:19:57 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="serial">87fbaa81-3eae-4dac-9613-700a29ab0daf</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="uuid">87fbaa81-3eae-4dac-9613-700a29ab0daf</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/87fbaa81-3eae-4dac-9613-700a29ab0daf_disk">
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config">
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:57 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:22:69:07"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <target dev="tap95d3860d-a4"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/console.log" append="off"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:57 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:57 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:57 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:57 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:57 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.523 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Preparing to wait for external event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.523 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG nova.virt.libvirt.vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-ServerAddressesTestJSON-1270862588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:53Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.525 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.525 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.526 253665 DEBUG os_vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.530 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95d3860d-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.530 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95d3860d-a4, col_values=(('external_ids', {'iface-id': '95d3860d-a485-46b6-8875-35bb61ae7e9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:69:07', 'vm-uuid': '87fbaa81-3eae-4dac-9613-700a29ab0daf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:57 compute-0 NetworkManager[48920]: <info>  [1763803197.5323] manager: (tap95d3860d-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.537 253665 INFO os_vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4')
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.561 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.562 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.562 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.563 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.586 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.589 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.643 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.644 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.644 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No VIF found with MAC fa:16:3e:22:69:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.645 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Using config drive
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.667 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.884 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.932 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.994 253665 DEBUG nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.994 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.995 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.995 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.996 253665 DEBUG nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:19:57 compute-0 nova_compute[253661]: 2025-11-22 09:19:57.996 253665 WARNING nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuild_spawning.
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.030 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ensure instance console log exists: /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.032 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.033 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start _get_guest_xml network_info=[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.037 253665 WARNING nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.042 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.042 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.048 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.048 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.063 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1392090696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701328283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.537 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.561 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.565 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.932 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating config drive at /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config
Nov 22 09:19:58 compute-0 nova_compute[253661]: 2025-11-22 09:19:58.937 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5jc7pgw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:19:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.009 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.011 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:57Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.011 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.012 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <uuid>6fc1c0e4-3bd1-44c5-a722-9a30961fc545</uuid>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <name>instance-0000003a</name>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1392829761</nova:name>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:19:58</nova:creationTime>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <nova:port uuid="2a28300e-6b6b-4513-831f-e30f3694fbcd">
Nov 22 09:19:59 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <system>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="serial">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="uuid">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </system>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <os>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </os>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <features>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </features>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk">
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config">
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </source>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:19:59 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:7c:af:ec"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <target dev="tap2a28300e-6b"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log" append="off"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <video>
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </video>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:19:59 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:19:59 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:19:59 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:19:59 compute-0 nova_compute[253661]: </domain>
Nov 22 09:19:59 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Preparing to wait for external event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:57Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.017 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.017 253665 DEBUG os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.020 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.020 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a28300e-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.021 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a28300e-6b, col_values=(('external_ids', {'iface-id': '2a28300e-6b6b-4513-831f-e30f3694fbcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:af:ec', 'vm-uuid': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.0234] manager: (tap2a28300e-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.030 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803184.0254579, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.030 253665 INFO nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Stopped (Lifecycle Event)
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.032 253665 INFO os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.049 253665 DEBUG nova.compute.manager [None req-8103c6d8-e501-45f1-b770-5d2a87a95db4 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.067 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.067 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.068 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:7c:af:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.068 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Using config drive
Nov 22 09:19:59 compute-0 ceph-mon[75021]: pgmap v1673: 305 pgs: 305 active+clean; 221 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Nov 22 09:19:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2701328283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/268445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.093 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.099 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5jc7pgw" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.118 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.121 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.157 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updated VIF entry in instance network info cache for port 95d3860d-a485-46b6-8875-35bb61ae7e9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.157 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.160 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.202 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.203 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'keypairs' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.293 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.294 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deleting local config drive /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config because it was imported into RBD.
Nov 22 09:19:59 compute-0 kernel: tap95d3860d-a4: entered promiscuous mode
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.3542] manager: (tap95d3860d-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Nov 22 09:19:59 compute-0 ovn_controller[152872]: 2025-11-22T09:19:59Z|00568|binding|INFO|Claiming lport 95d3860d-a485-46b6-8875-35bb61ae7e9d for this chassis.
Nov 22 09:19:59 compute-0 ovn_controller[152872]: 2025-11-22T09:19:59Z|00569|binding|INFO|95d3860d-a485-46b6-8875-35bb61ae7e9d: Claiming fa:16:3e:22:69:07 10.100.0.3
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 ovn_controller[152872]: 2025-11-22T09:19:59Z|00570|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d ovn-installed in OVS
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 systemd-udevd[318274]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:59 compute-0 systemd-machined[215941]: New machine qemu-69-instance-0000003c.
Nov 22 09:19:59 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.4035] device (tap95d3860d-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.4052] device (tap95d3860d-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:19:59 compute-0 ovn_controller[152872]: 2025-11-22T09:19:59Z|00571|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d up in Southbound
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.409 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:69:07 10.100.0.3'], port_security=['fa:16:3e:22:69:07 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '87fbaa81-3eae-4dac-9613-700a29ab0daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd04e58a339948e6b219ee858ce56620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfe1d73e-9743-4e1d-a71d-46f13de720cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=923cf162-d21a-49b5-93fe-032ba9e780ee, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=95d3860d-a485-46b6-8875-35bb61ae7e9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.411 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 95d3860d-a485-46b6-8875-35bb61ae7e9d in datapath 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 bound to our chassis
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.412 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4
Nov 22 09:19:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 4.2 MiB/s wr, 181 op/s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19e2a395-d1b9-4e5b-ac46-7c0df3660879]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.429 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap209ca7a4-91 in ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.432 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap209ca7a4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.432 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[160ed07d-0fc6-4b0e-b8db-20b95df4b467]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30b419db-48b9-4486-a42d-60684833c876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.457 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[045553f5-f3b9-4a92-88c0-992a2a971562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 sudo[318273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:19:59 compute-0 sudo[318273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:19:59 compute-0 sudo[318273]: pam_unix(sudo:session): session closed for user root
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.486 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad4bcbb-c4eb-4794-aa82-270b4846eafa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0483e96c-9497-40f5-9ec2-cc92cb26c30a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 systemd-udevd[318285]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.5263] manager: (tap209ca7a4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26dcfd40-4241-4135-b725-356f395ac3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 sudo[318311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:19:59 compute-0 sudo[318311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:19:59 compute-0 sudo[318311]: pam_unix(sudo:session): session closed for user root
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.565 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[19de77e7-6fc8-408f-8d84-6fff01b41ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.570 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a24cdf69-1990-4133-af59-2c7f78a36294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.6022] device (tap209ca7a4-90): carrier: link connected
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[148eec53-e3af-47ce-a305-81ec79d439de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 sudo[318357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:19:59 compute-0 sudo[318357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:19:59 compute-0 sudo[318357]: pam_unix(sudo:session): session closed for user root
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.628 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56bbf093-efad-41c8-a8ff-db90e231e72a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap209ca7a4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:6a:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605034, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318384, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f96877a2-9c0c-45b3-89df-8d42a970a269]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:6aa6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605034, 'tstamp': 605034}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318389, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.670 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94053dcd-aced-456e-9dd9-2433c824f84c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap209ca7a4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:6a:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605034, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318405, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 sudo[318387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:19:59 compute-0 sudo[318387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.711 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7de7a075-b61f-4b27-83a7-7351275742a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[972a5437-846b-49f0-b4b3-4fa1e5734872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.787 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap209ca7a4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.788 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.788 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap209ca7a4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 kernel: tap209ca7a4-90: entered promiscuous mode
Nov 22 09:19:59 compute-0 NetworkManager[48920]: <info>  [1763803199.7913] manager: (tap209ca7a4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.798 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap209ca7a4-90, col_values=(('external_ids', {'iface-id': '38e24d56-d793-475f-b75d-30c0f92d5222'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 ovn_controller[152872]: 2025-11-22T09:19:59Z|00572|binding|INFO|Releasing lport 38e24d56-d793-475f-b75d-30c0f92d5222 from this chassis (sb_readonly=0)
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 nova_compute[253661]: 2025-11-22 09:19:59.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.823 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.824 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed19ed-42d4-41c1-ac1e-d279b5d5c15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.825 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:19:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.825 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'env', 'PROCESS_TAG=haproxy-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:20:00 compute-0 sudo[318387]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:00 compute-0 podman[318476]: 2025-11-22 09:20:00.214733651 +0000 UTC m=+0.051925923 container create 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:20:00 compute-0 systemd[1]: Started libpod-conmon-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope.
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6f0eadfb-20f3-4eb9-adfd-ab612bfe70f9 does not exist
Nov 22 09:20:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c3f2e8a0-3273-464d-9e0d-cdbbfa5463b8 does not exist
Nov 22 09:20:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ce7ee678-ba3a-49c2-8a00-1c0fdcefd216 does not exist
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:20:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:20:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:20:00 compute-0 podman[318476]: 2025-11-22 09:20:00.188938344 +0000 UTC m=+0.026130636 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:20:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1960c85f8448b6f749a91f07ab39044bd5c3bd65ae0acbccc14832bb6734a46a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:00 compute-0 podman[318476]: 2025-11-22 09:20:00.321434993 +0000 UTC m=+0.158627295 container init 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:20:00 compute-0 podman[318476]: 2025-11-22 09:20:00.328846673 +0000 UTC m=+0.166038945 container start 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:20:00 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : New worker (318566) forked
Nov 22 09:20:00 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : Loading success.
Nov 22 09:20:00 compute-0 sudo[318537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:00 compute-0 sudo[318537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:00 compute-0 sudo[318537]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.376 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803200.374917, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.377 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Started (Lifecycle Event)
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.418 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803200.375687, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.418 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Paused (Lifecycle Event)
Nov 22 09:20:00 compute-0 sudo[318576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:20:00 compute-0 sudo[318576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:00 compute-0 sudo[318576]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.439 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.443 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:00 compute-0 nova_compute[253661]: 2025-11-22 09:20:00.463 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:00 compute-0 sudo[318602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:00 compute-0 sudo[318602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:00 compute-0 sudo[318602]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:00 compute-0 sudo[318639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:20:00 compute-0 sudo[318639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:00 compute-0 podman[318601]: 2025-11-22 09:20:00.566346985 +0000 UTC m=+0.103158258 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.903842955 +0000 UTC m=+0.041296045 container create 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:20:00 compute-0 systemd[1]: Started libpod-conmon-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope.
Nov 22 09:20:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.88593353 +0000 UTC m=+0.023386650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.982099047 +0000 UTC m=+0.119552157 container init 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.989129057 +0000 UTC m=+0.126582147 container start 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:20:00 compute-0 dreamy_bassi[318734]: 167 167
Nov 22 09:20:00 compute-0 systemd[1]: libpod-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope: Deactivated successfully.
Nov 22 09:20:00 compute-0 conmon[318734]: conmon 802f32f1310d8a72bdee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope/container/memory.events
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.995142553 +0000 UTC m=+0.132595673 container attach 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:20:00 compute-0 podman[318718]: 2025-11-22 09:20:00.996031985 +0000 UTC m=+0.133485075 container died 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-005f70835b4ba88eccf3c8f8e3e1896613a23767cd414d9d885050bf447f221d-merged.mount: Deactivated successfully.
Nov 22 09:20:01 compute-0 podman[318718]: 2025-11-22 09:20:01.036123789 +0000 UTC m=+0.173576869 container remove 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:20:01 compute-0 systemd[1]: libpod-conmon-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope: Deactivated successfully.
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.075 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating config drive at /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.080 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptrbdcnc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:01 compute-0 ceph-mon[75021]: pgmap v1674: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 4.2 MiB/s wr, 181 op/s
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:20:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.127 253665 DEBUG nova.compute.manager [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.128 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.129 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.129 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.130 253665 DEBUG nova.compute.manager [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Processing event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.131 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.135 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803201.1348798, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.135 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Resumed (Lifecycle Event)
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.137 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.142 253665 INFO nova.virt.libvirt.driver [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance spawned successfully.
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.142 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.171 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.176 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.178 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.178 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.182 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.221 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.222 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptrbdcnc0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:01 compute-0 podman[318760]: 2025-11-22 09:20:01.223622635 +0000 UTC m=+0.044435071 container create 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.249 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.256 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:01 compute-0 systemd[1]: Started libpod-conmon-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope.
Nov 22 09:20:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.291 253665 INFO nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 7.91 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.292 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:01 compute-0 podman[318760]: 2025-11-22 09:20:01.298541276 +0000 UTC m=+0.119353732 container init 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:20:01 compute-0 podman[318760]: 2025-11-22 09:20:01.203794103 +0000 UTC m=+0.024606559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:01 compute-0 podman[318760]: 2025-11-22 09:20:01.304786397 +0000 UTC m=+0.125598833 container start 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:20:01 compute-0 podman[318760]: 2025-11-22 09:20:01.308423056 +0000 UTC m=+0.129235512 container attach 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.355 253665 INFO nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 8.84 seconds to build instance.
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.377 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 4.2 MiB/s wr, 135 op/s
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.434 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.435 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting local config drive /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config because it was imported into RBD.
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.4949] manager: (tap2a28300e-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 22 09:20:01 compute-0 kernel: tap2a28300e-6b: entered promiscuous mode
Nov 22 09:20:01 compute-0 systemd-udevd[318351]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:01 compute-0 ovn_controller[152872]: 2025-11-22T09:20:01Z|00573|binding|INFO|Claiming lport 2a28300e-6b6b-4513-831f-e30f3694fbcd for this chassis.
Nov 22 09:20:01 compute-0 ovn_controller[152872]: 2025-11-22T09:20:01Z|00574|binding|INFO|2a28300e-6b6b-4513-831f-e30f3694fbcd: Claiming fa:16:3e:7c:af:ec 10.100.0.12
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.504 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.506 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.508 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.5128] device (tap2a28300e-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.5135] device (tap2a28300e-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66652709-b2c1-494f-b3cc-0cac3e11b2af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.522 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:01 compute-0 ovn_controller[152872]: 2025-11-22T09:20:01Z|00575|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd ovn-installed in OVS
Nov 22 09:20:01 compute-0 ovn_controller[152872]: 2025-11-22T09:20:01Z|00576|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd up in Southbound
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.525 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11e227c7-c80b-42d6-b989-9c1cbc7237d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0f4e5d-538f-4425-9be0-07568f0ee52e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.539 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a0757bf6-6bf0-490a-97da-e924fe379bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 systemd-machined[215941]: New machine qemu-70-instance-0000003a.
Nov 22 09:20:01 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003a.
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e7d646-c576-4487-9d9f-fed3799e1f61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.598 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1933da21-b61f-4da7-9b1e-76d1058ef229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.6058] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[09a75e49-2a8e-4c97-94c8-35ad61320add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef6a28a-659b-450c-bc88-1a488d984293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[46b97107-9c32-414d-9c3c-235afae47354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.6810] device (tap01d1bce2-e0): carrier: link connected
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.687 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b0b48e-94ba-4c59-b5df-c1df7c3277cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41d3fb08-6924-4c32-90ab-8fc7c62322c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605241, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318849, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[581d6ca7-e95c-4a1c-9253-75c6eb18cc3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605241, 'tstamp': 605241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318850, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d358b67a-d4ce-4e9c-a8d6-05f7dcbe8093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605241, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318851, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.802 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b205bff9-899b-4534-bd68-2368d7714a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[445b30fe-14e1-4837-8da2-59b21f204578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.869 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.869 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.870 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:01 compute-0 NetworkManager[48920]: <info>  [1763803201.8722] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Nov 22 09:20:01 compute-0 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.877 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:20:01 compute-0 ovn_controller[152872]: 2025-11-22T09:20:01Z|00577|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.899 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec6bab8-18e7-4524-8ad8-e5410cfc2132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.901 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:20:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.902 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:20:01 compute-0 nova_compute[253661]: 2025-11-22 09:20:01.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:02 compute-0 podman[318894]: 2025-11-22 09:20:02.342638915 +0000 UTC m=+0.056267638 container create a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:20:02 compute-0 systemd[1]: Started libpod-conmon-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope.
Nov 22 09:20:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba224bf444db80dea210a4884aaac2259dd20704596313fa2367533ab6ec9709/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:02 compute-0 podman[318894]: 2025-11-22 09:20:02.312667187 +0000 UTC m=+0.026295930 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:20:02 compute-0 podman[318894]: 2025-11-22 09:20:02.427788194 +0000 UTC m=+0.141416937 container init a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:20:02 compute-0 podman[318894]: 2025-11-22 09:20:02.436622458 +0000 UTC m=+0.150251181 container start a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:20:02 compute-0 sad_chaum[318794]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:20:02 compute-0 sad_chaum[318794]: --> relative data size: 1.0
Nov 22 09:20:02 compute-0 sad_chaum[318794]: --> All data devices are unavailable
Nov 22 09:20:02 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : New worker (318926) forked
Nov 22 09:20:02 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : Loading success.
Nov 22 09:20:02 compute-0 systemd[1]: libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Deactivated successfully.
Nov 22 09:20:02 compute-0 systemd[1]: libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Consumed 1.089s CPU time.
Nov 22 09:20:02 compute-0 conmon[318794]: conmon 761de1e76a5a2236d39b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope/container/memory.events
Nov 22 09:20:02 compute-0 podman[318760]: 2025-11-22 09:20:02.50128392 +0000 UTC m=+1.322096356 container died 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289-merged.mount: Deactivated successfully.
Nov 22 09:20:02 compute-0 podman[318760]: 2025-11-22 09:20:02.578971387 +0000 UTC m=+1.399783853 container remove 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:20:02 compute-0 systemd[1]: libpod-conmon-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Deactivated successfully.
Nov 22 09:20:02 compute-0 sudo[318639]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014442085653122354 of space, bias 1.0, pg target 0.4332625695936706 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:20:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:20:02 compute-0 sudo[318948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:02 compute-0 sudo[318948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:02 compute-0 sudo[318948]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:02 compute-0 sudo[318973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:20:02 compute-0 sudo[318973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:02 compute-0 sudo[318973]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:02 compute-0 sudo[318998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:02 compute-0 sudo[318998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:02 compute-0 sudo[318998]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:02 compute-0 sudo[319038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:20:02 compute-0 sudo[319038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:02 compute-0 nova_compute[253661]: 2025-11-22 09:20:02.997 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:20:02 compute-0 nova_compute[253661]: 2025-11-22 09:20:02.998 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803202.9971855, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:02 compute-0 nova_compute[253661]: 2025-11-22 09:20:02.998 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Started (Lifecycle Event)
Nov 22 09:20:03 compute-0 ceph-mon[75021]: pgmap v1675: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 4.2 MiB/s wr, 135 op/s
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.158 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.165 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803202.9973536, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.165 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Paused (Lifecycle Event)
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.182 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.187 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:03 compute-0 nova_compute[253661]: 2025-11-22 09:20:03.206 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.261468131 +0000 UTC m=+0.043377535 container create 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:20:03 compute-0 systemd[1]: Started libpod-conmon-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope.
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.24168389 +0000 UTC m=+0.023593314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.375955323 +0000 UTC m=+0.157864747 container init 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.386498199 +0000 UTC m=+0.168407603 container start 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.390547768 +0000 UTC m=+0.172457192 container attach 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:20:03 compute-0 flamboyant_ride[319146]: 167 167
Nov 22 09:20:03 compute-0 systemd[1]: libpod-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope: Deactivated successfully.
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.39476194 +0000 UTC m=+0.176671344 container died 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:20:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 215 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 5.7 MiB/s wr, 170 op/s
Nov 22 09:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf1e966886c63e8fa4ccb361aed3bed2944012bd61408b396d5f844d58c9c5a4-merged.mount: Deactivated successfully.
Nov 22 09:20:03 compute-0 podman[319131]: 2025-11-22 09:20:03.443085994 +0000 UTC m=+0.224995388 container remove 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:20:03 compute-0 systemd[1]: libpod-conmon-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope: Deactivated successfully.
Nov 22 09:20:03 compute-0 podman[319170]: 2025-11-22 09:20:03.668672965 +0000 UTC m=+0.064224381 container create 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:20:03 compute-0 systemd[1]: Started libpod-conmon-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope.
Nov 22 09:20:03 compute-0 podman[319170]: 2025-11-22 09:20:03.632296501 +0000 UTC m=+0.027847947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:03 compute-0 podman[319170]: 2025-11-22 09:20:03.770796767 +0000 UTC m=+0.166348193 container init 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:20:03 compute-0 podman[319170]: 2025-11-22 09:20:03.779429917 +0000 UTC m=+0.174981333 container start 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:20:03 compute-0 podman[319170]: 2025-11-22 09:20:03.783400033 +0000 UTC m=+0.178951469 container attach 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.288 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.289 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 WARNING nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received unexpected event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with vm_state active and task_state None.
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Processing event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 WARNING nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuild_spawning.
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.294 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.300 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803204.2999496, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.300 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Resumed (Lifecycle Event)
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.302 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.307 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance spawned successfully.
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.307 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.320 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.325 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.328 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.329 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.329 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.353 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.386 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.438 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.440 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.441 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.501 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.619 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.620 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.623 253665 INFO nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Terminating instance
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.624 253665 DEBUG nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]: {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     "0": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "devices": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "/dev/loop3"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             ],
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_name": "ceph_lv0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_size": "21470642176",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "name": "ceph_lv0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "tags": {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_name": "ceph",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.crush_device_class": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.encrypted": "0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_id": "0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.vdo": "0"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             },
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "vg_name": "ceph_vg0"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         }
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     ],
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     "1": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "devices": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "/dev/loop4"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             ],
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_name": "ceph_lv1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_size": "21470642176",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "name": "ceph_lv1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "tags": {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_name": "ceph",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.crush_device_class": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.encrypted": "0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_id": "1",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.vdo": "0"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             },
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "vg_name": "ceph_vg1"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         }
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     ],
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     "2": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "devices": [
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "/dev/loop5"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             ],
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_name": "ceph_lv2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_size": "21470642176",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "name": "ceph_lv2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "tags": {
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.cluster_name": "ceph",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.crush_device_class": "",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.encrypted": "0",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osd_id": "2",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:                 "ceph.vdo": "0"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             },
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "type": "block",
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:             "vg_name": "ceph_vg2"
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:         }
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]:     ]
Nov 22 09:20:04 compute-0 serene_ardinghelli[319187]: }
Nov 22 09:20:04 compute-0 systemd[1]: libpod-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope: Deactivated successfully.
Nov 22 09:20:04 compute-0 podman[319170]: 2025-11-22 09:20:04.667372233 +0000 UTC m=+1.062923669 container died 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:20:04 compute-0 kernel: tap95d3860d-a4 (unregistering): left promiscuous mode
Nov 22 09:20:04 compute-0 NetworkManager[48920]: <info>  [1763803204.6757] device (tap95d3860d-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:20:04 compute-0 ovn_controller[152872]: 2025-11-22T09:20:04Z|00578|binding|INFO|Releasing lport 95d3860d-a485-46b6-8875-35bb61ae7e9d from this chassis (sb_readonly=0)
Nov 22 09:20:04 compute-0 ovn_controller[152872]: 2025-11-22T09:20:04Z|00579|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d down in Southbound
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 ovn_controller[152872]: 2025-11-22T09:20:04Z|00580|binding|INFO|Removing iface tap95d3860d-a4 ovn-installed in OVS
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d-merged.mount: Deactivated successfully.
Nov 22 09:20:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.708 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:69:07 10.100.0.3'], port_security=['fa:16:3e:22:69:07 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '87fbaa81-3eae-4dac-9613-700a29ab0daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd04e58a339948e6b219ee858ce56620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dfe1d73e-9743-4e1d-a71d-46f13de720cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=923cf162-d21a-49b5-93fe-032ba9e780ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=95d3860d-a485-46b6-8875-35bb61ae7e9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 95d3860d-a485-46b6-8875-35bb61ae7e9d in datapath 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 unbound from our chassis
Nov 22 09:20:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.711 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:20:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[016e3db7-c6da-4ed8-b428-93edbc965bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 namespace which is not needed anymore
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 podman[319170]: 2025-11-22 09:20:04.734536494 +0000 UTC m=+1.130087910 container remove 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:20:04 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 22 09:20:04 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 4.376s CPU time.
Nov 22 09:20:04 compute-0 systemd[1]: libpod-conmon-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope: Deactivated successfully.
Nov 22 09:20:04 compute-0 systemd-machined[215941]: Machine qemu-69-instance-0000003c terminated.
Nov 22 09:20:04 compute-0 sudo[319038]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:04 compute-0 sudo[319226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:04 compute-0 sudo[319226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:04 compute-0 sudo[319226]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.875 253665 INFO nova.virt.libvirt.driver [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance destroyed successfully.
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.877 253665 DEBUG nova.objects.instance [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'resources' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:04 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : haproxy version is 2.8.14-c23fe91
Nov 22 09:20:04 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : path to executable is /usr/sbin/haproxy
Nov 22 09:20:04 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [WARNING]  (318563) : Exiting Master process...
Nov 22 09:20:04 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [ALERT]    (318563) : Current worker (318566) exited with code 143 (Terminated)
Nov 22 09:20:04 compute-0 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [WARNING]  (318563) : All workers exited. Exiting... (0)
Nov 22 09:20:04 compute-0 systemd[1]: libpod-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope: Deactivated successfully.
Nov 22 09:20:04 compute-0 podman[319230]: 2025-11-22 09:20:04.898795155 +0000 UTC m=+0.069794367 container died 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.902 253665 DEBUG nova.virt.libvirt.vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-ServerAddressesTestJSON-1270862588-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:01Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.904 253665 DEBUG nova.network.os_vif_util [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.905 253665 DEBUG nova.network.os_vif_util [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.905 253665 DEBUG os_vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.908 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95d3860d-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:04 compute-0 nova_compute[253661]: 2025-11-22 09:20:04.915 253665 INFO os_vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4')
Nov 22 09:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6-userdata-shm.mount: Deactivated successfully.
Nov 22 09:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1960c85f8448b6f749a91f07ab39044bd5c3bd65ae0acbccc14832bb6734a46a-merged.mount: Deactivated successfully.
Nov 22 09:20:04 compute-0 podman[319230]: 2025-11-22 09:20:04.94875923 +0000 UTC m=+0.119758442 container cleanup 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:20:04 compute-0 sudo[319275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:20:04 compute-0 sudo[319275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:04 compute-0 sudo[319275]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:04 compute-0 systemd[1]: libpod-conmon-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope: Deactivated successfully.
Nov 22 09:20:05 compute-0 sudo[319330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:05 compute-0 sudo[319330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:05 compute-0 sudo[319330]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:05 compute-0 podman[319331]: 2025-11-22 09:20:05.034938833 +0000 UTC m=+0.056943954 container remove 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.043 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[456e367e-44a6-4e84-b5a2-56cad1c72095]: (4, ('Sat Nov 22 09:20:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 (0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6)\n0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6\nSat Nov 22 09:20:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 (0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6)\n0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef592f4-2719-4276-be05-40b0564276c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.050 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap209ca7a4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:05 compute-0 kernel: tap209ca7a4-90: left promiscuous mode
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.078 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df5b743b-101f-4a28-bfe4-09d5b6ed4f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98e57bd1-d572-4bd0-bd8c-b8de5d5be87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.094 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[214b57d7-4bcf-497d-9524-94ea06159622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 sudo[319368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:20:05 compute-0 sudo[319368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:05 compute-0 ceph-mon[75021]: pgmap v1676: 305 pgs: 305 active+clean; 215 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 5.7 MiB/s wr, 170 op/s
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[005cd859-8409-4dd1-9464-cdbeee77b21c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605025, 'reachable_time': 35782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319394, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.122 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:20:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.122 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b660061a-e336-4787-98b2-3663ec756128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d209ca7a4\x2d9a63\x2d439a\x2d9ff5\x2d4a96e0ff3cf4.mount: Deactivated successfully.
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.219 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 833 KiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.435 253665 INFO nova.virt.libvirt.driver [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deleting instance files /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf_del
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.436 253665 INFO nova.virt.libvirt.driver [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deletion of /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf_del complete
Nov 22 09:20:05 compute-0 podman[319436]: 2025-11-22 09:20:05.536967892 +0000 UTC m=+0.046880401 container create 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:20:05 compute-0 systemd[1]: Started libpod-conmon-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope.
Nov 22 09:20:05 compute-0 podman[319436]: 2025-11-22 09:20:05.516368381 +0000 UTC m=+0.026280910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.633 253665 INFO nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.635 253665 DEBUG oslo.service.loopingcall [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.636 253665 DEBUG nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:20:05 compute-0 nova_compute[253661]: 2025-11-22 09:20:05.636 253665 DEBUG nova.network.neutron [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:20:05 compute-0 podman[319436]: 2025-11-22 09:20:05.657269225 +0000 UTC m=+0.167181754 container init 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:20:05 compute-0 podman[319436]: 2025-11-22 09:20:05.667391561 +0000 UTC m=+0.177304070 container start 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:20:05 compute-0 podman[319436]: 2025-11-22 09:20:05.671812838 +0000 UTC m=+0.181725347 container attach 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:20:05 compute-0 fervent_brown[319453]: 167 167
Nov 22 09:20:05 compute-0 systemd[1]: libpod-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope: Deactivated successfully.
Nov 22 09:20:05 compute-0 podman[319458]: 2025-11-22 09:20:05.750720295 +0000 UTC m=+0.035436752 container died 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eeb6349cadd98f62d260a4797f4916ff424595bab0f0815a12f81ddfe912665-merged.mount: Deactivated successfully.
Nov 22 09:20:05 compute-0 podman[319458]: 2025-11-22 09:20:05.805205259 +0000 UTC m=+0.089921716 container remove 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:20:05 compute-0 systemd[1]: libpod-conmon-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope: Deactivated successfully.
Nov 22 09:20:06 compute-0 podman[319481]: 2025-11-22 09:20:06.024209771 +0000 UTC m=+0.046339477 container create fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:20:06 compute-0 systemd[1]: Started libpod-conmon-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope.
Nov 22 09:20:06 compute-0 podman[319481]: 2025-11-22 09:20:06.003840976 +0000 UTC m=+0.025970702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:20:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:06 compute-0 podman[319481]: 2025-11-22 09:20:06.13860802 +0000 UTC m=+0.160737736 container init fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:20:06 compute-0 podman[319481]: 2025-11-22 09:20:06.145875958 +0000 UTC m=+0.168005664 container start fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:20:06 compute-0 podman[319481]: 2025-11-22 09:20:06.149529066 +0000 UTC m=+0.171658792 container attach fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.383 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.386 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:06 compute-0 nova_compute[253661]: 2025-11-22 09:20:06.390 253665 WARNING nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received unexpected event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with vm_state active and task_state deleting.
Nov 22 09:20:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:07 compute-0 ceph-mon[75021]: pgmap v1677: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 833 KiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 22 09:20:07 compute-0 naughty_hoover[319498]: {
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_id": 1,
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "type": "bluestore"
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     },
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_id": 0,
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "type": "bluestore"
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     },
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_id": 2,
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:         "type": "bluestore"
Nov 22 09:20:07 compute-0 naughty_hoover[319498]:     }
Nov 22 09:20:07 compute-0 naughty_hoover[319498]: }
Nov 22 09:20:07 compute-0 systemd[1]: libpod-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Deactivated successfully.
Nov 22 09:20:07 compute-0 podman[319481]: 2025-11-22 09:20:07.234009347 +0000 UTC m=+1.256139053 container died fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:20:07 compute-0 systemd[1]: libpod-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Consumed 1.093s CPU time.
Nov 22 09:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0-merged.mount: Deactivated successfully.
Nov 22 09:20:07 compute-0 podman[319481]: 2025-11-22 09:20:07.303152277 +0000 UTC m=+1.325281993 container remove fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:20:07 compute-0 systemd[1]: libpod-conmon-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Deactivated successfully.
Nov 22 09:20:07 compute-0 sudo[319368]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:20:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:20:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b73037fc-4de3-43d6-9fa3-6fcaf166a918 does not exist
Nov 22 09:20:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e1ded6ec-620f-4829-8f42-857f92634a08 does not exist
Nov 22 09:20:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 194 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 233 op/s
Nov 22 09:20:07 compute-0 sudo[319543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:20:07 compute-0 sudo[319543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:07 compute-0 sudo[319543]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.539 253665 DEBUG nova.network.neutron [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:07 compute-0 sudo[319568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:20:07 compute-0 sudo[319568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:20:07 compute-0 sudo[319568]: pam_unix(sudo:session): session closed for user root
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.560 253665 INFO nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 1.92 seconds to deallocate network for instance.
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.612 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.613 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.719 253665 DEBUG oslo_concurrency.processutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.783 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.785 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.789 253665 INFO nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Terminating instance
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.791 253665 DEBUG nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.828 253665 DEBUG nova.compute.manager [req-6e0fcf3b-914f-4248-ab5e-88deabc37c38 req-9232b74f-542e-42d4-b2ff-209af984e721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-deleted-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:07 compute-0 kernel: tap2a28300e-6b (unregistering): left promiscuous mode
Nov 22 09:20:07 compute-0 NetworkManager[48920]: <info>  [1763803207.8435] device (tap2a28300e-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:20:07 compute-0 ovn_controller[152872]: 2025-11-22T09:20:07Z|00581|binding|INFO|Releasing lport 2a28300e-6b6b-4513-831f-e30f3694fbcd from this chassis (sb_readonly=0)
Nov 22 09:20:07 compute-0 ovn_controller[152872]: 2025-11-22T09:20:07Z|00582|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd down in Southbound
Nov 22 09:20:07 compute-0 ovn_controller[152872]: 2025-11-22T09:20:07Z|00583|binding|INFO|Removing iface tap2a28300e-6b ovn-installed in OVS
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.868 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.870 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis
Nov 22 09:20:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.872 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:20:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38f08ced-d268-4935-9afc-5c63c41e665d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.873 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore
Nov 22 09:20:07 compute-0 nova_compute[253661]: 2025-11-22 09:20:07.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:07 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 22 09:20:07 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003a.scope: Consumed 4.890s CPU time.
Nov 22 09:20:07 compute-0 systemd-machined[215941]: Machine qemu-70-instance-0000003a terminated.
Nov 22 09:20:08 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : haproxy version is 2.8.14-c23fe91
Nov 22 09:20:08 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : path to executable is /usr/sbin/haproxy
Nov 22 09:20:08 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [WARNING]  (318923) : Exiting Master process...
Nov 22 09:20:08 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [ALERT]    (318923) : Current worker (318926) exited with code 143 (Terminated)
Nov 22 09:20:08 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [WARNING]  (318923) : All workers exited. Exiting... (0)
Nov 22 09:20:08 compute-0 systemd[1]: libpod-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope: Deactivated successfully.
Nov 22 09:20:08 compute-0 podman[319638]: 2025-11-22 09:20:08.040517074 +0000 UTC m=+0.059663771 container died a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.043 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.044 253665 DEBUG nova.objects.instance [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.065 253665 DEBUG nova.virt.libvirt.vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:04Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.066 253665 DEBUG nova.network.os_vif_util [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.066 253665 DEBUG nova.network.os_vif_util [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.067 253665 DEBUG os_vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.069 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.069 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a28300e-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.076 253665 INFO os_vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')
Nov 22 09:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb-userdata-shm.mount: Deactivated successfully.
Nov 22 09:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba224bf444db80dea210a4884aaac2259dd20704596313fa2367533ab6ec9709-merged.mount: Deactivated successfully.
Nov 22 09:20:08 compute-0 podman[319638]: 2025-11-22 09:20:08.15680745 +0000 UTC m=+0.175954137 container cleanup a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:20:08 compute-0 systemd[1]: libpod-conmon-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope: Deactivated successfully.
Nov 22 09:20:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094984022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.220 253665 DEBUG oslo_concurrency.processutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.227 253665 DEBUG nova.compute.provider_tree [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:08 compute-0 podman[319697]: 2025-11-22 09:20:08.236862015 +0000 UTC m=+0.051608385 container remove a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.241 253665 DEBUG nova.scheduler.client.report [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.242 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1cd920-1cac-4a81-bc80-749366ca8c39]: (4, ('Sat Nov 22 09:20:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb)\na2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb\nSat Nov 22 09:20:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb)\na2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.244 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2556c1f5-d63a-42fc-97a5-8f83f92601bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.244 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.267 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d397bf-195d-4156-bd9d-f47644a3d4a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.282 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb95d16-79a6-4250-b22e-40bddd7cb222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.283 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c49a91e7-410d-42df-9e06-9cfcb0d9c854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.296 253665 INFO nova.scheduler.client.report [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Deleted allocations for instance 87fbaa81-3eae-4dac-9613-700a29ab0daf
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.302 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1037863c-9872-421f-aee7-65dae1411fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605233, 'reachable_time': 27853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319714, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.306 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:20:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.307 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6cc8d8-76aa-4307-9232-cf4e35f16de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.373 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:20:08 compute-0 ceph-mon[75021]: pgmap v1678: 305 pgs: 305 active+clean; 194 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 233 op/s
Nov 22 09:20:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1094984022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.524 253665 INFO nova.virt.libvirt.driver [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting instance files /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.525 253665 INFO nova.virt.libvirt.driver [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deletion of /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del complete
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.582 253665 INFO nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.583 253665 DEBUG oslo.service.loopingcall [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.583 253665 DEBUG nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.584 253665 DEBUG nova.network.neutron [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.781 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.782 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.782 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:08 compute-0 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 WARNING nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state deleting.
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.157 253665 DEBUG nova.network.neutron [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.173 253665 INFO nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 0.59 seconds to deallocate network for instance.
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.222 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.223 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.297 253665 DEBUG oslo_concurrency.processutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 266 op/s
Nov 22 09:20:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428577081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.808 253665 DEBUG oslo_concurrency.processutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.817 253665 DEBUG nova.compute.provider_tree [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.846 253665 DEBUG nova.scheduler.client.report [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.866 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.892 253665 INFO nova.scheduler.client.report [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Deleted allocations for instance 6fc1c0e4-3bd1-44c5-a722-9a30961fc545
Nov 22 09:20:09 compute-0 nova_compute[253661]: 2025-11-22 09:20:09.961 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.381 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.382 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.396 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.458 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.459 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.466 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.466 253665 INFO nova.compute.claims [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:10 compute-0 ceph-mon[75021]: pgmap v1679: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 266 op/s
Nov 22 09:20:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1428577081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:10 compute-0 nova_compute[253661]: 2025-11-22 09:20:10.580 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1705343458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.052 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.060 253665 DEBUG nova.compute.manager [req-1a80dde3-06fd-4eb8-8889-48945e166964 req-e0b7b123-65b7-4c85-9687-433a7f5d862f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-deleted-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.066 253665 DEBUG nova.compute.provider_tree [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.088 253665 DEBUG nova.scheduler.client.report [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.110 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.111 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.164 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.164 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.181 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.196 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.286 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.288 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.288 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating image(s)
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.316 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.345 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.371 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.376 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 203 op/s
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.419 253665 DEBUG nova.policy [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5352d2182544454aab03bd4a74160247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.467 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.467 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.468 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.468 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1705343458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.509 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.515 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.689 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.690 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.695 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.696 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.712 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.716 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.804 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.805 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.813 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.814 253665 INFO nova.compute.claims [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.818 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:11 compute-0 nova_compute[253661]: 2025-11-22 09:20:11.958 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.080 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.157 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:12 compute-0 ovn_controller[152872]: 2025-11-22T09:20:12Z|00584|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.252 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Successfully created port: 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:20:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:20:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:20:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.347 253665 DEBUG nova.objects.instance [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.360 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.360 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Ensure instance console log exists: /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294044666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.475 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.484 253665 DEBUG nova.compute.provider_tree [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.505 253665 DEBUG nova.scheduler.client.report [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:12 compute-0 ceph-mon[75021]: pgmap v1680: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 203 op/s
Nov 22 09:20:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:20:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:20:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/294044666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.526 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.527 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.529 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.539 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.539 253665 INFO nova.compute.claims [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.596 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.597 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.615 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.630 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.709 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.768 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.773 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.774 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating image(s)
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.805 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.835 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.864 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.870 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.922 253665 DEBUG nova.policy [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3457ea0f757244e8a49e3e224d581e8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8120d22470024f2197238c7c48c5ba0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.964 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.965 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.966 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.966 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:12 compute-0 nova_compute[253661]: 2025-11-22 09:20:12.995 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.000 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737140166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.222 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Successfully updated port: 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.237 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.238 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquired lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.238 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.252 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.259 253665 DEBUG nova.compute.provider_tree [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.275 253665 DEBUG nova.scheduler.client.report [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.298 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.300 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.356 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.357 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.377 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.395 253665 DEBUG nova.compute.manager [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-changed-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.396 253665 DEBUG nova.compute.manager [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Refreshing instance network info cache due to event network-changed-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.396 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.397 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 140 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 223 op/s
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.486 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.487 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.488 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating image(s)
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.523 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.585 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3737140166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.752 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.759 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.841 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.842 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.843 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.843 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.870 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:13 compute-0 nova_compute[253661]: 2025-11-22 09:20:13.874 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:14 compute-0 nova_compute[253661]: 2025-11-22 09:20:14.242 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:14 compute-0 nova_compute[253661]: 2025-11-22 09:20:14.255 253665 DEBUG nova.policy [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559fd7e00a0a468797efe4955caffc4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:14 compute-0 nova_compute[253661]: 2025-11-22 09:20:14.306 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:14 compute-0 nova_compute[253661]: 2025-11-22 09:20:14.380 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] resizing rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:14 compute-0 nova_compute[253661]: 2025-11-22 09:20:14.867 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Successfully created port: 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:15 compute-0 ceph-mon[75021]: pgmap v1681: 305 pgs: 305 active+clean; 140 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 223 op/s
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 154 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 195 op/s
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.456 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.518 253665 DEBUG nova.objects.instance [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'migration_context' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.526 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] resizing rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.581 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.582 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Ensure instance console log exists: /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.582 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.583 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.583 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.697 253665 DEBUG nova.objects.instance [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.720 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Ensure instance console log exists: /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:15 compute-0 nova_compute[253661]: 2025-11-22 09:20:15.722 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.082 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Successfully created port: 43cec84a-e6cc-4492-8869-806f677f3026 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.130 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updating instance_info_cache with network_info: [{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.154 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Releasing lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.154 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance network_info: |[{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.155 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.155 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Refreshing network info cache for port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.159 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start _get_guest_xml network_info=[{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.166 253665 WARNING nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.174 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.175 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.178 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.179 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.179 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.185 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.404 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Successfully updated port: 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquired lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.567 253665 DEBUG nova.compute.manager [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-changed-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.567 253665 DEBUG nova.compute.manager [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Refreshing instance network info cache due to event network-changed-0c106a61-dc2d-42f2-9c81-9c68f52ce123. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.568 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865413845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.658 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.681 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.686 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.725 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:16 compute-0 nova_compute[253661]: 2025-11-22 09:20:16.995 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Successfully updated port: 43cec84a-e6cc-4492-8869-806f677f3026 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799981972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:17 compute-0 ceph-mon[75021]: pgmap v1682: 305 pgs: 305 active+clean; 154 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 195 op/s
Nov 22 09:20:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1865413845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/799981972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.157 253665 DEBUG nova.compute.manager [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-changed-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.157 253665 DEBUG nova.compute.manager [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Refreshing instance network info cache due to event network-changed-43cec84a-e6cc-4492-8869-806f677f3026. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.158 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.160 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.161 253665 DEBUG nova.virt.libvirt.vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskCon
figTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:11Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.162 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.163 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.164 253665 DEBUG nova.objects.instance [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.177 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <uuid>aadc298c-a1ba-41ca-9015-0a4d08420487</uuid>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <name>instance-0000003d</name>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-257400181</nova:name>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:20:16</nova:creationTime>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <nova:port uuid="27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643">
Nov 22 09:20:17 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <system>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="serial">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="uuid">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </system>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <os>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </os>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <features>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </features>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk">
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config">
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:84:ea:a6"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <target dev="tap27b3ab6b-d0"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log" append="off"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <video>
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </video>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:20:17 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:20:17 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:20:17 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:20:17 compute-0 nova_compute[253661]: </domain>
Nov 22 09:20:17 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.177 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Preparing to wait for external event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.179 253665 DEBUG nova.virt.libvirt.vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:11Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.179 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.180 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.180 253665 DEBUG os_vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.185 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27b3ab6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27b3ab6b-d0, col_values=(('external_ids', {'iface-id': '27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:ea:a6', 'vm-uuid': 'aadc298c-a1ba-41ca-9015-0a4d08420487'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:17 compute-0 NetworkManager[48920]: <info>  [1763803217.1890] manager: (tap27b3ab6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.198 253665 INFO os_vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0')
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.200 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:84:ea:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.255 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Using config drive
Nov 22 09:20:17 compute-0 nova_compute[253661]: 2025-11-22 09:20:17.288 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 204 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.262 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating config drive at /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.266 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvg01j3d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.301 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.326 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Releasing lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.327 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance network_info: |[{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.328 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.329 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Refreshing network info cache for port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.332 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start _get_guest_xml network_info=[{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.337 253665 WARNING nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.342 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.342 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.351 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.352 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.352 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.353 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.353 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.360 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.411 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvg01j3d" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.443 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.449 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.539 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updated VIF entry in instance network info cache for port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.540 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updating instance_info_cache with network_info: [{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.553 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.576 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.591 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance network_info: |[{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Refreshing network info cache for port 43cec84a-e6cc-4492-8869-806f677f3026 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.595 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start _get_guest_xml network_info=[{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.600 253665 WARNING nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.605 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.606 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.615 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.616 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.617 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.617 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.620 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.620 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.624 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.673 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.674 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deleting local config drive /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config because it was imported into RBD.
Nov 22 09:20:18 compute-0 kernel: tap27b3ab6b-d0: entered promiscuous mode
Nov 22 09:20:18 compute-0 NetworkManager[48920]: <info>  [1763803218.7645] manager: (tap27b3ab6b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 22 09:20:18 compute-0 ovn_controller[152872]: 2025-11-22T09:20:18Z|00585|binding|INFO|Claiming lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for this chassis.
Nov 22 09:20:18 compute-0 ovn_controller[152872]: 2025-11-22T09:20:18Z|00586|binding|INFO|27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643: Claiming fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.771 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:ea:a6 10.100.0.14'], port_security=['fa:16:3e:84:ea:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aadc298c-a1ba-41ca-9015-0a4d08420487', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.773 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.775 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:18 compute-0 systemd-udevd[320469]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9c13cc-9769-4310-9526-cbf1ab09a500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.789 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.792 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.792 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d830098d-688e-4a51-b872-5abbe10c54c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_controller[152872]: 2025-11-22T09:20:18Z|00587|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 ovn-installed in OVS
Nov 22 09:20:18 compute-0 ovn_controller[152872]: 2025-11-22T09:20:18Z|00588|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 up in Southbound
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c14a7d93-c618-48b2-8969-d010f33ecd7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:18 compute-0 NetworkManager[48920]: <info>  [1763803218.8087] device (tap27b3ab6b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:18 compute-0 NetworkManager[48920]: <info>  [1763803218.8099] device (tap27b3ab6b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.815 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d4390402-9e10-401c-88dc-5d8355c9b028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 systemd-machined[215941]: New machine qemu-71-instance-0000003d.
Nov 22 09:20:18 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003d.
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.835 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef21546-e0da-4f94-813c-b7082f098171]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.876 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e00962-f299-479f-af38-f6a9473d6688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 systemd-udevd[320484]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:18 compute-0 NetworkManager[48920]: <info>  [1763803218.8843] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Nov 22 09:20:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343977246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[545f21b3-1801-4914-95d2-0829d4d887da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.919 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.925 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[704c3e72-981e-47a6-92d8-d9b28d51dcfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.929 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5af68e5a-db9b-4a40-a569-9a3a344b36d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.945 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:18 compute-0 nova_compute[253661]: 2025-11-22 09:20:18.949 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:18 compute-0 NetworkManager[48920]: <info>  [1763803218.9522] device (tap01d1bce2-e0): carrier: link connected
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.959 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0388d8-dbce-4dc3-bdbd-768b3bd4c88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b71938e8-9bc5-4d4e-b6ec-51192f4e087f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606968, 'reachable_time': 27022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320534, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:18 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf167f47-4863-4a60-bccd-21ccae2416e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606968, 'tstamp': 606968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320535, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea52952-ef2b-4257-947f-d1804a49c44c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606968, 'reachable_time': 27022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320536, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a287022d-1e82-47cb-8380-7a7137e5cc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474094733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.146 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[287ed762-22b7-489d-a1a3-47d029eadf33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.148 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.149 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 09:20:19 compute-0 NetworkManager[48920]: <info>  [1763803219.1525] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.154 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 ovn_controller[152872]: 2025-11-22T09:20:19Z|00589|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.159 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:19 compute-0 ceph-mon[75021]: pgmap v1683: 305 pgs: 305 active+clean; 204 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Nov 22 09:20:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1343977246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1474094733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.175 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58324faf-38f7-412d-a139-69e92f80135d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.177 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:20:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.178 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.195 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.201 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.265 253665 DEBUG nova.compute.manager [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.265 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG nova.compute.manager [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Processing event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.375 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.3826275, aadc298c-a1ba-41ca-9015-0a4d08420487 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Started (Lifecycle Event)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.387 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.391 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance spawned successfully.
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.392 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.407 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.412 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.417 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.418 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.418 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.419 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.419 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.420 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 785 KiB/s rd, 5.3 MiB/s wr, 150 op/s
Nov 22 09:20:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702820486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.382842, aadc298c-a1ba-41ca-9015-0a4d08420487 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Paused (Lifecycle Event)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.466 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.468 253665 DEBUG nova.virt.libvirt.vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:12Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.468 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.469 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.470 253665 DEBUG nova.objects.instance [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.473 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.474 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updated VIF entry in instance network info cache for port 0c106a61-dc2d-42f2-9c81-9c68f52ce123. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.475 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.479 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.3863351, aadc298c-a1ba-41ca-9015-0a4d08420487 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.479 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Resumed (Lifecycle Event)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.493 253665 INFO nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 8.21 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.493 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <uuid>30c09d44-c691-4f03-a20d-2e86a0d0a762</uuid>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <name>instance-0000003e</name>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1373824792</nova:name>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:20:18</nova:creationTime>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:user uuid="3457ea0f757244e8a49e3e224d581e8a">tempest-InstanceActionsV221TestJSON-563581713-project-member</nova:user>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:project uuid="8120d22470024f2197238c7c48c5ba0e">tempest-InstanceActionsV221TestJSON-563581713</nova:project>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:port uuid="0c106a61-dc2d-42f2-9c81-9c68f52ce123">
Nov 22 09:20:19 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <system>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="serial">30c09d44-c691-4f03-a20d-2e86a0d0a762</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="uuid">30c09d44-c691-4f03-a20d-2e86a0d0a762</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </system>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <os>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </os>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <features>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </features>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/30c09d44-c691-4f03-a20d-2e86a0d0a762_disk">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:06:5d:4f"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="tap0c106a61-dc"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/console.log" append="off"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <video>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </video>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:20:19 compute-0 nova_compute[253661]: </domain>
Nov 22 09:20:19 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Preparing to wait for external event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG nova.virt.libvirt.vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:12Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG os_vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.503 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.503 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.504 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c106a61-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.505 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c106a61-dc, col_values=(('external_ids', {'iface-id': '0c106a61-dc2d-42f2-9c81-9c68f52ce123', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:5d:4f', 'vm-uuid': '30c09d44-c691-4f03-a20d-2e86a0d0a762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 NetworkManager[48920]: <info>  [1763803219.5079] manager: (tap0c106a61-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.516 253665 INFO os_vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc')
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.571 253665 INFO nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 9.13 seconds to build instance.
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No VIF found with MAC fa:16:3e:06:5d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.577 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Using config drive
Nov 22 09:20:19 compute-0 podman[320672]: 2025-11-22 09:20:19.583074788 +0000 UTC m=+0.058386240 container create 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.609 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.626 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:19 compute-0 systemd[1]: Started libpod-conmon-4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a.scope.
Nov 22 09:20:19 compute-0 podman[320672]: 2025-11-22 09:20:19.550123667 +0000 UTC m=+0.025435139 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:20:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165dbfb6280e5c3468bc4607011907ecf91b892441f109cec428a46150992cc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:19 compute-0 podman[320672]: 2025-11-22 09:20:19.676781825 +0000 UTC m=+0.152093297 container init 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:20:19 compute-0 podman[320672]: 2025-11-22 09:20:19.683421306 +0000 UTC m=+0.158732758 container start 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:20:19 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : New worker (320712) forked
Nov 22 09:20:19 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : Loading success.
Nov 22 09:20:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341290301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.746 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.747 253665 DEBUG nova.virt.libvirt.vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-tempest.common.compute-instance-783668956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:13Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.748 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.748 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.749 253665 DEBUG nova.objects.instance [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <uuid>d364f1c2-d606-448a-b3bd-00f1d5c1b858</uuid>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <name>instance-0000003f</name>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-783668956</nova:name>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:20:18</nova:creationTime>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <nova:port uuid="43cec84a-e6cc-4492-8869-806f677f3026">
Nov 22 09:20:19 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <system>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="serial">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="uuid">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </system>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <os>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </os>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <features>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </features>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:29:62:0d"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <target dev="tap43cec84a-e6"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log" append="off"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <video>
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </video>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:20:19 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:20:19 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:20:19 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:20:19 compute-0 nova_compute[253661]: </domain>
Nov 22 09:20:19 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Preparing to wait for external event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG nova.virt.libvirt.vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-tempest.common.compute-instance-783668956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:13Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.766 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG os_vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.768 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.771 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43cec84a-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.771 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43cec84a-e6, col_values=(('external_ids', {'iface-id': '43cec84a-e6cc-4492-8869-806f677f3026', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:62:0d', 'vm-uuid': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.773 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 NetworkManager[48920]: <info>  [1763803219.7741] manager: (tap43cec84a-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.781 253665 INFO os_vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6')
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.827 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.827 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.828 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No VIF found with MAC fa:16:3e:29:62:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.828 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Using config drive
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.849 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.873 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803204.8704858, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.873 253665 INFO nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Stopped (Lifecycle Event)
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.892 253665 DEBUG nova.compute.manager [None req-0bb5fb31-49e4-46d6-8e54-6f5cd95994ab - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.905 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updated VIF entry in instance network info cache for port 43cec84a-e6cc-4492-8869-806f677f3026. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.906 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.919 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.951 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating config drive at /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config
Nov 22 09:20:19 compute-0 nova_compute[253661]: 2025-11-22 09:20:19.956 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjut6r0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.099 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjut6r0o" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.133 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.139 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2702820486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/341290301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.195 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating config drive at /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.202 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd9jltl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.342 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.344 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deleting local config drive /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config because it was imported into RBD.
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.368 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd9jltl" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.403 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.409 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.4160] manager: (tap0c106a61-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 22 09:20:20 compute-0 systemd-udevd[320508]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:20 compute-0 kernel: tap0c106a61-dc: entered promiscuous mode
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00590|binding|INFO|Claiming lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 for this chassis.
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00591|binding|INFO|0c106a61-dc2d-42f2-9c81-9c68f52ce123: Claiming fa:16:3e:06:5d:4f 10.100.0.9
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.436 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:4f 10.100.0.9'], port_security=['fa:16:3e:06:5d:4f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30c09d44-c691-4f03-a20d-2e86a0d0a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8120d22470024f2197238c7c48c5ba0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab0be74d-f44a-43fd-be23-3e0ac42b6c84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e425ea4f-8728-48a7-950a-425a7d828903, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0c106a61-dc2d-42f2-9c81-9c68f52ce123) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.437 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 in datapath 6e6525e9-2fbb-452c-a3eb-9774aebbdb59 bound to our chassis
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.4466] device (tap0c106a61-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.4481] device (tap0c106a61-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:20 compute-0 systemd-machined[215941]: New machine qemu-72-instance-0000003e.
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00592|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 ovn-installed in OVS
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00593|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 up in Southbound
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.463 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68f77aef-21b0-4a29-b404-0fc93cef462a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.464 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e6525e9-21 in ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.466 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e6525e9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.466 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e61406-fa0e-4b28-b376-ac298c6cde3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.469 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae35bce6-8323-49f3-8d50-b55d81be8356]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003e.
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.489 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[de69dbbd-a245-41d8-b00c-37ce67ac1bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36fe66fc-bcc7-438d-b400-f650b4136235]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.547 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[333cc51f-caf2-4a5f-941d-bfc94c771efd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6eec84db-5d98-4465-a486-3fff72eba95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.5551] manager: (tap6e6525e9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.603 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44c06f0a-84e0-4e30-88a6-b6a0a1f0923e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[90e58b9a-0379-4a68-9343-5723223dbf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.633 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.634 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deleting local config drive /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config because it was imported into RBD.
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.6453] device (tap6e6525e9-20): carrier: link connected
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.653 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[733847c6-31f3-40a2-92dd-8f451fc9f9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[083d4210-d262-4868-b8f5-aa660f6f8e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e6525e9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607138, 'reachable_time': 18500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320861, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.6815] manager: (tap43cec84a-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 22 09:20:20 compute-0 kernel: tap43cec84a-e6: entered promiscuous mode
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00594|binding|INFO|Claiming lport 43cec84a-e6cc-4492-8869-806f677f3026 for this chassis.
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00595|binding|INFO|43cec84a-e6cc-4492-8869-806f677f3026: Claiming fa:16:3e:29:62:0d 10.100.0.10
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.6949] device (tap43cec84a-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.6958] device (tap43cec84a-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.694 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:62:0d 10.100.0.10'], port_security=['fa:16:3e:29:62:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c62f4ce9-5b21-4154-83ce-fbb32299e500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43cec84a-e6cc-4492-8869-806f677f3026) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.703 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0689e63e-70cc-4618-9dda-b2d06a1f4126]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:45d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607138, 'tstamp': 607138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320868, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00596|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 ovn-installed in OVS
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00597|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 up in Southbound
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 systemd-machined[215941]: New machine qemu-73-instance-0000003f.
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.733 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8292422e-b577-4d31-9e51-ce355ac71d8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e6525e9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607138, 'reachable_time': 18500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320871, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000003f.
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43a15588-df63-4513-b7b6-8ff25c1c1ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[024b3602-1399-499d-b36e-d5974b990652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.887 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e6525e9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.887 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.888 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e6525e9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:20 compute-0 NetworkManager[48920]: <info>  [1763803220.8913] manager: (tap6e6525e9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 22 09:20:20 compute-0 kernel: tap6e6525e9-20: entered promiscuous mode
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.897 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e6525e9-20, col_values=(('external_ids', {'iface-id': '7cdf9637-bfab-4d96-a02e-638779af4eb2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:20 compute-0 ovn_controller[152872]: 2025-11-22T09:20:20Z|00598|binding|INFO|Releasing lport 7cdf9637-bfab-4d96-a02e-638779af4eb2 from this chassis (sb_readonly=0)
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.919 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.920 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[415eb3c8-b448-41c8-b58e-712e75630672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.921 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:20:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.922 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'env', 'PROCESS_TAG=haproxy-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.973 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803220.9692056, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.974 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Started (Lifecycle Event)
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.978 253665 DEBUG nova.compute.manager [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:20 compute-0 nova_compute[253661]: 2025-11-22 09:20:20.980 253665 DEBUG nova.compute.manager [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Processing event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.001 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.008 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803220.9698238, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.008 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Paused (Lifecycle Event)
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.029 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.039 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.057 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:21 compute-0 ceph-mon[75021]: pgmap v1684: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 785 KiB/s rd, 5.3 MiB/s wr, 150 op/s
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.220 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.221 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2214315, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.221 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Started (Lifecycle Event)
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.224 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.228 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance spawned successfully.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.228 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.242 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.255 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.256 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.257 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.258 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.258 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.259 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2239003, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Paused (Lifecycle Event)
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.325 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2240658, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Resumed (Lifecycle Event)
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.347 253665 INFO nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 7.86 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.348 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.359 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 podman[320993]: 2025-11-22 09:20:21.363432928 +0000 UTC m=+0.061350212 container create 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.363 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:21 compute-0 systemd[1]: Started libpod-conmon-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope.
Nov 22 09:20:21 compute-0 podman[320993]: 2025-11-22 09:20:21.330569679 +0000 UTC m=+0.028486993 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:20:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.3 MiB/s wr, 93 op/s
Nov 22 09:20:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3bb9c65833804f518fd940b222e6ab1d587a39e3be2ff7560dea3f1c278cb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:21 compute-0 podman[320993]: 2025-11-22 09:20:21.469070135 +0000 UTC m=+0.166987419 container init 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:20:21 compute-0 podman[320993]: 2025-11-22 09:20:21.475646194 +0000 UTC m=+0.173563478 container start 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:20:21 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : New worker (321014) forked
Nov 22 09:20:21 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : Loading success.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.505 253665 INFO nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 9.73 seconds to build instance.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.523 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.572 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43cec84a-e6cc-4492-8869-806f677f3026 in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.594 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd9ec4-59c6-4171-803d-0844d7b56202]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.637 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f50ea833-b756-4fb9-ae15-68ffa8b172f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.640 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[583edd04-ed72-4cf0-a715-7e8b6fa38818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.683 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7833be-d100-48b3-a5de-8bdc1b4e06a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.685 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.685 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 WARNING nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state None.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Processing event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 WARNING nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received unexpected event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with vm_state building and task_state spawning.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.690 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.696 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.698 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.6981463, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.698 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Resumed (Lifecycle Event)
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.701 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a2340-478b-48f8-8d0c-573abcb852cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321028, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.703 253665 INFO nova.virt.libvirt.driver [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance spawned successfully.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.704 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.718 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.720 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0987a573-93b8-49b0-903a-aa8260a7fbe9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602336, 'tstamp': 602336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321029, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602340, 'tstamp': 602340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321029, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.723 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.728 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.736 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.737 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.737 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.738 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.738 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.739 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.772 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.812 253665 INFO nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 9.04 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.812 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.885 253665 INFO nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 10.12 seconds to build instance.
Nov 22 09:20:21 compute-0 nova_compute[253661]: 2025-11-22 09:20:21.902 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.506 253665 INFO nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Rebuilding instance
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.774 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'trusted_certs' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.788 253665 DEBUG nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.826 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_requests' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.836 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.846 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.855 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.865 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:20:22 compute-0 nova_compute[253661]: 2025-11-22 09:20:22.869 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.041 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803208.0392523, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.042 253665 INFO nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Stopped (Lifecycle Event)
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.066 253665 DEBUG nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.067 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.067 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.068 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.068 253665 DEBUG nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.069 253665 WARNING nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state None.
Nov 22 09:20:23 compute-0 nova_compute[253661]: 2025-11-22 09:20:23.084 253665 DEBUG nova.compute.manager [None req-c648a8ac-6f3a-48e1-9e80-c8f346071d08 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:23 compute-0 ceph-mon[75021]: pgmap v1685: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.3 MiB/s wr, 93 op/s
Nov 22 09:20:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 244 op/s
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.356 253665 INFO nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Rebuilding instance
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.562 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.576 253665 DEBUG nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.719 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_requests' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.722 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.722 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.724 253665 INFO nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Terminating instance
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.725 253665 DEBUG nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.738 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.751 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.762 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.773 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:20:24 compute-0 kernel: tap0c106a61-dc (unregistering): left promiscuous mode
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 NetworkManager[48920]: <info>  [1763803224.7825] device (tap0c106a61-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 ovn_controller[152872]: 2025-11-22T09:20:24Z|00599|binding|INFO|Releasing lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 from this chassis (sb_readonly=0)
Nov 22 09:20:24 compute-0 ovn_controller[152872]: 2025-11-22T09:20:24Z|00600|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 down in Southbound
Nov 22 09:20:24 compute-0 ovn_controller[152872]: 2025-11-22T09:20:24Z|00601|binding|INFO|Removing iface tap0c106a61-dc ovn-installed in OVS
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.795 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.839 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:4f 10.100.0.9'], port_security=['fa:16:3e:06:5d:4f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30c09d44-c691-4f03-a20d-2e86a0d0a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8120d22470024f2197238c7c48c5ba0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab0be74d-f44a-43fd-be23-3e0ac42b6c84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e425ea4f-8728-48a7-950a-425a7d828903, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0c106a61-dc2d-42f2-9c81-9c68f52ce123) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.840 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 in datapath 6e6525e9-2fbb-452c-a3eb-9774aebbdb59 unbound from our chassis
Nov 22 09:20:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.842 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e6525e9-2fbb-452c-a3eb-9774aebbdb59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:20:24 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 22 09:20:24 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Consumed 3.552s CPU time.
Nov 22 09:20:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11900ac9-baf6-4240-ae37-9128fc190599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:24 compute-0 systemd-machined[215941]: Machine qemu-72-instance-0000003e terminated.
Nov 22 09:20:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.847 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 namespace which is not needed anymore
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.960 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.971 253665 INFO nova.virt.libvirt.driver [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance destroyed successfully.
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.972 253665 DEBUG nova.objects.instance [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'resources' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.986 253665 DEBUG nova.virt.libvirt.vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:21Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.987 253665 DEBUG nova.network.os_vif_util [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.988 253665 DEBUG nova.network.os_vif_util [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.988 253665 DEBUG os_vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.991 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c106a61-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:24 compute-0 nova_compute[253661]: 2025-11-22 09:20:24.997 253665 INFO os_vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc')
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : haproxy version is 2.8.14-c23fe91
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : path to executable is /usr/sbin/haproxy
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : Exiting Master process...
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : Exiting Master process...
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [ALERT]    (321012) : Current worker (321014) exited with code 143 (Terminated)
Nov 22 09:20:25 compute-0 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : All workers exited. Exiting... (0)
Nov 22 09:20:25 compute-0 systemd[1]: libpod-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope: Deactivated successfully.
Nov 22 09:20:25 compute-0 podman[321053]: 2025-11-22 09:20:25.023663845 +0000 UTC m=+0.067849129 container died 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685-userdata-shm.mount: Deactivated successfully.
Nov 22 09:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3bb9c65833804f518fd940b222e6ab1d587a39e3be2ff7560dea3f1c278cb5-merged.mount: Deactivated successfully.
Nov 22 09:20:25 compute-0 podman[321053]: 2025-11-22 09:20:25.082581247 +0000 UTC m=+0.126766531 container cleanup 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.086 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.086 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:20:25 compute-0 systemd[1]: libpod-conmon-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope: Deactivated successfully.
Nov 22 09:20:25 compute-0 podman[321108]: 2025-11-22 09:20:25.180185218 +0000 UTC m=+0.063343660 container remove 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e8e866-6653-4b2e-add8-8c3a1cbaffd4]: (4, ('Sat Nov 22 09:20:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 (48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685)\n48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685\nSat Nov 22 09:20:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 (48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685)\n48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.188 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87dd580a-d45c-46df-8dae-a2566f4fa2be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e6525e9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:25 compute-0 kernel: tap6e6525e9-20: left promiscuous mode
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[179c65af-a170-4edd-a674-887132829d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 ceph-mon[75021]: pgmap v1686: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 244 op/s
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.230 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[318a5911-53a6-4bd8-a7ac-68f21f007f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad12a490-cfaf-45e5-aeff-a1cbf18d8bb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.256 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b573331-e09a-4b30-a9e8-6b2f98430168]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607128, 'reachable_time': 28724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321122, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e6525e9\x2d2fbb\x2d452c\x2da3eb\x2d9774aebbdb59.mount: Deactivated successfully.
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.263 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:20:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.263 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df3502ff-6651-4216-bdea-fe559e22032b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.277 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 225 op/s
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.577 253665 INFO nova.virt.libvirt.driver [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deleting instance files /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762_del
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.577 253665 INFO nova.virt.libvirt.driver [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deletion of /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762_del complete
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.649 253665 INFO nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG oslo.service.loopingcall [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:20:25 compute-0 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG nova.network.neutron [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:20:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:27 compute-0 ceph-mon[75021]: pgmap v1687: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 225 op/s
Nov 22 09:20:27 compute-0 podman[321125]: 2025-11-22 09:20:27.371381941 +0000 UTC m=+0.063573036 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:20:27 compute-0 podman[321124]: 2025-11-22 09:20:27.386359405 +0000 UTC m=+0.083562222 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:20:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 249 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Nov 22 09:20:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.963 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.965 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613503102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.821 253665 DEBUG nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.822 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.822 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.823 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.823 253665 DEBUG nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.824 253665 WARNING nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received unexpected event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with vm_state active and task_state deleting.
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.827 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.930 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.932 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.946 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.947 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.956 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:28 compute-0 nova_compute[253661]: 2025-11-22 09:20:28.956 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.151 253665 DEBUG nova.network.neutron [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.175 253665 INFO nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 3.53 seconds to deallocate network for instance.
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.228 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.229 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:29 compute-0 ceph-mon[75021]: pgmap v1688: 305 pgs: 305 active+clean; 249 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Nov 22 09:20:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3613503102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.243 253665 DEBUG nova.compute.manager [req-224a03e3-be1c-438c-9259-548742e90232 req-ca3db9ef-3e96-4395-af0a-b9533395bff1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-deleted-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.248 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.886558532714844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.319 253665 DEBUG oslo_concurrency.processutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 275 op/s
Nov 22 09:20:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352036944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.776 253665 DEBUG oslo_concurrency.processutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.789 253665 DEBUG nova.compute.provider_tree [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.817 253665 DEBUG nova.scheduler.client.report [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.851 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.857 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.887 253665 INFO nova.scheduler.client.report [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Deleted allocations for instance 30c09d44-c691-4f03-a20d-2e86a0d0a762
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.974 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance aadc298c-a1ba-41ca-9015-0a4d08420487 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d364f1c2-d606-448a-b3bd-00f1d5c1b858 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.986 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:29 compute-0 nova_compute[253661]: 2025-11-22 09:20:29.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.070 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/352036944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550048957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.536 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.542 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.557 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.656 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:20:30 compute-0 nova_compute[253661]: 2025-11-22 09:20:30.657 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:31 compute-0 ceph-mon[75021]: pgmap v1689: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 275 op/s
Nov 22 09:20:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2550048957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:31 compute-0 podman[321226]: 2025-11-22 09:20:31.406538979 +0000 UTC m=+0.102329878 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:20:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 247 op/s
Nov 22 09:20:31 compute-0 nova_compute[253661]: 2025-11-22 09:20:31.656 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:31 compute-0 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:31 compute-0 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:20:31 compute-0 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:20:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:32 compute-0 ceph-mon[75021]: pgmap v1690: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 247 op/s
Nov 22 09:20:32 compute-0 nova_compute[253661]: 2025-11-22 09:20:32.956 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.238 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.239 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.252 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.313 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.313 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.323 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.324 253665 INFO nova.compute.claims [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 640 KiB/s wr, 264 op/s
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.489 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:33 compute-0 ovn_controller[152872]: 2025-11-22T09:20:33Z|00602|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:20:33 compute-0 ovn_controller[152872]: 2025-11-22T09:20:33Z|00603|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818487124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.978 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.983 253665 DEBUG nova.compute.provider_tree [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:33 compute-0 nova_compute[253661]: 2025-11-22 09:20:33.995 253665 DEBUG nova.scheduler.client.report [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.011 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.012 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.053 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.054 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.072 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.086 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.180 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.182 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.183 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Creating image(s)
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.206 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.232 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.254 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.260 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.296 253665 DEBUG nova.policy [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fc7bde5e89f466d88e469ac1f35a435', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '933c51626a49465db409069a1b3eb7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.332 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.333 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.333 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.334 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.367 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.372 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:34 compute-0 ceph-mon[75021]: pgmap v1691: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 640 KiB/s wr, 264 op/s
Nov 22 09:20:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/818487124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.863 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.990 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Successfully created port: 1c553ce7-b95a-447b-9fed-01b378014028 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:34 compute-0 nova_compute[253661]: 2025-11-22 09:20:34.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 603 KiB/s wr, 112 op/s
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.700 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Successfully updated port: 1c553ce7-b95a-447b-9fed-01b378014028 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.702 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.702 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquired lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.720 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.788 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.789 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.797 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.797 253665 INFO nova.compute.claims [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.944 253665 DEBUG nova.compute.manager [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-changed-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.945 253665 DEBUG nova.compute.manager [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Refreshing instance network info cache due to event network-changed-1c553ce7-b95a-447b-9fed-01b378014028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.945 253665 DEBUG oslo_concurrency.lockutils [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.955 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:35 compute-0 nova_compute[253661]: 2025-11-22 09:20:35.995 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.153 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.221 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] resizing rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156729813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.440 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.446 253665 DEBUG nova.compute.provider_tree [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.460 253665 DEBUG nova.scheduler.client.report [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.481 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.482 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.524 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.524 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.541 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.560 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.635 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.637 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.638 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating image(s)
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.665 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.696 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.731 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.737 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:36 compute-0 ovn_controller[152872]: 2025-11-22T09:20:36Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:62:0d 10.100.0.10
Nov 22 09:20:36 compute-0 ovn_controller[152872]: 2025-11-22T09:20:36Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:62:0d 10.100.0.10
Nov 22 09:20:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.825 253665 DEBUG nova.objects.instance [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'migration_context' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.832 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.834 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.835 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.835 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.862 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.868 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:36 compute-0 ceph-mon[75021]: pgmap v1692: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 603 KiB/s wr, 112 op/s
Nov 22 09:20:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2156729813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.911 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.912 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Ensure instance console log exists: /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.913 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.913 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:36 compute-0 nova_compute[253661]: 2025-11-22 09:20:36.914 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.070 253665 DEBUG nova.policy [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fc7bde5e89f466d88e469ac1f35a435', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '933c51626a49465db409069a1b3eb7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:37 compute-0 ovn_controller[152872]: 2025-11-22T09:20:37Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 09:20:37 compute-0 ovn_controller[152872]: 2025-11-22T09:20:37Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.323 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Updating instance_info_cache with network_info: [{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.350 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Releasing lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.352 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance network_info: |[{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.353 253665 DEBUG oslo_concurrency.lockutils [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.354 253665 DEBUG nova.network.neutron [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Refreshing network info cache for port 1c553ce7-b95a-447b-9fed-01b378014028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.356 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start _get_guest_xml network_info=[{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.363 253665 WARNING nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.372 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.372 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.388 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.389 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.393 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.393 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.396 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 241 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 151 op/s
Nov 22 09:20:37 compute-0 nova_compute[253661]: 2025-11-22 09:20:37.520 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] resizing rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 328 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.8 MiB/s wr, 177 op/s
Nov 22 09:20:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012472494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.684 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Successfully created port: e043dc2b-6062-4cda-bf32-37ab692618c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.693 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:39 compute-0 ceph-mon[75021]: pgmap v1693: 305 pgs: 305 active+clean; 241 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 151 op/s
Nov 22 09:20:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4012472494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.749 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.755 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.967 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803224.966233, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.967 253665 INFO nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Stopped (Lifecycle Event)
Nov 22 09:20:39 compute-0 nova_compute[253661]: 2025-11-22 09:20:39.989 253665 DEBUG nova.compute.manager [None req-d8c063cf-0ce7-4c9c-913e-cbaecc4d76d9 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.102 253665 DEBUG nova.objects.instance [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'migration_context' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.112 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.112 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Ensure instance console log exists: /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.113 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.113 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.114 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519355073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.299 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.301 253665 DEBUG nova.virt.libvirt.vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1168981248',display_name='tempest-ServerRescueNegativeTestJSON-server-1168981248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1168981248',id=64,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-4kcyxfno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='te
mpest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:34Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=094a1e4e-c6c0-4994-907c-aae7c2cdbe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.301 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.302 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.303 253665 DEBUG nova.objects.instance [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.317 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <uuid>094a1e4e-c6c0-4994-907c-aae7c2cdbe36</uuid>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <name>instance-00000040</name>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1168981248</nova:name>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:20:37</nova:creationTime>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:user uuid="7fc7bde5e89f466d88e469ac1f35a435">tempest-ServerRescueNegativeTestJSON-1742140611-project-member</nova:user>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:project uuid="933c51626a49465db409069a1b3eb7be">tempest-ServerRescueNegativeTestJSON-1742140611</nova:project>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <nova:port uuid="1c553ce7-b95a-447b-9fed-01b378014028">
Nov 22 09:20:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <system>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="serial">094a1e4e-c6c0-4994-907c-aae7c2cdbe36</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="uuid">094a1e4e-c6c0-4994-907c-aae7c2cdbe36</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </system>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <os>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </os>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <features>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </features>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk">
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config">
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:62:09:c9"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <target dev="tap1c553ce7-b9"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/console.log" append="off"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <video>
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </video>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:20:40 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:20:40 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:20:40 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:20:40 compute-0 nova_compute[253661]: </domain>
Nov 22 09:20:40 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.318 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Preparing to wait for external event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.318 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG nova.virt.libvirt.vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1168981248',display_name='tempest-ServerRescueNegativeTestJSON-server-1168981248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1168981248',id=64,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-4kcyxfno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_use
r_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:34Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=094a1e4e-c6c0-4994-907c-aae7c2cdbe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.320 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.320 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.321 253665 DEBUG os_vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c553ce7-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c553ce7-b9, col_values=(('external_ids', {'iface-id': '1c553ce7-b95a-447b-9fed-01b378014028', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:09:c9', 'vm-uuid': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:40 compute-0 NetworkManager[48920]: <info>  [1763803240.3304] manager: (tap1c553ce7-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.337 253665 INFO os_vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9')
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No VIF found with MAC fa:16:3e:62:09:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.382 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Using config drive
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.404 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.475 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Successfully updated port: e043dc2b-6062-4cda-bf32-37ab692618c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.491 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.492 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquired lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.492 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:40 compute-0 rsyslogd[1005]: imjournal from <np0005532048:nova_compute>: begin to drop messages due to rate-limiting
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.573 253665 DEBUG nova.compute.manager [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-changed-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.574 253665 DEBUG nova.compute.manager [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Refreshing instance network info cache due to event network-changed-e043dc2b-6062-4cda-bf32-37ab692618c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.574 253665 DEBUG oslo_concurrency.lockutils [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.667 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:40 compute-0 ceph-mon[75021]: pgmap v1694: 305 pgs: 305 active+clean; 328 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.8 MiB/s wr, 177 op/s
Nov 22 09:20:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2519355073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.765 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Creating config drive at /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.770 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsujpy9ml execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.911 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsujpy9ml" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.935 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.939 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.995 253665 DEBUG nova.network.neutron [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Updated VIF entry in instance network info cache for port 1c553ce7-b95a-447b-9fed-01b378014028. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:20:40 compute-0 nova_compute[253661]: 2025-11-22 09:20:40.996 253665 DEBUG nova.network.neutron [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Updating instance_info_cache with network_info: [{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.010 253665 DEBUG oslo_concurrency.lockutils [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.368 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Updating instance_info_cache with network_info: [{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.385 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Releasing lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.385 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance network_info: |[{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.386 253665 DEBUG oslo_concurrency.lockutils [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.386 253665 DEBUG nova.network.neutron [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Refreshing network info cache for port e043dc2b-6062-4cda-bf32-37ab692618c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.390 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start _get_guest_xml network_info=[{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.395 253665 WARNING nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.405 253665 DEBUG nova.virt.libvirt.host [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.406 253665 DEBUG nova.virt.libvirt.host [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.410 253665 DEBUG nova.virt.libvirt.host [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.411 253665 DEBUG nova.virt.libvirt.host [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.411 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.411 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.412 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.412 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.412 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.412 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.413 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.413 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.413 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.413 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.413 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:20:41 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.414 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:20:41 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.418 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 328 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115753785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.889 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.951s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.890 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Deleting local config drive /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/disk.config because it was imported into RBD.
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.904 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.924 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.929 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:41 compute-0 NetworkManager[48920]: <info>  [1763803241.9376] manager: (tap1c553ce7-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 22 09:20:41 compute-0 kernel: tap1c553ce7-b9: entered promiscuous mode
Nov 22 09:20:41 compute-0 ovn_controller[152872]: 2025-11-22T09:20:41Z|00604|binding|INFO|Claiming lport 1c553ce7-b95a-447b-9fed-01b378014028 for this chassis.
Nov 22 09:20:41 compute-0 ovn_controller[152872]: 2025-11-22T09:20:41Z|00605|binding|INFO|1c553ce7-b95a-447b-9fed-01b378014028: Claiming fa:16:3e:62:09:c9 10.100.0.3
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.962 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:09:c9 10.100.0.3'], port_security=['fa:16:3e:62:09:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1c553ce7-b95a-447b-9fed-01b378014028) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.964 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1c553ce7-b95a-447b-9fed-01b378014028 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 bound to our chassis
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.965 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:20:41 compute-0 ovn_controller[152872]: 2025-11-22T09:20:41Z|00606|binding|INFO|Setting lport 1c553ce7-b95a-447b-9fed-01b378014028 ovn-installed in OVS
Nov 22 09:20:41 compute-0 ovn_controller[152872]: 2025-11-22T09:20:41Z|00607|binding|INFO|Setting lport 1c553ce7-b95a-447b-9fed-01b378014028 up in Southbound
Nov 22 09:20:41 compute-0 nova_compute[253661]: 2025-11-22 09:20:41.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:41 compute-0 systemd-udevd[321808]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64d50bb0-6efb-4cb2-b3d0-bed57438a320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.980 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap851968f3-21 in ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.982 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap851968f3-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.982 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[588cd56f-0b61-4b97-b27c-ad5b5bfb6b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a058c4-bac7-4ea6-b981-7c0402aae4d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:41 compute-0 systemd-machined[215941]: New machine qemu-74-instance-00000040.
Nov 22 09:20:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:41.994 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[84ec81b6-ab89-4a38-a36c-bd3ba4d3f411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:41 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000040.
Nov 22 09:20:41 compute-0 NetworkManager[48920]: <info>  [1763803241.9982] device (tap1c553ce7-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:41 compute-0 NetworkManager[48920]: <info>  [1763803241.9992] device (tap1c553ce7-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.011 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07f5e750-9a1a-4f60-bc0f-67f9bf11533c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.047 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c636dd-f35b-4a6b-b666-a56beb8c55c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 NetworkManager[48920]: <info>  [1763803242.0606] manager: (tap851968f3-20): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25fcca9c-a9c7-4f50-8fad-721573f2996d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.095 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c3b4fc-278c-42ae-aff1-fea0f0c2ba62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.098 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[53edd3c9-1ce3-44ab-a52d-5a83f3e313b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 NetworkManager[48920]: <info>  [1763803242.1241] device (tap851968f3-20): carrier: link connected
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.128 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[25c60b47-e4c6-4e72-b906-661e4fd6b275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5c55b8-9326-493b-95b7-e7b5b8983367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321860, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c17dce8-8781-4f23-9d72-3ddbb8482b59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:3fa0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609286, 'tstamp': 609286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321861, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.179 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[894ed044-32c6-4bca-984a-4d85b8715ce5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321862, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.210 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f13b3604-ea19-4f1a-a52f-73b454b6f30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9db1b6-731e-4582-bc6c-c000b41c4879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.270 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.270 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.270 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851968f3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 NetworkManager[48920]: <info>  [1763803242.2731] manager: (tap851968f3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.273 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 kernel: tap851968f3-20: entered promiscuous mode
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.276 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.276 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap851968f3-20, col_values=(('external_ids', {'iface-id': '2459339b-2f2c-469c-aa5d-2df42fbac2a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 ovn_controller[152872]: 2025-11-22T09:20:42Z|00608|binding|INFO|Releasing lport 2459339b-2f2c-469c-aa5d-2df42fbac2a2 from this chassis (sb_readonly=0)
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.296 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/851968f3-2dc6-498b-a08c-7b5b2c2383d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/851968f3-2dc6-498b-a08c-7b5b2c2383d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.297 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d063781-509b-418b-b744-6d5ae6217956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.297 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/851968f3-2dc6-498b-a08c-7b5b2c2383d4.pid.haproxy
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:20:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:42.298 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'env', 'PROCESS_TAG=haproxy-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/851968f3-2dc6-498b-a08c-7b5b2c2383d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:20:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:20:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973774302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.437 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.439 253665 DEBUG nova.virt.libvirt.vif [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1163743873',display_name='tempest-ServerRescueNegativeTestJSON-server-1163743873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1163743873',id=65,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-wwkpbtwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='te
mpest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:36Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.439 253665 DEBUG nova.network.os_vif_util [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.440 253665 DEBUG nova.network.os_vif_util [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.441 253665 DEBUG nova.objects.instance [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'pci_devices' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.458 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <uuid>f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</uuid>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <name>instance-00000041</name>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1163743873</nova:name>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:20:41</nova:creationTime>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:user uuid="7fc7bde5e89f466d88e469ac1f35a435">tempest-ServerRescueNegativeTestJSON-1742140611-project-member</nova:user>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:project uuid="933c51626a49465db409069a1b3eb7be">tempest-ServerRescueNegativeTestJSON-1742140611</nova:project>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <nova:port uuid="e043dc2b-6062-4cda-bf32-37ab692618c1">
Nov 22 09:20:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="serial">f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="uuid">f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk">
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config">
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:20:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c9:dc:4f"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <target dev="tape043dc2b-60"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/console.log" append="off"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:20:42 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:20:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:20:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:20:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:20:42 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.459 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Preparing to wait for external event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.459 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.459 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.459 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.460 253665 DEBUG nova.virt.libvirt.vif [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1163743873',display_name='tempest-ServerRescueNegativeTestJSON-server-1163743873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1163743873',id=65,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-wwkpbtwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:36Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.460 253665 DEBUG nova.network.os_vif_util [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.461 253665 DEBUG nova.network.os_vif_util [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.461 253665 DEBUG os_vif [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.462 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.462 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.466 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape043dc2b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.466 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape043dc2b-60, col_values=(('external_ids', {'iface-id': 'e043dc2b-6062-4cda-bf32-37ab692618c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:dc:4f', 'vm-uuid': 'f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 NetworkManager[48920]: <info>  [1763803242.4686] manager: (tape043dc2b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.476 253665 INFO os_vif [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60')
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.556 253665 DEBUG nova.network.neutron [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Updated VIF entry in instance network info cache for port e043dc2b-6062-4cda-bf32-37ab692618c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.556 253665 DEBUG nova.network.neutron [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Updating instance_info_cache with network_info: [{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.572 253665 DEBUG oslo_concurrency.lockutils [req-e9eef013-fe3e-46b6-9249-d1126a73cf42 req-ef6d94de-f233-4098-8fd9-2136d6ea6f84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.593 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.593 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.593 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No VIF found with MAC fa:16:3e:c9:dc:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.594 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Using config drive
Nov 22 09:20:42 compute-0 nova_compute[253661]: 2025-11-22 09:20:42.617 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:42 compute-0 podman[321918]: 2025-11-22 09:20:42.64324047 +0000 UTC m=+0.023435400 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:20:43 compute-0 ceph-mon[75021]: pgmap v1695: 305 pgs: 305 active+clean; 328 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Nov 22 09:20:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2115753785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2973774302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.013 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating config drive at /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.018 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5azfsie execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:43 compute-0 podman[321918]: 2025-11-22 09:20:43.065121181 +0000 UTC m=+0.445316081 container create f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.107 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803243.1068661, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.108 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Started (Lifecycle Event)
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.128 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.132 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803243.1069598, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Paused (Lifecycle Event)
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.154 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.158 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.159 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5azfsie" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.185 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.188 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:43 compute-0 nova_compute[253661]: 2025-11-22 09:20:43.232 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:43 compute-0 systemd[1]: Started libpod-conmon-f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7.scope.
Nov 22 09:20:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f5e444636f3cd34fb5378d22b59af7b7195ae7f4200a64bf369efd16fd461a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:20:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 7.8 MiB/s wr, 183 op/s
Nov 22 09:20:43 compute-0 podman[321918]: 2025-11-22 09:20:43.953152928 +0000 UTC m=+1.333347838 container init f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:20:43 compute-0 podman[321918]: 2025-11-22 09:20:43.961421849 +0000 UTC m=+1.341616749 container start f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:20:43 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [NOTICE]   (322016) : New worker (322018) forked
Nov 22 09:20:43 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [NOTICE]   (322016) : Loading success.
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.742 253665 DEBUG nova.compute.manager [req-00969399-fd0f-43d8-830f-b79e36ce91d9 req-d33fdda5-d72e-4f3d-878e-2cc450e1daaf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.742 253665 DEBUG oslo_concurrency.lockutils [req-00969399-fd0f-43d8-830f-b79e36ce91d9 req-d33fdda5-d72e-4f3d-878e-2cc450e1daaf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.742 253665 DEBUG oslo_concurrency.lockutils [req-00969399-fd0f-43d8-830f-b79e36ce91d9 req-d33fdda5-d72e-4f3d-878e-2cc450e1daaf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.743 253665 DEBUG oslo_concurrency.lockutils [req-00969399-fd0f-43d8-830f-b79e36ce91d9 req-d33fdda5-d72e-4f3d-878e-2cc450e1daaf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.743 253665 DEBUG nova.compute.manager [req-00969399-fd0f-43d8-830f-b79e36ce91d9 req-d33fdda5-d72e-4f3d-878e-2cc450e1daaf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Processing event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.744 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.748 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803244.7473319, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.748 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Resumed (Lifecycle Event)
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.750 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.754 253665 INFO nova.virt.libvirt.driver [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance spawned successfully.
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.754 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.767 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.775 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.779 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.779 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.780 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.780 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.780 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.781 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.799 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.855 253665 INFO nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Took 10.67 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:44 compute-0 nova_compute[253661]: 2025-11-22 09:20:44.855 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:45 compute-0 nova_compute[253661]: 2025-11-22 09:20:45.020 253665 INFO nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Took 11.73 seconds to build instance.
Nov 22 09:20:45 compute-0 nova_compute[253661]: 2025-11-22 09:20:45.036 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:45 compute-0 nova_compute[253661]: 2025-11-22 09:20:45.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 7.2 MiB/s wr, 167 op/s
Nov 22 09:20:45 compute-0 ceph-mon[75021]: pgmap v1696: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 7.8 MiB/s wr, 183 op/s
Nov 22 09:20:45 compute-0 nova_compute[253661]: 2025-11-22 09:20:45.831 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.833 253665 DEBUG nova.compute.manager [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.833 253665 DEBUG oslo_concurrency.lockutils [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.833 253665 DEBUG oslo_concurrency.lockutils [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.834 253665 DEBUG oslo_concurrency.lockutils [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.834 253665 DEBUG nova.compute.manager [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] No waiting events found dispatching network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:46 compute-0 nova_compute[253661]: 2025-11-22 09:20:46.834 253665 WARNING nova.compute.manager [req-afcf3e3f-4dd2-4132-a27e-bfc6bc4a5313 req-21105265-7ada-4a65-91d0-8366f83832f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received unexpected event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 for instance with vm_state active and task_state None.
Nov 22 09:20:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:47 compute-0 ceph-mon[75021]: pgmap v1697: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 7.2 MiB/s wr, 167 op/s
Nov 22 09:20:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 7.2 MiB/s wr, 172 op/s
Nov 22 09:20:47 compute-0 nova_compute[253661]: 2025-11-22 09:20:47.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:47 compute-0 nova_compute[253661]: 2025-11-22 09:20:47.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:47 compute-0 nova_compute[253661]: 2025-11-22 09:20:47.842 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:20:48 compute-0 ceph-mon[75021]: pgmap v1698: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 7.2 MiB/s wr, 172 op/s
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.377 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.377 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Deleting local config drive /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config because it was imported into RBD.
Nov 22 09:20:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 204 op/s
Nov 22 09:20:49 compute-0 NetworkManager[48920]: <info>  [1763803249.4632] manager: (tape043dc2b-60): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Nov 22 09:20:49 compute-0 kernel: tape043dc2b-60: entered promiscuous mode
Nov 22 09:20:49 compute-0 ovn_controller[152872]: 2025-11-22T09:20:49Z|00609|binding|INFO|Claiming lport e043dc2b-6062-4cda-bf32-37ab692618c1 for this chassis.
Nov 22 09:20:49 compute-0 ovn_controller[152872]: 2025-11-22T09:20:49Z|00610|binding|INFO|e043dc2b-6062-4cda-bf32-37ab692618c1: Claiming fa:16:3e:c9:dc:4f 10.100.0.13
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.475 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:dc:4f 10.100.0.13'], port_security=['fa:16:3e:c9:dc:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e043dc2b-6062-4cda-bf32-37ab692618c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.476 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e043dc2b-6062-4cda-bf32-37ab692618c1 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 bound to our chassis
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.479 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:20:49 compute-0 systemd-udevd[322042]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:49 compute-0 ovn_controller[152872]: 2025-11-22T09:20:49Z|00611|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 ovn-installed in OVS
Nov 22 09:20:49 compute-0 ovn_controller[152872]: 2025-11-22T09:20:49Z|00612|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 up in Southbound
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.506 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0890a2-e7df-42bb-a253-6f51a6fffb15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 NetworkManager[48920]: <info>  [1763803249.5092] device (tape043dc2b-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:20:49 compute-0 NetworkManager[48920]: <info>  [1763803249.5105] device (tape043dc2b-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:20:49 compute-0 systemd-machined[215941]: New machine qemu-75-instance-00000041.
Nov 22 09:20:49 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000041.
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.539 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99f80cb3-a32f-4fb7-add8-e4c1ed5e8e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.542 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[247ed516-f82d-4cd1-85e4-302cb3b1be67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.571 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8f8230-8cc1-4c2b-a17c-92a4d214ce89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47a7ebba-0062-4e1e-92a7-c58b6e0d1b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322057, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.618 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8021e0-6c51-40ac-9baf-df8211ba5679]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609297, 'tstamp': 609297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322058, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609300, 'tstamp': 609300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322058, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.619 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.625 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851968f3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.625 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.626 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap851968f3-20, col_values=(('external_ids', {'iface-id': '2459339b-2f2c-469c-aa5d-2df42fbac2a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:49.626 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.695 253665 DEBUG nova.compute.manager [req-a0593762-94c0-4ef5-92e9-843b6478405d req-45f544d2-3c56-4c77-91fe-5b9c96709f7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.695 253665 DEBUG oslo_concurrency.lockutils [req-a0593762-94c0-4ef5-92e9-843b6478405d req-45f544d2-3c56-4c77-91fe-5b9c96709f7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.695 253665 DEBUG oslo_concurrency.lockutils [req-a0593762-94c0-4ef5-92e9-843b6478405d req-45f544d2-3c56-4c77-91fe-5b9c96709f7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.696 253665 DEBUG oslo_concurrency.lockutils [req-a0593762-94c0-4ef5-92e9-843b6478405d req-45f544d2-3c56-4c77-91fe-5b9c96709f7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:49 compute-0 nova_compute[253661]: 2025-11-22 09:20:49.696 253665 DEBUG nova.compute.manager [req-a0593762-94c0-4ef5-92e9-843b6478405d req-45f544d2-3c56-4c77-91fe-5b9c96709f7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Processing event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.405 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.407 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.420 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.486 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.487 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.494 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.495 253665 INFO nova.compute.claims [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:20:50 compute-0 nova_compute[253661]: 2025-11-22 09:20:50.647 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:20:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261050212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.117 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.122 253665 DEBUG nova.compute.provider_tree [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.125 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803251.125411, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.126 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Started (Lifecycle Event)
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.129 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.132 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.135 253665 INFO nova.virt.libvirt.driver [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance spawned successfully.
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.136 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.145 253665 DEBUG nova.scheduler.client.report [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.150 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.155 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.160 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.161 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.162 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.162 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.163 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.163 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.171 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.172 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.174 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.175 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803251.1255994, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Paused (Lifecycle Event)
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.211 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.215 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803251.1324499, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.216 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Resumed (Lifecycle Event)
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.239 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.240 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.247 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.259 253665 INFO nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Took 14.62 seconds to spawn the instance on the hypervisor.
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.260 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.263 253665 INFO nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.286 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.346 253665 INFO nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Took 15.58 seconds to build instance.
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.373 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.395 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.396 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.397 253665 INFO nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Creating image(s)
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.417 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 115 op/s
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.439 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.463 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.467 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.536 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.538 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.538 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.539 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.564 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:20:51 compute-0 nova_compute[253661]: 2025-11-22 09:20:51.568 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b8ca3796-0135-4d73-96eb-c1534303c4a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:20:51 compute-0 ceph-mon[75021]: pgmap v1699: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 204 op/s
Nov 22 09:20:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3261050212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:20:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:20:52
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:20:52 compute-0 nova_compute[253661]: 2025-11-22 09:20:52.234 253665 DEBUG nova.policy [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '455fded60f87468985deee6f550c818d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3192624a4dbf42b1a2a7efd75234d9c8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:20:52 compute-0 nova_compute[253661]: 2025-11-22 09:20:52.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.323 253665 DEBUG nova.compute.manager [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.323 253665 DEBUG oslo_concurrency.lockutils [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.324 253665 DEBUG oslo_concurrency.lockutils [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.324 253665 DEBUG oslo_concurrency.lockutils [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.325 253665 DEBUG nova.compute.manager [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.325 253665 WARNING nova.compute.manager [req-c9762215-ccdd-4990-8289-447bf1c96800 req-2543cdd8-5bda-4ff2-9989-11a59b9c777a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state active and task_state None.
Nov 22 09:20:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Nov 22 09:20:53 compute-0 ceph-mon[75021]: pgmap v1700: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 115 op/s
Nov 22 09:20:53 compute-0 nova_compute[253661]: 2025-11-22 09:20:53.981 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Successfully created port: f06c1184-ae19-4aa6-80b0-9a1e029e76ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:20:54 compute-0 nova_compute[253661]: 2025-11-22 09:20:54.877 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance shutdown successfully after 30 seconds.
Nov 22 09:20:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:20:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:20:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:20:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:20:54 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.208 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Successfully updated port: f06c1184-ae19-4aa6-80b0-9a1e029e76ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.314 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.315 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquired lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.315 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.421 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 77 KiB/s wr, 112 op/s
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:20:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:20:55 compute-0 ceph-mon[75021]: pgmap v1701: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Nov 22 09:20:55 compute-0 nova_compute[253661]: 2025-11-22 09:20:55.885 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance shutdown successfully after 29 seconds.
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.107 253665 DEBUG nova.compute.manager [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-changed-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.108 253665 DEBUG nova.compute.manager [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Refreshing instance network info cache due to event network-changed-f06c1184-ae19-4aa6-80b0-9a1e029e76ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.108 253665 DEBUG oslo_concurrency.lockutils [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.345 253665 INFO nova.compute.manager [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Rescuing
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.346 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.346 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquired lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:56 compute-0 nova_compute[253661]: 2025-11-22 09:20:56.346 253665 DEBUG nova.network.neutron [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:20:56 compute-0 ceph-mon[75021]: pgmap v1702: 305 pgs: 305 active+clean; 374 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 77 KiB/s wr, 112 op/s
Nov 22 09:20:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:20:57 compute-0 kernel: tap27b3ab6b-d0 (unregistering): left promiscuous mode
Nov 22 09:20:57 compute-0 NetworkManager[48920]: <info>  [1763803257.2487] device (tap27b3ab6b-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.248 253665 DEBUG nova.network.neutron [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Updating instance_info_cache with network_info: [{"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00613|binding|INFO|Releasing lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 from this chassis (sb_readonly=0)
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00614|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 down in Southbound
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00615|binding|INFO|Removing iface tap27b3ab6b-d0 ovn-installed in OVS
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.269 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.302 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:ea:a6 10.100.0.14'], port_security=['fa:16:3e:84:ea:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aadc298c-a1ba-41ca-9015-0a4d08420487', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.303 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.305 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ea57bb-e316-405a-b99d-9c114e5e73e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.308 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore
Nov 22 09:20:57 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 22 09:20:57 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Consumed 15.209s CPU time.
Nov 22 09:20:57 compute-0 systemd-machined[215941]: Machine qemu-71-instance-0000003d terminated.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.333 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Releasing lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.334 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Instance network_info: |[{"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.334 253665 DEBUG oslo_concurrency.lockutils [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.335 253665 DEBUG nova.network.neutron [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Refreshing network info cache for port f06c1184-ae19-4aa6-80b0-9a1e029e76ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:20:57 compute-0 kernel: tap43cec84a-e6 (unregistering): left promiscuous mode
Nov 22 09:20:57 compute-0 NetworkManager[48920]: <info>  [1763803257.4142] device (tap43cec84a-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00616|binding|INFO|Releasing lport 43cec84a-e6cc-4492-8869-806f677f3026 from this chassis (sb_readonly=0)
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00617|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 down in Southbound
Nov 22 09:20:57 compute-0 ovn_controller[152872]: 2025-11-22T09:20:57Z|00618|binding|INFO|Removing iface tap43cec84a-e6 ovn-installed in OVS
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.436 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 388 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 578 KiB/s wr, 151 op/s
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.449 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b8ca3796-0135-4d73-96eb-c1534303c4a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:20:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:57.450 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:62:0d 10.100.0.10'], port_security=['fa:16:3e:29:62:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c62f4ce9-5b21-4154-83ce-fbb32299e500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43cec84a-e6cc-4492-8869-806f677f3026) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:57 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 22 09:20:57 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003f.scope: Consumed 14.526s CPU time.
Nov 22 09:20:57 compute-0 systemd-machined[215941]: Machine qemu-73-instance-0000003f terminated.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 systemd-udevd[322222]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:20:57 compute-0 NetworkManager[48920]: <info>  [1763803257.5002] manager: (tap27b3ab6b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.553 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] resizing rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : haproxy version is 2.8.14-c23fe91
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : path to executable is /usr/sbin/haproxy
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [WARNING]  (320710) : Exiting Master process...
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [WARNING]  (320710) : Exiting Master process...
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [ALERT]    (320710) : Current worker (320712) exited with code 143 (Terminated)
Nov 22 09:20:57 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [WARNING]  (320710) : All workers exited. Exiting... (0)
Nov 22 09:20:57 compute-0 systemd[1]: libpod-4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a.scope: Deactivated successfully.
Nov 22 09:20:57 compute-0 podman[322240]: 2025-11-22 09:20:57.663943618 +0000 UTC m=+0.260212814 container died 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:20:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:20:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7597 writes, 34K keys, 7597 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7597 writes, 7597 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1579 writes, 7403 keys, 1579 commit groups, 1.0 writes per commit group, ingest: 9.67 MB, 0.02 MB/s
                                           Interval WAL: 1579 writes, 1579 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     52.0      0.74              0.14        20    0.037       0      0       0.0       0.0
                                             L6      1/0    9.19 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     86.6     71.7      1.97              0.43        19    0.104     95K    10K       0.0       0.0
                                            Sum      1/0    9.19 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     63.0     66.3      2.71              0.57        39    0.070     95K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5     81.9     85.3      0.59              0.16        10    0.059     30K   3062       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     86.6     71.7      1.97              0.43        19    0.104     95K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     52.1      0.74              0.14        19    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 2.7 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 19.74 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00022 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1295,18.99 MB,6.24781%) FilterBlock(40,279.05 KB,0.0896404%) IndexBlock(40,483.83 KB,0.155424%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.969 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance destroyed successfully.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.972 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance destroyed successfully.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.982 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance destroyed successfully.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.983 253665 DEBUG nova.virt.libvirt.vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:22Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.984 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.985 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.985 253665 DEBUG os_vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.991 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27b3ab6b-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.995 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance destroyed successfully.
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.997 253665 DEBUG nova.virt.libvirt.vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-ServerActionsTestJSON-server-957521895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-proje
ct-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:23Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.997 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.998 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:20:57 compute-0 nova_compute[253661]: 2025-11-22 09:20:57.998 253665 DEBUG os_vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.002 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43cec84a-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.003 253665 INFO os_vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0')
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.028 253665 INFO os_vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6')
Nov 22 09:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a-userdata-shm.mount: Deactivated successfully.
Nov 22 09:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-165dbfb6280e5c3468bc4607011907ecf91b892441f109cec428a46150992cc3-merged.mount: Deactivated successfully.
Nov 22 09:20:58 compute-0 podman[322249]: 2025-11-22 09:20:58.42787032 +0000 UTC m=+0.985512218 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:20:58 compute-0 podman[322254]: 2025-11-22 09:20:58.429042549 +0000 UTC m=+0.986404819 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.467 253665 DEBUG nova.compute.manager [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.467 253665 DEBUG oslo_concurrency.lockutils [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.468 253665 DEBUG oslo_concurrency.lockutils [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.468 253665 DEBUG oslo_concurrency.lockutils [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.468 253665 DEBUG nova.compute.manager [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.468 253665 WARNING nova.compute.manager [req-4ff988ec-32bf-4039-a658-f077f26f367f req-289719fa-12f9-4ab9-a455-4645d12df8cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state rebuilding.
Nov 22 09:20:58 compute-0 podman[322240]: 2025-11-22 09:20:58.790306417 +0000 UTC m=+1.386575583 container cleanup 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:20:58 compute-0 systemd[1]: libpod-conmon-4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a.scope: Deactivated successfully.
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.938 253665 DEBUG nova.compute.manager [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.939 253665 DEBUG oslo_concurrency.lockutils [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.939 253665 DEBUG oslo_concurrency.lockutils [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.939 253665 DEBUG oslo_concurrency.lockutils [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.939 253665 DEBUG nova.compute.manager [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:20:58 compute-0 nova_compute[253661]: 2025-11-22 09:20:58.940 253665 WARNING nova.compute.manager [req-2286547f-5c82-461b-ac21-b16860c88d41 req-c2b697aa-1b62-4b71-bb9c-d0c83c376639 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state rebuilding.
Nov 22 09:20:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:20:59.143 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:20:59 compute-0 nova_compute[253661]: 2025-11-22 09:20:59.144 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:20:59 compute-0 nova_compute[253661]: 2025-11-22 09:20:59.350 253665 DEBUG nova.network.neutron [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Updating instance_info_cache with network_info: [{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:20:59 compute-0 nova_compute[253661]: 2025-11-22 09:20:59.395 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Releasing lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:20:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 420 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 22 09:20:59 compute-0 ceph-mon[75021]: pgmap v1703: 305 pgs: 305 active+clean; 388 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 578 KiB/s wr, 151 op/s
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.028 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.219 253665 DEBUG nova.network.neutron [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Updated VIF entry in instance network info cache for port f06c1184-ae19-4aa6-80b0-9a1e029e76ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.220 253665 DEBUG nova.network.neutron [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Updating instance_info_cache with network_info: [{"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.234 253665 DEBUG oslo_concurrency.lockutils [req-c1e65922-c34d-4622-a18f-eb0b35af01d2 req-b61297d5-6361-4635-8e90-2a92ca21bf5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b8ca3796-0135-4d73-96eb-c1534303c4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.530 253665 DEBUG nova.compute.manager [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.531 253665 DEBUG oslo_concurrency.lockutils [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.531 253665 DEBUG oslo_concurrency.lockutils [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.531 253665 DEBUG oslo_concurrency.lockutils [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.532 253665 DEBUG nova.compute.manager [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.532 253665 WARNING nova.compute.manager [req-32c23e3d-07af-4dc9-b817-03e4e7a79792 req-78a08340-bbcb-427c-bc81-e199a718c70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state rebuilding.
Nov 22 09:21:00 compute-0 podman[322420]: 2025-11-22 09:21:00.761632047 +0000 UTC m=+1.945747830 container remove 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.769 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84311168-5610-4fe7-8de1-9f883214d72e]: (4, ('Sat Nov 22 09:20:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a)\n4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a\nSat Nov 22 09:20:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a)\n4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.771 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[77bbe4c9-3f2e-4145-b326-c967fa14d82f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.772 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.773 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.779 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.781 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35cb0136-3790-4c28-8de6-848e2d68d0da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a55e2c6d-1194-4bbc-9b7c-321efe007562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12f8d004-1d42-4632-8166-a6b17007a865]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.826 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9cea3686-cddf-4de5-bf46-259dda71696f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606960, 'reachable_time': 27317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322434, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.833 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.833 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ee1a7-fc88-4b59-a802-ef0ec2d6c16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.835 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43cec84a-e6cc-4492-8869-806f677f3026 in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.836 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.856 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[579f8a0e-edf1-46e8-a2ca-76a37f712073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.888 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79f6c30f-3c38-44fa-a541-e1e0fdd5ac88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.891 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[223ef188-7b7d-4fbc-b720-129200095b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.925 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bf891b-6a64-4280-95e8-5f0ed421636b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d42ec96-bfb2-41a9-abda-9c27541cb43a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322444, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.957 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6343ee89-61a7-41bb-b928-0e5e069fdb25]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602336, 'tstamp': 602336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322445, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602340, 'tstamp': 602340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322445, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.959 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 nova_compute[253661]: 2025-11-22 09:21:00.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.965 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.965 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.966 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.966 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:00.967 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.026 253665 DEBUG nova.compute.manager [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.027 253665 DEBUG oslo_concurrency.lockutils [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.027 253665 DEBUG oslo_concurrency.lockutils [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.027 253665 DEBUG oslo_concurrency.lockutils [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.027 253665 DEBUG nova.compute.manager [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:01 compute-0 nova_compute[253661]: 2025-11-22 09:21:01.028 253665 WARNING nova.compute.manager [req-dc777141-e025-4246-a50c-a4f9184a28d2 req-7b55bb19-5b6e-4aa4-872b-da683ff11d80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state rebuilding.
Nov 22 09:21:01 compute-0 ceph-mon[75021]: pgmap v1704: 305 pgs: 305 active+clean; 420 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 22 09:21:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 420 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:21:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:02 compute-0 podman[322446]: 2025-11-22 09:21:02.395522038 +0000 UTC m=+0.090070370 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003322105757960539 of space, bias 1.0, pg target 0.9966317273881617 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:21:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:21:02 compute-0 ceph-mon[75021]: pgmap v1705: 305 pgs: 305 active+clean; 420 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.042 253665 DEBUG nova.objects.instance [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lazy-loading 'migration_context' on Instance uuid b8ca3796-0135-4d73-96eb-c1534303c4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.053 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.053 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Ensure instance console log exists: /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.054 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.054 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.054 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.056 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Start _get_guest_xml network_info=[{"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.061 253665 WARNING nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.067 253665 DEBUG nova.virt.libvirt.host [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.068 253665 DEBUG nova.virt.libvirt.host [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.071 253665 DEBUG nova.virt.libvirt.host [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.071 253665 DEBUG nova.virt.libvirt.host [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.072 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.072 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.073 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.073 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.073 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.073 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.073 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.074 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.074 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.074 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.074 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.075 253665 DEBUG nova.virt.hardware [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.077 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 128 op/s
Nov 22 09:21:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2456360437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.558 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.580 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:03 compute-0 nova_compute[253661]: 2025-11-22 09:21:03.585 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205554403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.709 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.711 253665 DEBUG nova.virt.libvirt.vif [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1854257035',display_name='tempest-InstanceActionsNegativeTestJSON-server-1854257035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1854257035',id=66,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3192624a4dbf42b1a2a7efd75234d9c8',ramdisk_id='',reservation_id='r-acrbxrtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-252675147',owner_us
er_name='tempest-InstanceActionsNegativeTestJSON-252675147-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:51Z,user_data=None,user_id='455fded60f87468985deee6f550c818d',uuid=b8ca3796-0135-4d73-96eb-c1534303c4a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.711 253665 DEBUG nova.network.os_vif_util [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converting VIF {"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.712 253665 DEBUG nova.network.os_vif_util [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.713 253665 DEBUG nova.objects.instance [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8ca3796-0135-4d73-96eb-c1534303c4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.728 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <uuid>b8ca3796-0135-4d73-96eb-c1534303c4a3</uuid>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <name>instance-00000042</name>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1854257035</nova:name>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:03</nova:creationTime>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:user uuid="455fded60f87468985deee6f550c818d">tempest-InstanceActionsNegativeTestJSON-252675147-project-member</nova:user>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:project uuid="3192624a4dbf42b1a2a7efd75234d9c8">tempest-InstanceActionsNegativeTestJSON-252675147</nova:project>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <nova:port uuid="f06c1184-ae19-4aa6-80b0-9a1e029e76ed">
Nov 22 09:21:04 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="serial">b8ca3796-0135-4d73-96eb-c1534303c4a3</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="uuid">b8ca3796-0135-4d73-96eb-c1534303c4a3</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b8ca3796-0135-4d73-96eb-c1534303c4a3_disk">
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config">
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:04 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:65:30:15"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <target dev="tapf06c1184-ae"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/console.log" append="off"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:04 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:04 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:04 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:04 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:04 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.730 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Preparing to wait for external event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.730 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.731 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.731 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.732 253665 DEBUG nova.virt.libvirt.vif [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1854257035',display_name='tempest-InstanceActionsNegativeTestJSON-server-1854257035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1854257035',id=66,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3192624a4dbf42b1a2a7efd75234d9c8',ramdisk_id='',reservation_id='r-acrbxrtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-252675147
',owner_user_name='tempest-InstanceActionsNegativeTestJSON-252675147-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:51Z,user_data=None,user_id='455fded60f87468985deee6f550c818d',uuid=b8ca3796-0135-4d73-96eb-c1534303c4a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.732 253665 DEBUG nova.network.os_vif_util [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converting VIF {"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.733 253665 DEBUG nova.network.os_vif_util [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.733 253665 DEBUG os_vif [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.734 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.735 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.737 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf06c1184-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.738 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf06c1184-ae, col_values=(('external_ids', {'iface-id': 'f06c1184-ae19-4aa6-80b0-9a1e029e76ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:30:15', 'vm-uuid': 'b8ca3796-0135-4d73-96eb-c1534303c4a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:04 compute-0 NetworkManager[48920]: <info>  [1763803264.7400] manager: (tapf06c1184-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:04 compute-0 nova_compute[253661]: 2025-11-22 09:21:04.744 253665 INFO os_vif [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae')
Nov 22 09:21:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2456360437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.067 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.068 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.068 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] No VIF found with MAC fa:16:3e:65:30:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.069 253665 INFO nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Using config drive
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.139 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.388 253665 INFO nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Creating config drive at /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.394 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbk7xknfc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 92 op/s
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.538 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbk7xknfc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.566 253665 DEBUG nova.storage.rbd_utils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] rbd image b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:05 compute-0 nova_compute[253661]: 2025-11-22 09:21:05.570 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:06 compute-0 ceph-mon[75021]: pgmap v1706: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 128 op/s
Nov 22 09:21:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/205554403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:07 compute-0 ceph-mon[75021]: pgmap v1707: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 92 op/s
Nov 22 09:21:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 100 op/s
Nov 22 09:21:07 compute-0 sudo[322615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:07 compute-0 sudo[322615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:07 compute-0 sudo[322615]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:07 compute-0 sudo[322640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:21:07 compute-0 sudo[322640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:07 compute-0 sudo[322640]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:07 compute-0 sudo[322665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:07 compute-0 sudo[322665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:07 compute-0 sudo[322665]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:07 compute-0 sudo[322690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:21:07 compute-0 sudo[322690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:08 compute-0 sudo[322690]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: pgmap v1708: 305 pgs: 305 active+clean; 433 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 100 op/s
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:08.969 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8610f172-45d6-40be-bf4b-9f2b413b4b74 does not exist
Nov 22 09:21:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0f0877ee-3638-4c72-a611-33317ff7864b does not exist
Nov 22 09:21:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 20c92ea7-a4ac-4fbb-9245-8fe3667d8002 does not exist
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:21:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:21:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.027 253665 DEBUG oslo_concurrency.processutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config b8ca3796-0135-4d73-96eb-c1534303c4a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.028 253665 INFO nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Deleting local config drive /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3/disk.config because it was imported into RBD.
Nov 22 09:21:09 compute-0 sudo[322746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:09 compute-0 sudo[322746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:09 compute-0 sudo[322746]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:09 compute-0 kernel: tapf06c1184-ae: entered promiscuous mode
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.0808] manager: (tapf06c1184-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 22 09:21:09 compute-0 ovn_controller[152872]: 2025-11-22T09:21:09Z|00619|binding|INFO|Claiming lport f06c1184-ae19-4aa6-80b0-9a1e029e76ed for this chassis.
Nov 22 09:21:09 compute-0 ovn_controller[152872]: 2025-11-22T09:21:09Z|00620|binding|INFO|f06c1184-ae19-4aa6-80b0-9a1e029e76ed: Claiming fa:16:3e:65:30:15 10.100.0.6
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.097 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:30:15 10.100.0.6'], port_security=['fa:16:3e:65:30:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8ca3796-0135-4d73-96eb-c1534303c4a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-034abe75-2438-4507-a76b-fa852f2f7d5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3192624a4dbf42b1a2a7efd75234d9c8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5948518c-3e0e-488d-bf2a-8b41e88a10c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4cf62-9f2c-4872-9250-4504ff07f1d4, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f06c1184-ae19-4aa6-80b0-9a1e029e76ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.099 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f06c1184-ae19-4aa6-80b0-9a1e029e76ed in datapath 034abe75-2438-4507-a76b-fa852f2f7d5a bound to our chassis
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.102 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 034abe75-2438-4507-a76b-fa852f2f7d5a
Nov 22 09:21:09 compute-0 ovn_controller[152872]: 2025-11-22T09:21:09Z|00621|binding|INFO|Setting lport f06c1184-ae19-4aa6-80b0-9a1e029e76ed ovn-installed in OVS
Nov 22 09:21:09 compute-0 ovn_controller[152872]: 2025-11-22T09:21:09Z|00622|binding|INFO|Setting lport f06c1184-ae19-4aa6-80b0-9a1e029e76ed up in Southbound
Nov 22 09:21:09 compute-0 sudo[322773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 sudo[322773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:09 compute-0 sudo[322773]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.118 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30776fec-48d0-4a45-83da-45155f707c30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.119 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap034abe75-21 in ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.122 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap034abe75-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.122 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[806a80a8-f2c4-4a9d-bae5-66fd29cd82c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0373397b-1259-48ab-b472-90046c53b3ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 systemd-udevd[322810]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:09 compute-0 systemd-machined[215941]: New machine qemu-76-instance-00000042.
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.139 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e313e787-fc31-41ff-8aea-795506215a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000042.
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.1474] device (tapf06c1184-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.1483] device (tapf06c1184-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8335f641-eeed-45fe-9dd7-7b372d8ecaaf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 sudo[322811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:09 compute-0 sudo[322811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:09 compute-0 sudo[322811]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.197 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18de16a6-ab69-4ce1-a665-1475f3c8c804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.204 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20e6511e-cf30-4b7b-9f4e-d51ce8d2646d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 systemd-udevd[322820]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.2112] manager: (tap034abe75-20): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.242 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ce56f3-52e6-4c66-8d00-ae3991d6d528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.245 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b92c3bee-5760-4dd8-b4ae-85a883a3e59b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 sudo[322846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:21:09 compute-0 sudo[322846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.2719] device (tap034abe75-20): carrier: link connected
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.278 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9e302a-e218-47d7-90f9-f28f58b7485e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0543ac70-91e8-4dd6-a7a8-3fa3aa58c138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap034abe75-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:54:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612001, 'reachable_time': 20221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322892, 'error': None, 'target': 'ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5755d2eb-4d52-4f87-88e5-b5b3f32d0cdd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:5468'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612001, 'tstamp': 612001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322893, 'error': None, 'target': 'ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b01cb1d8-3adb-4727-93d4-5453b14d8236]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap034abe75-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:54:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612001, 'reachable_time': 20221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322894, 'error': None, 'target': 'ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.397 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20071a72-08a2-4888-a7f3-8e4fe13100fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 393 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 3.5 MiB/s wr, 83 op/s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b53d2df5-527e-40d2-b624-db6b829c0cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.462 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap034abe75-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.462 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.462 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap034abe75-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 kernel: tap034abe75-20: entered promiscuous mode
Nov 22 09:21:09 compute-0 NetworkManager[48920]: <info>  [1763803269.4650] manager: (tap034abe75-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.468 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap034abe75-20, col_values=(('external_ids', {'iface-id': 'daf0c6f0-5fcf-4270-8722-cf1f9bf5fd77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 ovn_controller[152872]: 2025-11-22T09:21:09Z|00623|binding|INFO|Releasing lport daf0c6f0-5fcf-4270-8722-cf1f9bf5fd77 from this chassis (sb_readonly=0)
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.489 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/034abe75-2438-4507-a76b-fa852f2f7d5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/034abe75-2438-4507-a76b-fa852f2f7d5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.489 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12239890-ff61-4f16-99ee-2672c3ae8c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.490 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-034abe75-2438-4507-a76b-fa852f2f7d5a
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/034abe75-2438-4507-a76b-fa852f2f7d5a.pid.haproxy
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 034abe75-2438-4507-a76b-fa852f2f7d5a
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:21:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:09.491 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a', 'env', 'PROCESS_TAG=haproxy-034abe75-2438-4507-a76b-fa852f2f7d5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/034abe75-2438-4507-a76b-fa852f2f7d5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:21:09 compute-0 podman[322942]: 2025-11-22 09:21:09.605695734 +0000 UTC m=+0.021150706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:09 compute-0 nova_compute[253661]: 2025-11-22 09:21:09.740 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:09 compute-0 podman[322942]: 2025-11-22 09:21:09.754483499 +0000 UTC m=+0.169938451 container create fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:21:09 compute-0 systemd[1]: Started libpod-conmon-fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42.scope.
Nov 22 09:21:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:21:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:21:10 compute-0 podman[322942]: 2025-11-22 09:21:10.024600892 +0000 UTC m=+0.440055864 container init fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:21:10 compute-0 podman[322942]: 2025-11-22 09:21:10.032012412 +0000 UTC m=+0.447467354 container start fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:21:10 compute-0 strange_hopper[322975]: 167 167
Nov 22 09:21:10 compute-0 systemd[1]: libpod-fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42.scope: Deactivated successfully.
Nov 22 09:21:10 compute-0 conmon[322975]: conmon fb20a71208719a4ec7f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42.scope/container/memory.events
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.078 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:21:10 compute-0 podman[322942]: 2025-11-22 09:21:10.119661802 +0000 UTC m=+0.535116754 container attach fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:21:10 compute-0 podman[322942]: 2025-11-22 09:21:10.120673936 +0000 UTC m=+0.536128888 container died fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:21:10 compute-0 ovn_controller[152872]: 2025-11-22T09:21:10Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:09:c9 10.100.0.3
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:10 compute-0 ovn_controller[152872]: 2025-11-22T09:21:10Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:09:c9 10.100.0.3
Nov 22 09:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-beac365229058c1d21f5e295da3bee7633cb9043d7ac3407876ec94fec45abda-merged.mount: Deactivated successfully.
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.992 253665 DEBUG nova.compute.manager [req-1698c532-ad95-498f-ba25-02abc207aebd req-20373d31-6e40-4466-9e1b-3d912da82a82 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.992 253665 DEBUG oslo_concurrency.lockutils [req-1698c532-ad95-498f-ba25-02abc207aebd req-20373d31-6e40-4466-9e1b-3d912da82a82 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.992 253665 DEBUG oslo_concurrency.lockutils [req-1698c532-ad95-498f-ba25-02abc207aebd req-20373d31-6e40-4466-9e1b-3d912da82a82 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.993 253665 DEBUG oslo_concurrency.lockutils [req-1698c532-ad95-498f-ba25-02abc207aebd req-20373d31-6e40-4466-9e1b-3d912da82a82 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:10 compute-0 nova_compute[253661]: 2025-11-22 09:21:10.993 253665 DEBUG nova.compute.manager [req-1698c532-ad95-498f-ba25-02abc207aebd req-20373d31-6e40-4466-9e1b-3d912da82a82 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Processing event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.165 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.166 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803271.164962, b8ca3796-0135-4d73-96eb-c1534303c4a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.166 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] VM Started (Lifecycle Event)
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.169 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.173 253665 INFO nova.virt.libvirt.driver [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Instance spawned successfully.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.173 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:11 compute-0 podman[322942]: 2025-11-22 09:21:11.179945145 +0000 UTC m=+1.595400097 container remove fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.186 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.192 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.196 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.196 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.197 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.197 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.197 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.198 253665 DEBUG nova.virt.libvirt.driver [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:11 compute-0 systemd[1]: libpod-conmon-fb20a71208719a4ec7f48c16964cc63b787e5e7b8363b4529443fcf8e58d7b42.scope: Deactivated successfully.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.217 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.217 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803271.1651676, b8ca3796-0135-4d73-96eb-c1534303c4a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.218 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] VM Paused (Lifecycle Event)
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.242 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.245 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803271.169132, b8ca3796-0135-4d73-96eb-c1534303c4a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.246 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] VM Resumed (Lifecycle Event)
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.261 253665 INFO nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Took 19.87 seconds to spawn the instance on the hypervisor.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.261 253665 DEBUG nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.262 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.268 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.298 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.316 253665 INFO nova.compute.manager [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Took 20.86 seconds to build instance.
Nov 22 09:21:11 compute-0 nova_compute[253661]: 2025-11-22 09:21:11.338 253665 DEBUG oslo_concurrency.lockutils [None req-a3c34975-0a86-47d8-a436-f600d2d646f4 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:11 compute-0 ceph-mon[75021]: pgmap v1709: 305 pgs: 305 active+clean; 393 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 3.5 MiB/s wr, 83 op/s
Nov 22 09:21:11 compute-0 podman[322983]: 2025-11-22 09:21:11.284006893 +0000 UTC m=+1.358891019 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:21:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 393 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.2 MiB/s wr, 54 op/s
Nov 22 09:21:11 compute-0 podman[323061]: 2025-11-22 09:21:11.526201728 +0000 UTC m=+0.194952608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:11 compute-0 podman[322983]: 2025-11-22 09:21:11.584280579 +0000 UTC m=+1.659164685 container create f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:11 compute-0 podman[323061]: 2025-11-22 09:21:11.633729631 +0000 UTC m=+0.302480481 container create 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:21:11 compute-0 systemd[1]: Started libpod-conmon-f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516.scope.
Nov 22 09:21:11 compute-0 systemd[1]: Started libpod-conmon-4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f.scope.
Nov 22 09:21:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41ee3a1f015046d681450785c6137fe3684a98536e20f9b857e3aae117772e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:11 compute-0 podman[322983]: 2025-11-22 09:21:11.737116843 +0000 UTC m=+1.812000949 container init f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:21:11 compute-0 podman[322983]: 2025-11-22 09:21:11.749374091 +0000 UTC m=+1.824258197 container start f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:11 compute-0 podman[323061]: 2025-11-22 09:21:11.772291757 +0000 UTC m=+0.441042627 container init 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:21:11 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [NOTICE]   (323086) : New worker (323089) forked
Nov 22 09:21:11 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [NOTICE]   (323086) : Loading success.
Nov 22 09:21:11 compute-0 podman[323061]: 2025-11-22 09:21:11.784388982 +0000 UTC m=+0.453139832 container start 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:21:11 compute-0 podman[323061]: 2025-11-22 09:21:11.803836194 +0000 UTC m=+0.472587044 container attach 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:21:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:21:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2668969268' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:21:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:21:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2668969268' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.334 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deleting instance files /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487_del
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.335 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deletion of /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487_del complete
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.370 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deleting instance files /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858_del
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.370 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deletion of /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858_del complete
Nov 22 09:21:12 compute-0 ceph-mon[75021]: pgmap v1710: 305 pgs: 305 active+clean; 393 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.2 MiB/s wr, 54 op/s
Nov 22 09:21:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2668969268' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:21:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2668969268' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.498 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.499 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating image(s)
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.528 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.552 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.576 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.581 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.626 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803257.5195565, aadc298c-a1ba-41ca-9015-0a4d08420487 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.627 253665 INFO nova.compute.manager [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Stopped (Lifecycle Event)
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.630 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803257.5262578, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.631 253665 INFO nova.compute.manager [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Stopped (Lifecycle Event)
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.636 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.636 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating image(s)
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.673 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.708 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.734 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.739 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.782 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.784 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.785 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.785 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.810 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.813 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.856 253665 DEBUG nova.compute.manager [None req-152df207-9d3f-4112-b46d-4f1538ae19b8 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.858 253665 DEBUG nova.compute.manager [None req-0b648caf-3d0a-4ead-b51a-7f0c5f101599 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.859 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.860 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.861 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.863 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.897 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:12 compute-0 nova_compute[253661]: 2025-11-22 09:21:12.903 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:12 compute-0 distracted_elgamal[323082]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:21:12 compute-0 distracted_elgamal[323082]: --> relative data size: 1.0
Nov 22 09:21:12 compute-0 distracted_elgamal[323082]: --> All data devices are unavailable
Nov 22 09:21:13 compute-0 systemd[1]: libpod-4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f.scope: Deactivated successfully.
Nov 22 09:21:13 compute-0 systemd[1]: libpod-4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f.scope: Consumed 1.116s CPU time.
Nov 22 09:21:13 compute-0 podman[323061]: 2025-11-22 09:21:13.015101616 +0000 UTC m=+1.683852476 container died 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a9d1ed374ed8236b474d7ae1216c6f8db146de3d30b792662f3e8996bbfb57e-merged.mount: Deactivated successfully.
Nov 22 09:21:13 compute-0 podman[323061]: 2025-11-22 09:21:13.242268666 +0000 UTC m=+1.911019516 container remove 4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:21:13 compute-0 systemd[1]: libpod-conmon-4595d87b0201f724aac72c564a3381a815bd93ff397ceb42d4a3d2d7d1cdd15f.scope: Deactivated successfully.
Nov 22 09:21:13 compute-0 sudo[322846]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.331 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:13 compute-0 sudo[323323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:13 compute-0 sudo[323323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:13 compute-0 sudo[323323]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:13 compute-0 sudo[323364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:21:13 compute-0 sudo[323364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:13 compute-0 sudo[323364]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.433 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:21:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 331 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 521 KiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 22 09:21:13 compute-0 sudo[323409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:13 compute-0 sudo[323409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:13 compute-0 sudo[323409]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:13 compute-0 sudo[323452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:21:13 compute-0 sudo[323452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.633 253665 DEBUG nova.compute.manager [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.633 253665 DEBUG oslo_concurrency.lockutils [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.633 253665 DEBUG oslo_concurrency.lockutils [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.633 253665 DEBUG oslo_concurrency.lockutils [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.634 253665 DEBUG nova.compute.manager [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] No waiting events found dispatching network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.634 253665 WARNING nova.compute.manager [req-e4f9f23d-18bf-4091-bcd3-7cdf88caa659 req-e2168922-e0f9-4287-8d59-c275d3205970 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received unexpected event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed for instance with vm_state active and task_state None.
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.654 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.656 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.657 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.658 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.662 253665 INFO nova.compute.manager [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Terminating instance
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.667 253665 DEBUG nova.compute.manager [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:13 compute-0 podman[323516]: 2025-11-22 09:21:13.936040123 +0000 UTC m=+0.115547928 container create 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:21:13 compute-0 podman[323516]: 2025-11-22 09:21:13.845129814 +0000 UTC m=+0.024637649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:13 compute-0 kernel: tapf06c1184-ae (unregistering): left promiscuous mode
Nov 22 09:21:13 compute-0 NetworkManager[48920]: <info>  [1763803273.9643] device (tapf06c1184-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:13 compute-0 ovn_controller[152872]: 2025-11-22T09:21:13Z|00624|binding|INFO|Releasing lport f06c1184-ae19-4aa6-80b0-9a1e029e76ed from this chassis (sb_readonly=0)
Nov 22 09:21:13 compute-0 ovn_controller[152872]: 2025-11-22T09:21:13Z|00625|binding|INFO|Setting lport f06c1184-ae19-4aa6-80b0-9a1e029e76ed down in Southbound
Nov 22 09:21:13 compute-0 ovn_controller[152872]: 2025-11-22T09:21:13Z|00626|binding|INFO|Removing iface tapf06c1184-ae ovn-installed in OVS
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:13.983 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:30:15 10.100.0.6'], port_security=['fa:16:3e:65:30:15 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8ca3796-0135-4d73-96eb-c1534303c4a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-034abe75-2438-4507-a76b-fa852f2f7d5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3192624a4dbf42b1a2a7efd75234d9c8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5948518c-3e0e-488d-bf2a-8b41e88a10c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fe4cf62-9f2c-4872-9250-4504ff07f1d4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f06c1184-ae19-4aa6-80b0-9a1e029e76ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:13.985 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f06c1184-ae19-4aa6-80b0-9a1e029e76ed in datapath 034abe75-2438-4507-a76b-fa852f2f7d5a unbound from our chassis
Nov 22 09:21:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:13.987 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 034abe75-2438-4507-a76b-fa852f2f7d5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:13.989 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c200594c-aee4-4820-8895-de2fa3b5c294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:13.990 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a namespace which is not needed anymore
Nov 22 09:21:13 compute-0 systemd[1]: Started libpod-conmon-3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c.scope.
Nov 22 09:21:13 compute-0 nova_compute[253661]: 2025-11-22 09:21:13.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:14 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 22 09:21:14 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Consumed 3.460s CPU time.
Nov 22 09:21:14 compute-0 systemd-machined[215941]: Machine qemu-76-instance-00000042 terminated.
Nov 22 09:21:14 compute-0 ovn_controller[152872]: 2025-11-22T09:21:14Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:dc:4f 10.100.0.13
Nov 22 09:21:14 compute-0 ovn_controller[152872]: 2025-11-22T09:21:14Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:dc:4f 10.100.0.13
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.113 253665 INFO nova.virt.libvirt.driver [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Instance destroyed successfully.
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.113 253665 DEBUG nova.objects.instance [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lazy-loading 'resources' on Instance uuid b8ca3796-0135-4d73-96eb-c1534303c4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.125 253665 DEBUG nova.virt.libvirt.vif [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1854257035',display_name='tempest-InstanceActionsNegativeTestJSON-server-1854257035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1854257035',id=66,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3192624a4dbf42b1a2a7efd75234d9c8',ramdisk_id='',reservation_id='r-acrbxrtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-252675147',owner_user_name='tempest-InstanceActionsNegativeTestJSON-252675147-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:11Z,user_data=None,user_id='455fded60f87468985deee6f550c818d',uuid=b8ca3796-0135-4d73-96eb-c1534303c4a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.126 253665 DEBUG nova.network.os_vif_util [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converting VIF {"id": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "address": "fa:16:3e:65:30:15", "network": {"id": "034abe75-2438-4507-a76b-fa852f2f7d5a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1304826469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3192624a4dbf42b1a2a7efd75234d9c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf06c1184-ae", "ovs_interfaceid": "f06c1184-ae19-4aa6-80b0-9a1e029e76ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.127 253665 DEBUG nova.network.os_vif_util [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.127 253665 DEBUG os_vif [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.129 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf06c1184-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.133 253665 INFO os_vif [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:15,bridge_name='br-int',has_traffic_filtering=True,id=f06c1184-ae19-4aa6-80b0-9a1e029e76ed,network=Network(034abe75-2438-4507-a76b-fa852f2f7d5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf06c1184-ae')
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.178 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:14 compute-0 podman[323516]: 2025-11-22 09:21:14.187926153 +0000 UTC m=+0.367433988 container init 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:21:14 compute-0 podman[323516]: 2025-11-22 09:21:14.198130922 +0000 UTC m=+0.377638727 container start 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:21:14 compute-0 clever_lehmann[323534]: 167 167
Nov 22 09:21:14 compute-0 systemd[1]: libpod-3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c.scope: Deactivated successfully.
Nov 22 09:21:14 compute-0 conmon[323534]: conmon 3873822a925312c641d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c.scope/container/memory.events
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.250 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] resizing rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:21:14 compute-0 podman[323516]: 2025-11-22 09:21:14.265384346 +0000 UTC m=+0.444892151 container attach 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:21:14 compute-0 podman[323516]: 2025-11-22 09:21:14.266920353 +0000 UTC m=+0.446428178 container died 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9265ea37425de018b46b3816472333238fe9136ab6d61421feb0f4c54f09994e-merged.mount: Deactivated successfully.
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.531 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.532 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Ensure instance console log exists: /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.533 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.533 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.533 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.536 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start _get_guest_xml network_info=[{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.540 253665 WARNING nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.545 253665 DEBUG nova.virt.libvirt.host [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.546 253665 DEBUG nova.virt.libvirt.host [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.548 253665 DEBUG nova.virt.libvirt.host [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.549 253665 DEBUG nova.virt.libvirt.host [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.549 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.549 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.550 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.551 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.551 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.551 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.551 253665 DEBUG nova.virt.hardware [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.551 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'vcpu_model' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:14 compute-0 nova_compute[253661]: 2025-11-22 09:21:14.565 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:14 compute-0 ceph-mon[75021]: pgmap v1711: 305 pgs: 305 active+clean; 331 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 521 KiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 22 09:21:14 compute-0 podman[323516]: 2025-11-22 09:21:14.901137034 +0000 UTC m=+1.080644839 container remove 3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:21:14 compute-0 systemd[1]: libpod-conmon-3873822a925312c641d9bf8bf3edde5f05cb64ea4fe540d587adc2145798377c.scope: Deactivated successfully.
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [NOTICE]   (323086) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [NOTICE]   (323086) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [WARNING]  (323086) : Exiting Master process...
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [WARNING]  (323086) : Exiting Master process...
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [ALERT]    (323086) : Current worker (323089) exited with code 143 (Terminated)
Nov 22 09:21:15 compute-0 neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a[323077]: [WARNING]  (323086) : All workers exited. Exiting... (0)
Nov 22 09:21:15 compute-0 systemd[1]: libpod-f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516.scope: Deactivated successfully.
Nov 22 09:21:15 compute-0 podman[323695]: 2025-11-22 09:21:15.059388889 +0000 UTC m=+0.094761074 container died f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:21:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609720045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.178 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.201 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.206 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 345 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 194 op/s
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.505 253665 DEBUG nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-unplugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.505 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.506 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.506 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.506 253665 DEBUG nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] No waiting events found dispatching network-vif-unplugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.506 253665 DEBUG nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-unplugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.507 253665 DEBUG nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.507 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.507 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.507 253665 DEBUG oslo_concurrency.lockutils [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.507 253665 DEBUG nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] No waiting events found dispatching network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.508 253665 WARNING nova.compute.manager [req-14ab0102-9113-4683-b505-bf45c4e41025 req-8a9efb76-8451-458b-8f67-ba08f51d9dfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received unexpected event network-vif-plugged-f06c1184-ae19-4aa6-80b0-9a1e029e76ed for instance with vm_state active and task_state deleting.
Nov 22 09:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41ee3a1f015046d681450785c6137fe3684a98536e20f9b857e3aae117772e5-merged.mount: Deactivated successfully.
Nov 22 09:21:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.675 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.677 253665 DEBUG nova.virt.libvirt.vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:12Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.677 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.678 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.680 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <uuid>aadc298c-a1ba-41ca-9015-0a4d08420487</uuid>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <name>instance-0000003d</name>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-257400181</nova:name>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:14</nova:creationTime>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <nova:port uuid="27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643">
Nov 22 09:21:15 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="serial">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="uuid">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk">
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config">
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:84:ea:a6"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <target dev="tap27b3ab6b-d0"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log" append="off"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:15 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:15 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:15 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:15 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:15 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.680 253665 DEBUG nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Preparing to wait for external event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.681 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.681 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.681 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.682 253665 DEBUG nova.virt.libvirt.vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:12Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.682 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.682 253665 DEBUG nova.network.os_vif_util [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.683 253665 DEBUG os_vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.683 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.683 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.684 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.686 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27b3ab6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.687 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27b3ab6b-d0, col_values=(('external_ids', {'iface-id': '27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:ea:a6', 'vm-uuid': 'aadc298c-a1ba-41ca-9015-0a4d08420487'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:15 compute-0 NetworkManager[48920]: <info>  [1763803275.6899] manager: (tap27b3ab6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:15 compute-0 nova_compute[253661]: 2025-11-22 09:21:15.697 253665 INFO os_vif [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0')
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.077 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.078 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.078 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:84:ea:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.078 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Using config drive
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.098 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:16 compute-0 podman[323712]: 2025-11-22 09:21:16.052641913 +0000 UTC m=+0.996137495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.114 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'ec2_ids' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/609720045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2938521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.140 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'keypairs' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:16 compute-0 podman[323695]: 2025-11-22 09:21:16.332557755 +0000 UTC m=+1.367929940 container cleanup f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:21:16 compute-0 systemd[1]: libpod-conmon-f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516.scope: Deactivated successfully.
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.535 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating config drive at /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.544 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_xq21k8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.621 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.623 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Ensure instance console log exists: /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.624 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.626 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.626 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.631 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start _get_guest_xml network_info=[{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:16 compute-0 podman[323800]: 2025-11-22 09:21:16.634919261 +0000 UTC m=+0.281658704 container remove f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.639 253665 WARNING nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.644 253665 DEBUG nova.virt.libvirt.host [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.645 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07978e86-b6f2-45ea-86fd-dece7a31504f]: (4, ('Sat Nov 22 09:21:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a (f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516)\nf610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516\nSat Nov 22 09:21:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a (f610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516)\nf610f2612945110a6dffb8ab93417467533d8c9440044970d2d8de56cb87a516\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.646 253665 DEBUG nova.virt.libvirt.host [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.647 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57fc84d3-8141-4d48-a44b-1195b6d7857d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.650 253665 DEBUG nova.virt.libvirt.host [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap034abe75-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.651 253665 DEBUG nova.virt.libvirt.host [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.652 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.652 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.653 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.653 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.653 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.654 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.654 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.654 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.655 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.655 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.655 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.655 253665 DEBUG nova.virt.hardware [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.656 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:16 compute-0 kernel: tap034abe75-20: left promiscuous mode
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.678 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.684 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5f400d-6485-4e0b-bec2-a927c9e440a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.702 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb6ae2e-fdc1-4ecc-8170-7f5e3312b1b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bff71c71-e504-41cb-89e9-28bfa5d69635]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 podman[323712]: 2025-11-22 09:21:16.723957455 +0000 UTC m=+1.667453027 container create 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.727 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_xq21k8" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.728 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f75ff45c-8f38-4975-a4fb-369c3ef049cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611992, 'reachable_time': 19497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323844, 'error': None, 'target': 'ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d034abe75\x2d2438\x2d4507\x2da76b\x2dfa852f2f7d5a.mount: Deactivated successfully.
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.735 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-034abe75-2438-4507-a76b-fa852f2f7d5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:16.735 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf4d5a5-873c-45d7-95cf-555f7c976759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.773 253665 DEBUG nova.storage.rbd_utils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:16 compute-0 nova_compute[253661]: 2025-11-22 09:21:16.783 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:16 compute-0 systemd[1]: Started libpod-conmon-8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b.scope.
Nov 22 09:21:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e8c2f436e73051b34553cf34663b9421cbb9829a5356bfefb5a410ec9b8941/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e8c2f436e73051b34553cf34663b9421cbb9829a5356bfefb5a410ec9b8941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e8c2f436e73051b34553cf34663b9421cbb9829a5356bfefb5a410ec9b8941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e8c2f436e73051b34553cf34663b9421cbb9829a5356bfefb5a410ec9b8941/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:16 compute-0 podman[323712]: 2025-11-22 09:21:16.882687002 +0000 UTC m=+1.826182654 container init 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:21:16 compute-0 podman[323712]: 2025-11-22 09:21:16.893617508 +0000 UTC m=+1.837113070 container start 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:21:16 compute-0 podman[323712]: 2025-11-22 09:21:16.928501255 +0000 UTC m=+1.871996847 container attach 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 09:21:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558389927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.145 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.234 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.238 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:17 compute-0 ceph-mon[75021]: pgmap v1712: 305 pgs: 305 active+clean; 345 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 194 op/s
Nov 22 09:21:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2558389927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 390 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.5 MiB/s wr, 248 op/s
Nov 22 09:21:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178083485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.702 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.704 253665 DEBUG nova.virt.libvirt.vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-ServerActionsTestJSON-server-957521895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='
tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:12Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.705 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.706 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.708 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <uuid>d364f1c2-d606-448a-b3bd-00f1d5c1b858</uuid>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <name>instance-0000003f</name>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-957521895</nova:name>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:16</nova:creationTime>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <nova:port uuid="43cec84a-e6cc-4492-8869-806f677f3026">
Nov 22 09:21:17 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="serial">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="uuid">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk">
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config">
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:29:62:0d"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <target dev="tap43cec84a-e6"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log" append="off"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:17 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:17 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:17 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:17 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:17 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.710 253665 DEBUG nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Preparing to wait for external event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.710 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.710 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.711 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.711 253665 DEBUG nova.virt.libvirt.vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-ServerActionsTestJSON-server-957521895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:12Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.711 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.712 253665 DEBUG nova.network.os_vif_util [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.712 253665 DEBUG os_vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.713 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.714 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43cec84a-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43cec84a-e6, col_values=(('external_ids', {'iface-id': '43cec84a-e6cc-4492-8869-806f677f3026', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:62:0d', 'vm-uuid': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:17 compute-0 NetworkManager[48920]: <info>  [1763803277.7203] manager: (tap43cec84a-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.731 253665 INFO os_vif [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6')
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.869 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.870 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.870 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No VIF found with MAC fa:16:3e:29:62:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.871 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Using config drive
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.894 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.909 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:17 compute-0 nova_compute[253661]: 2025-11-22 09:21:17.943 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'keypairs' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:17 compute-0 zealous_albattani[323867]: {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     "0": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "devices": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "/dev/loop3"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             ],
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_name": "ceph_lv0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_size": "21470642176",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "name": "ceph_lv0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "tags": {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_name": "ceph",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.crush_device_class": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.encrypted": "0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_id": "0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.vdo": "0"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             },
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "vg_name": "ceph_vg0"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         }
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     ],
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     "1": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "devices": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "/dev/loop4"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             ],
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_name": "ceph_lv1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_size": "21470642176",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "name": "ceph_lv1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "tags": {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_name": "ceph",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.crush_device_class": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.encrypted": "0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_id": "1",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.vdo": "0"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             },
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "vg_name": "ceph_vg1"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         }
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     ],
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     "2": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "devices": [
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "/dev/loop5"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             ],
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_name": "ceph_lv2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_size": "21470642176",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "name": "ceph_lv2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "tags": {
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.cluster_name": "ceph",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.crush_device_class": "",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.encrypted": "0",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osd_id": "2",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:                 "ceph.vdo": "0"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             },
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "type": "block",
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:             "vg_name": "ceph_vg2"
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:         }
Nov 22 09:21:17 compute-0 zealous_albattani[323867]:     ]
Nov 22 09:21:17 compute-0 zealous_albattani[323867]: }
Nov 22 09:21:18 compute-0 systemd[1]: libpod-8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b.scope: Deactivated successfully.
Nov 22 09:21:18 compute-0 podman[323712]: 2025-11-22 09:21:18.003747142 +0000 UTC m=+2.947242714 container died 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:21:18 compute-0 nova_compute[253661]: 2025-11-22 09:21:18.359 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating config drive at /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config
Nov 22 09:21:18 compute-0 nova_compute[253661]: 2025-11-22 09:21:18.364 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73qof4w3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:18 compute-0 nova_compute[253661]: 2025-11-22 09:21:18.502 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73qof4w3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:18 compute-0 nova_compute[253661]: 2025-11-22 09:21:18.530 253665 DEBUG nova.storage.rbd_utils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:18 compute-0 nova_compute[253661]: 2025-11-22 09:21:18.533 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4178083485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e8c2f436e73051b34553cf34663b9421cbb9829a5356bfefb5a410ec9b8941-merged.mount: Deactivated successfully.
Nov 22 09:21:19 compute-0 podman[323712]: 2025-11-22 09:21:19.355712722 +0000 UTC m=+4.299208284 container remove 8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_albattani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:21:19 compute-0 systemd[1]: libpod-conmon-8d46f83981f83255919c058b3c11fd7210234db2508fca28f5e96c23c6f0889b.scope: Deactivated successfully.
Nov 22 09:21:19 compute-0 sudo[323452]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 400 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.4 MiB/s wr, 288 op/s
Nov 22 09:21:19 compute-0 sudo[324026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:19 compute-0 sudo[324026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:19 compute-0 sudo[324026]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:19 compute-0 sudo[324051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:21:19 compute-0 sudo[324051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:19 compute-0 sudo[324051]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:19 compute-0 sudo[324076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:19 compute-0 sudo[324076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:19 compute-0 sudo[324076]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:19 compute-0 ceph-mon[75021]: pgmap v1713: 305 pgs: 305 active+clean; 390 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.5 MiB/s wr, 248 op/s
Nov 22 09:21:19 compute-0 sudo[324101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:21:19 compute-0 sudo[324101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:19 compute-0 nova_compute[253661]: 2025-11-22 09:21:19.982 253665 DEBUG oslo_concurrency.processutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:19 compute-0 nova_compute[253661]: 2025-11-22 09:21:19.983 253665 INFO nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deleting local config drive /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config because it was imported into RBD.
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.0485] manager: (tap27b3ab6b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Nov 22 09:21:20 compute-0 kernel: tap27b3ab6b-d0: entered promiscuous mode
Nov 22 09:21:20 compute-0 ovn_controller[152872]: 2025-11-22T09:21:20Z|00627|binding|INFO|Claiming lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for this chassis.
Nov 22 09:21:20 compute-0 ovn_controller[152872]: 2025-11-22T09:21:20Z|00628|binding|INFO|27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643: Claiming fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 ovn_controller[152872]: 2025-11-22T09:21:20Z|00629|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 ovn-installed in OVS
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 systemd-udevd[324201]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:20 compute-0 systemd-machined[215941]: New machine qemu-77-instance-0000003d.
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.1068] device (tap27b3ab6b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.1084] device (tap27b3ab6b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:20 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-0000003d.
Nov 22 09:21:20 compute-0 podman[324170]: 2025-11-22 09:21:20.046141788 +0000 UTC m=+0.027881098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:20 compute-0 ovn_controller[152872]: 2025-11-22T09:21:20Z|00630|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 up in Southbound
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.187 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:ea:a6 10.100.0.14'], port_security=['fa:16:3e:84:ea:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aadc298c-a1ba-41ca-9015-0a4d08420487', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.188 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.191 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.204 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2586a034-f905-45e7-a3ec-8aab7cdc7cc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.206 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.208 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc903243-29ad-470c-bcb1-3c4ced4537be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d5cbcc-e216-4d7f-885d-ab96e8fbc2c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.224 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[69a73500-e037-416c-8e26-ebcd4090f03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e27f161-2515-4cfe-95b5-72518d4af309]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.293 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b4666a13-e297-431a-b957-4462479bdd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9ab9a8-5bd6-44a7-af4f-9501430d7650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.3006] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.341 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99820c5b-6bf9-4e54-a23a-9f7bd1ac364d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.345 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f45eb1-8e22-431c-b307-7f5a4a776338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.3769] device (tap01d1bce2-e0): carrier: link connected
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.385 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0600d738-d052-4362-941d-a5b88d86455e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.402 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a379097e-826b-422b-8efe-3f9145faca32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613111, 'reachable_time': 38457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324237, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0949caa-4f1b-4bf0-bc00-ed01a496be61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613111, 'tstamp': 613111}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324238, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81947b1d-941a-4ff2-8d46-03647065a72d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613111, 'reachable_time': 38457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324239, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.470 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3b1751-2612-46ee-830b-d2bcbb5a8013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.548 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78cd8034-d536-4636-ac19-11be4f51e5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:20 compute-0 NetworkManager[48920]: <info>  [1763803280.5538] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 ovn_controller[152872]: 2025-11-22T09:21:20Z|00631|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:21:20 compute-0 podman[324170]: 2025-11-22 09:21:20.570454439 +0000 UTC m=+0.552193739 container create 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 nova_compute[253661]: 2025-11-22 09:21:20.586 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.587 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.588 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6d4d1f-16e9-42cf-8d59-b2986803fe75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.590 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:21:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:20.590 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:21:21 compute-0 systemd[1]: Started libpod-conmon-10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7.scope.
Nov 22 09:21:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.094 253665 DEBUG nova.compute.manager [req-1337b7d0-3dc4-4445-b232-d679586a3be2 req-fb77e3e0-0bea-4bf7-acf8-db6d7a600560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.095 253665 DEBUG oslo_concurrency.lockutils [req-1337b7d0-3dc4-4445-b232-d679586a3be2 req-fb77e3e0-0bea-4bf7-acf8-db6d7a600560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.096 253665 DEBUG oslo_concurrency.lockutils [req-1337b7d0-3dc4-4445-b232-d679586a3be2 req-fb77e3e0-0bea-4bf7-acf8-db6d7a600560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.096 253665 DEBUG oslo_concurrency.lockutils [req-1337b7d0-3dc4-4445-b232-d679586a3be2 req-fb77e3e0-0bea-4bf7-acf8-db6d7a600560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.096 253665 DEBUG nova.compute.manager [req-1337b7d0-3dc4-4445-b232-d679586a3be2 req-fb77e3e0-0bea-4bf7-acf8-db6d7a600560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Processing event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.147 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:21:21 compute-0 podman[324293]: 2025-11-22 09:21:21.072841806 +0000 UTC m=+0.030628256 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:21:21 compute-0 podman[324170]: 2025-11-22 09:21:21.346928945 +0000 UTC m=+1.328668295 container init 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:21:21 compute-0 ceph-mon[75021]: pgmap v1714: 305 pgs: 305 active+clean; 400 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.4 MiB/s wr, 288 op/s
Nov 22 09:21:21 compute-0 podman[324170]: 2025-11-22 09:21:21.359024229 +0000 UTC m=+1.340763509 container start 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:21:21 compute-0 crazy_dirac[324294]: 167 167
Nov 22 09:21:21 compute-0 systemd[1]: libpod-10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7.scope: Deactivated successfully.
Nov 22 09:21:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 400 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.6 MiB/s wr, 266 op/s
Nov 22 09:21:21 compute-0 podman[324170]: 2025-11-22 09:21:21.540191171 +0000 UTC m=+1.521930491 container attach 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:21:21 compute-0 podman[324170]: 2025-11-22 09:21:21.541107233 +0000 UTC m=+1.522846543 container died 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.650 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803281.6495075, aadc298c-a1ba-41ca-9015-0a4d08420487 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.650 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Started (Lifecycle Event)
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.652 253665 DEBUG nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.655 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.658 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance spawned successfully.
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.659 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.664 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.671 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.676 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.676 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.677 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.677 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.677 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.678 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.695 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.695 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803281.6496367, aadc298c-a1ba-41ca-9015-0a4d08420487 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.695 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Paused (Lifecycle Event)
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.715 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.718 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803281.6546388, aadc298c-a1ba-41ca-9015-0a4d08420487 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.718 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Resumed (Lifecycle Event)
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.736 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.740 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.761 253665 DEBUG nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.762 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.841 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.842 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.842 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:21:21 compute-0 nova_compute[253661]: 2025-11-22 09:21:21.897 253665 DEBUG oslo_concurrency.lockutils [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d8660a0ae3773b820a18e12f509eba4d570f2a0ee5bef03c411310fb217b29f-merged.mount: Deactivated successfully.
Nov 22 09:21:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:22 compute-0 ceph-mon[75021]: pgmap v1715: 305 pgs: 305 active+clean; 400 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.6 MiB/s wr, 266 op/s
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.488 253665 DEBUG oslo_concurrency.processutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.489 253665 INFO nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deleting local config drive /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config because it was imported into RBD.
Nov 22 09:21:22 compute-0 kernel: tap43cec84a-e6: entered promiscuous mode
Nov 22 09:21:22 compute-0 NetworkManager[48920]: <info>  [1763803282.5664] manager: (tap43cec84a-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Nov 22 09:21:22 compute-0 systemd-udevd[324226]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:22 compute-0 ovn_controller[152872]: 2025-11-22T09:21:22Z|00632|binding|INFO|Claiming lport 43cec84a-e6cc-4492-8869-806f677f3026 for this chassis.
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:22 compute-0 ovn_controller[152872]: 2025-11-22T09:21:22Z|00633|binding|INFO|43cec84a-e6cc-4492-8869-806f677f3026: Claiming fa:16:3e:29:62:0d 10.100.0.10
Nov 22 09:21:22 compute-0 podman[324170]: 2025-11-22 09:21:22.576333027 +0000 UTC m=+2.558072307 container remove 10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:21:22 compute-0 NetworkManager[48920]: <info>  [1763803282.5868] device (tap43cec84a-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:22 compute-0 NetworkManager[48920]: <info>  [1763803282.5875] device (tap43cec84a-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:22 compute-0 ovn_controller[152872]: 2025-11-22T09:21:22Z|00634|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 ovn-installed in OVS
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:22 compute-0 systemd-machined[215941]: New machine qemu-78-instance-0000003f.
Nov 22 09:21:22 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-0000003f.
Nov 22 09:21:22 compute-0 podman[324293]: 2025-11-22 09:21:22.643790577 +0000 UTC m=+1.601576997 container create 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:22 compute-0 nova_compute[253661]: 2025-11-22 09:21:22.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:22 compute-0 ovn_controller[152872]: 2025-11-22T09:21:22Z|00635|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 up in Southbound
Nov 22 09:21:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:22.733 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:62:0d 10.100.0.10'], port_security=['fa:16:3e:29:62:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c62f4ce9-5b21-4154-83ce-fbb32299e500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43cec84a-e6cc-4492-8869-806f677f3026) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:22 compute-0 systemd[1]: Started libpod-conmon-303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a.scope.
Nov 22 09:21:22 compute-0 systemd[1]: libpod-conmon-10778f8af536b36c422818b659a07794aa7b39d1f03b0f0e27d8e61ecfff16f7.scope: Deactivated successfully.
Nov 22 09:21:22 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 22 09:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f21b1e6b707a81583128beb0337fc6958028d4746633aa5c12c1a3e42e992288/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:22 compute-0 podman[324293]: 2025-11-22 09:21:22.867885212 +0000 UTC m=+1.825671652 container init 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:21:22 compute-0 podman[324293]: 2025-11-22 09:21:22.874517873 +0000 UTC m=+1.832304293 container start 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:21:22 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [NOTICE]   (324395) : New worker (324397) forked
Nov 22 09:21:22 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [NOTICE]   (324395) : Loading success.
Nov 22 09:21:22 compute-0 podman[324382]: 2025-11-22 09:21:22.948247865 +0000 UTC m=+0.160134862 container create 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:21:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:22.976 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43cec84a-e6cc-4492-8869-806f677f3026 in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:22.978 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:22.993 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bff35eb6-676d-4ba0-874f-b727defb318b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:22 compute-0 systemd[1]: Started libpod-conmon-811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d.scope.
Nov 22 09:21:23 compute-0 podman[324382]: 2025-11-22 09:21:22.922262253 +0000 UTC m=+0.134149270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:21:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.028 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e4be9351-b291-48fb-a5c2-ad5625bfbc53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4cfb173eb130d483ee9cf113355dd102d71bae7a760d70f28a47900b5b7366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.032 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8b23a7b2-d5f6-42d2-b737-35b5a50d13b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4cfb173eb130d483ee9cf113355dd102d71bae7a760d70f28a47900b5b7366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4cfb173eb130d483ee9cf113355dd102d71bae7a760d70f28a47900b5b7366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4cfb173eb130d483ee9cf113355dd102d71bae7a760d70f28a47900b5b7366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.074 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0673d4-67b8-4419-ac52-bee9b107f170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.094 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8736f-778d-4a88-b058-e5952b34c1d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324454, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.111 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a901c3cb-c03e-4a28-87c2-ae3abf212a30]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602336, 'tstamp': 602336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324455, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602340, 'tstamp': 602340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324455, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.113 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.118 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.118 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.118 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:23.119 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:23 compute-0 podman[324382]: 2025-11-22 09:21:23.125263395 +0000 UTC m=+0.337150402 container init 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:21:23 compute-0 podman[324382]: 2025-11-22 09:21:23.133803134 +0000 UTC m=+0.345690131 container start 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:21:23 compute-0 podman[324382]: 2025-11-22 09:21:23.171446288 +0000 UTC m=+0.383333315 container attach 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.282 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803283.2817929, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.282 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Started (Lifecycle Event)
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.298 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.303 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803283.2835975, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.304 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Paused (Lifecycle Event)
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.320 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.324 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.340 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:21:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 374 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 283 op/s
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.466 253665 DEBUG nova.compute.manager [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.467 253665 DEBUG oslo_concurrency.lockutils [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.467 253665 DEBUG oslo_concurrency.lockutils [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.467 253665 DEBUG oslo_concurrency.lockutils [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.468 253665 DEBUG nova.compute.manager [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.468 253665 WARNING nova.compute.manager [req-de5ee441-5ee0-42f8-8751-54c34db4131d req-f4d7cca7-c229-409b-ba53-7e079b980766 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state None.
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.522 253665 DEBUG nova.compute.manager [req-cff88072-7fea-4803-8130-8d8b87140da7 req-0cc0094d-2df5-40b2-b389-2c69a8add388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.523 253665 DEBUG oslo_concurrency.lockutils [req-cff88072-7fea-4803-8130-8d8b87140da7 req-0cc0094d-2df5-40b2-b389-2c69a8add388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.523 253665 DEBUG oslo_concurrency.lockutils [req-cff88072-7fea-4803-8130-8d8b87140da7 req-0cc0094d-2df5-40b2-b389-2c69a8add388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.523 253665 DEBUG oslo_concurrency.lockutils [req-cff88072-7fea-4803-8130-8d8b87140da7 req-0cc0094d-2df5-40b2-b389-2c69a8add388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.524 253665 DEBUG nova.compute.manager [req-cff88072-7fea-4803-8130-8d8b87140da7 req-0cc0094d-2df5-40b2-b389-2c69a8add388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Processing event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.525 253665 DEBUG nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.528 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803283.5282605, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.528 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Resumed (Lifecycle Event)
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.531 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.534 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance spawned successfully.
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.535 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.557 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.562 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.562 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.563 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.563 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.563 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.564 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.568 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.591 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.732 253665 DEBUG nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.830 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.831 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.831 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:21:23 compute-0 nova_compute[253661]: 2025-11-22 09:21:23.880 253665 DEBUG oslo_concurrency.lockutils [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:24 compute-0 clever_payne[324429]: {
Nov 22 09:21:24 compute-0 clever_payne[324429]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_id": 1,
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "type": "bluestore"
Nov 22 09:21:24 compute-0 clever_payne[324429]:     },
Nov 22 09:21:24 compute-0 clever_payne[324429]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_id": 0,
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "type": "bluestore"
Nov 22 09:21:24 compute-0 clever_payne[324429]:     },
Nov 22 09:21:24 compute-0 clever_payne[324429]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_id": 2,
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:21:24 compute-0 clever_payne[324429]:         "type": "bluestore"
Nov 22 09:21:24 compute-0 clever_payne[324429]:     }
Nov 22 09:21:24 compute-0 clever_payne[324429]: }
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:24 compute-0 systemd[1]: libpod-811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d.scope: Deactivated successfully.
Nov 22 09:21:24 compute-0 systemd[1]: libpod-811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d.scope: Consumed 1.071s CPU time.
Nov 22 09:21:24 compute-0 podman[324382]: 2025-11-22 09:21:24.235147574 +0000 UTC m=+1.447034591 container died 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4cfb173eb130d483ee9cf113355dd102d71bae7a760d70f28a47900b5b7366-merged.mount: Deactivated successfully.
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.541 253665 INFO nova.virt.libvirt.driver [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Deleting instance files /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3_del
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.541 253665 INFO nova.virt.libvirt.driver [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Deletion of /var/lib/nova/instances/b8ca3796-0135-4d73-96eb-c1534303c4a3_del complete
Nov 22 09:21:24 compute-0 ceph-mon[75021]: pgmap v1716: 305 pgs: 305 active+clean; 374 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 283 op/s
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.656 253665 INFO nova.compute.manager [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Took 10.99 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.657 253665 DEBUG oslo.service.loopingcall [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.657 253665 DEBUG nova.compute.manager [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:24 compute-0 nova_compute[253661]: 2025-11-22 09:21:24.658 253665 DEBUG nova.network.neutron [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:24 compute-0 podman[324382]: 2025-11-22 09:21:24.775449573 +0000 UTC m=+1.987336570 container remove 811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:21:24 compute-0 systemd[1]: libpod-conmon-811bc2cc1cad403453b89671057352bf8a1b4085a5f662850fe0e6dd6a51569d.scope: Deactivated successfully.
Nov 22 09:21:24 compute-0 sudo[324101]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:21:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:21:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b9b3e44b-7ef5-464f-b2e2-8c00837b1d03 does not exist
Nov 22 09:21:25 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f9be3399-b81c-451b-afe2-a21d8b1181d5 does not exist
Nov 22 09:21:25 compute-0 sudo[324505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:21:25 compute-0 sudo[324505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:25 compute-0 sudo[324505]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:25 compute-0 sudo[324530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:21:25 compute-0 sudo[324530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:21:25 compute-0 sudo[324530]: pam_unix(sudo:session): session closed for user root
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.200 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.200 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.200 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.201 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.201 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.202 253665 INFO nova.compute.manager [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Terminating instance
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.203 253665 DEBUG nova.compute.manager [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.412 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.413 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.413 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.413 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:25 compute-0 kernel: tap27b3ab6b-d0 (unregistering): left promiscuous mode
Nov 22 09:21:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 374 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.1 MiB/s wr, 250 op/s
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.471 253665 DEBUG nova.network.neutron [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:25 compute-0 NetworkManager[48920]: <info>  [1763803285.4724] device (tap27b3ab6b-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00636|binding|INFO|Releasing lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 from this chassis (sb_readonly=0)
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00637|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 down in Southbound
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00638|binding|INFO|Removing iface tap27b3ab6b-d0 ovn-installed in OVS
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.517 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:ea:a6 10.100.0.14'], port_security=['fa:16:3e:84:ea:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aadc298c-a1ba-41ca-9015-0a4d08420487', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.518 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.520 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5acc98e0-df44-4b4c-9930-6cf11d5e1d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.522 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore
Nov 22 09:21:25 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 22 09:21:25 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003d.scope: Consumed 3.990s CPU time.
Nov 22 09:21:25 compute-0 systemd-machined[215941]: Machine qemu-77-instance-0000003d terminated.
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.536 253665 INFO nova.compute.manager [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Took 0.88 seconds to deallocate network for instance.
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.596 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.597 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.628 253665 DEBUG nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.638 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance destroyed successfully.
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.639 253665 DEBUG nova.objects.instance [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.645 253665 DEBUG nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.645 253665 DEBUG nova.compute.provider_tree [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.658 253665 DEBUG nova.virt.libvirt.vif [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:21Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.659 253665 DEBUG nova.network.os_vif_util [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.659 253665 DEBUG nova.network.os_vif_util [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.660 253665 DEBUG os_vif [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.662 253665 DEBUG nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.664 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27b3ab6b-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.669 253665 DEBUG nova.compute.manager [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.669 253665 DEBUG oslo_concurrency.lockutils [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.669 253665 DEBUG oslo_concurrency.lockutils [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.669 253665 DEBUG oslo_concurrency.lockutils [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.670 253665 DEBUG nova.compute.manager [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.670 253665 WARNING nova.compute.manager [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state None.
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.670 253665 DEBUG nova.compute.manager [req-c00450b9-1f90-4b52-9297-86359b7f44c1 req-b887e75d-f5f8-4dec-9e37-9ed6f63ffc78 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Received event network-vif-deleted-f06c1184-ae19-4aa6-80b0-9a1e029e76ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.674 253665 INFO os_vif [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0')
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.690 253665 DEBUG nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.800 253665 DEBUG oslo_concurrency.processutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [NOTICE]   (324395) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [NOTICE]   (324395) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [WARNING]  (324395) : Exiting Master process...
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [WARNING]  (324395) : Exiting Master process...
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [ALERT]    (324395) : Current worker (324397) exited with code 143 (Terminated)
Nov 22 09:21:25 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[324375]: [WARNING]  (324395) : All workers exited. Exiting... (0)
Nov 22 09:21:25 compute-0 systemd[1]: libpod-303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a.scope: Deactivated successfully.
Nov 22 09:21:25 compute-0 podman[324577]: 2025-11-22 09:21:25.83262934 +0000 UTC m=+0.192723274 container died 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:21:25 compute-0 kernel: tape043dc2b-60 (unregistering): left promiscuous mode
Nov 22 09:21:25 compute-0 NetworkManager[48920]: <info>  [1763803285.8470] device (tape043dc2b-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00639|binding|INFO|Releasing lport e043dc2b-6062-4cda-bf32-37ab692618c1 from this chassis (sb_readonly=0)
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00640|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 down in Southbound
Nov 22 09:21:25 compute-0 ovn_controller[152872]: 2025-11-22T09:21:25Z|00641|binding|INFO|Removing iface tape043dc2b-60 ovn-installed in OVS
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 nova_compute[253661]: 2025-11-22 09:21:25.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:25 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 22 09:21:25 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000041.scope: Consumed 17.762s CPU time.
Nov 22 09:21:25 compute-0 systemd-machined[215941]: Machine qemu-75-instance-00000041 terminated.
Nov 22 09:21:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:25.933 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:dc:4f 10.100.0.13'], port_security=['fa:16:3e:c9:dc:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e043dc2b-6062-4cda-bf32-37ab692618c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.135 253665 DEBUG nova.compute.manager [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.136 253665 DEBUG oslo_concurrency.lockutils [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.136 253665 DEBUG oslo_concurrency.lockutils [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.136 253665 DEBUG oslo_concurrency.lockutils [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.136 253665 DEBUG nova.compute.manager [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.137 253665 DEBUG nova.compute.manager [req-ddb8a778-1fd1-423d-9b03-356d00847ce9 req-f0b3a835-090c-4c54-9551-e65dad43560f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-unplugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.169 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance shutdown successfully after 26 seconds.
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.175 253665 INFO nova.virt.libvirt.driver [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance destroyed successfully.
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.175 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'numa_topology' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.188 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Attempting rescue
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.189 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.192 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.192 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating image(s)
Nov 22 09:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f21b1e6b707a81583128beb0337fc6958028d4746633aa5c12c1a3e42e992288-merged.mount: Deactivated successfully.
Nov 22 09:21:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164557130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.605 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.608 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'trusted_certs' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.614 253665 DEBUG oslo_concurrency.processutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.619 253665 DEBUG nova.compute.provider_tree [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.641 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.664 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.668 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.702 253665 DEBUG nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:26 compute-0 podman[324577]: 2025-11-22 09:21:26.72772869 +0000 UTC m=+1.087822624 container cleanup 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:21:26 compute-0 systemd[1]: libpod-conmon-303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a.scope: Deactivated successfully.
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.745 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.746 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.746 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.747 253665 DEBUG oslo_concurrency.lockutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.769 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.774 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.809 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:26 compute-0 nova_compute[253661]: 2025-11-22 09:21:26.887 253665 INFO nova.scheduler.client.report [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Deleted allocations for instance b8ca3796-0135-4d73-96eb-c1534303c4a3
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.002 253665 DEBUG oslo_concurrency.lockutils [None req-0a1a09d6-2616-4362-adb5-d75cbb23ffe0 455fded60f87468985deee6f550c818d 3192624a4dbf42b1a2a7efd75234d9c8 - - default default] Lock "b8ca3796-0135-4d73-96eb-c1534303c4a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:27 compute-0 podman[324723]: 2025-11-22 09:21:27.151015866 +0000 UTC m=+0.397596412 container remove 303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:21:27 compute-0 ceph-mon[75021]: pgmap v1717: 305 pgs: 305 active+clean; 374 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.1 MiB/s wr, 250 op/s
Nov 22 09:21:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3164557130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54b6be07-4082-4485-a1b9-a91e9e03a247]: (4, ('Sat Nov 22 09:21:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a)\n303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a\nSat Nov 22 09:21:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a)\n303e21b208b61c87a83bf83664e5ac53ffdd05a0da0bcc768b07ef4574fe2a8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.161 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2bbf4a-6e19-4f58-aa94-2283bc08c444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.162 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.164 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:27 compute-0 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.188 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63d7f6cc-326c-47c2-acdb-96191bcb29e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.212 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14314169-833f-446f-b949-07e703997472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[266f14d2-386f-406c-ae60-9811d097315c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.234 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54a76567-cc16-44c6-9122-6047f4429501]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613102, 'reachable_time': 18759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324775, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.245 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.245 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbc9295-b7c2-45e5-a31e-c6facef1919e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.246 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e043dc2b-6062-4cda-bf32-37ab692618c1 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 unbound from our chassis
Nov 22 09:21:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.247 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.263 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c73b4917-bd68-4a18-8acc-30221aac02d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.291 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6278aca2-83e1-4d4c-b3d6-5776901b2239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.294 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee3f2a7-c5ed-48ee-be60-bbd99bd9c9ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.319 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[be04f9c8-e05c-4e6e-b1a5-c7a1b711d1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.337 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35411665-11aa-4a32-bead-75eb166420cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324788, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3dd7fa-fa3d-4117-87d5-d8329398e4f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609297, 'tstamp': 609297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324789, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609300, 'tstamp': 609300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324789, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.354 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.366 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851968f3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.367 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.367 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap851968f3-20, col_values=(('external_ids', {'iface-id': '2459339b-2f2c-469c-aa5d-2df42fbac2a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.368 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.402 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.413 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.414 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:21:27 compute-0 nova_compute[253661]: 2025-11-22 09:21:27.414 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 218 op/s
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.964 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.965 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:27.965 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.238 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.238 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.249 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.249 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.282 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.283 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.283 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.283 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.283 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.285 253665 INFO nova.compute.manager [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Terminating instance
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.286 253665 DEBUG nova.compute.manager [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.287 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.317 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.317 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.318 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.318 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.318 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.318 253665 WARNING nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state deleting.
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.318 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 WARNING nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state active and task_state rescuing.
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.319 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.320 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.320 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.320 253665 DEBUG oslo_concurrency.lockutils [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.320 253665 DEBUG nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.320 253665 WARNING nova.compute.manager [req-e48151dc-504e-4ca4-ab0d-3e9e78e6fa61 req-b5890d9e-b4f8-4e94-a204-518797dd0c4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state active and task_state rescuing.
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.430 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.430 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.437 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.437 253665 INFO nova.compute.claims [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:21:28 compute-0 ceph-mon[75021]: pgmap v1718: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 218 op/s
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.656 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543602415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:28 compute-0 kernel: tap43cec84a-e6 (unregistering): left promiscuous mode
Nov 22 09:21:28 compute-0 NetworkManager[48920]: <info>  [1763803288.7825] device (tap43cec84a-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 ovn_controller[152872]: 2025-11-22T09:21:28Z|00642|binding|INFO|Releasing lport 43cec84a-e6cc-4492-8869-806f677f3026 from this chassis (sb_readonly=0)
Nov 22 09:21:28 compute-0 ovn_controller[152872]: 2025-11-22T09:21:28Z|00643|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 down in Southbound
Nov 22 09:21:28 compute-0 ovn_controller[152872]: 2025-11-22T09:21:28Z|00644|binding|INFO|Removing iface tap43cec84a-e6 ovn-installed in OVS
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 22 09:21:28 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000003f.scope: Consumed 5.148s CPU time.
Nov 22 09:21:28 compute-0 systemd-machined[215941]: Machine qemu-78-instance-0000003f terminated.
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.843 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:62:0d 10.100.0.10'], port_security=['fa:16:3e:29:62:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c62f4ce9-5b21-4154-83ce-fbb32299e500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43cec84a-e6cc-4492-8869-806f677f3026) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.844 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43cec84a-e6cc-4492-8869-806f677f3026 in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.846 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[afc3834e-285b-4a49-9291-175ec923577f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 podman[324838]: 2025-11-22 09:21:28.882026746 +0000 UTC m=+0.074733747 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 09:21:28 compute-0 podman[324818]: 2025-11-22 09:21:28.888871582 +0000 UTC m=+0.081464790 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.903 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[73225e71-24af-415a-8161-1b21b47b57b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.910 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7674b402-d2b1-4ede-a39c-04da94a4033f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.920 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.921 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'migration_context' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.934 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance destroyed successfully.
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.935 253665 DEBUG nova.objects.instance [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.938 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.939 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start _get_guest_xml network_info=[{"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "vif_mac": "fa:16:3e:c9:dc:4f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.939 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'resources' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.944 253665 DEBUG nova.virt.libvirt.vif [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-ServerActionsTestJSON-server-957521895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:23Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.945 253665 DEBUG nova.network.os_vif_util [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.946 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0198445-b291-4579-9964-00a6aa8aebd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.947 253665 DEBUG nova.network.os_vif_util [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.947 253665 DEBUG os_vif [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.950 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43cec84a-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.959 253665 INFO os_vif [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6')
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5dccfb6a-7c5f-45b4-962f-fff25b5a3b83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324893, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.981 253665 WARNING nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0e31a0-299d-4532-b5be-b817f41d13d1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602336, 'tstamp': 602336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324902, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602340, 'tstamp': 602340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324902, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.984 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.989 253665 DEBUG nova.virt.libvirt.host [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.990 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.990 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.991 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:28.991 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.992 253665 DEBUG nova.virt.libvirt.host [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.996 253665 DEBUG nova.virt.libvirt.host [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.997 253665 DEBUG nova.virt.libvirt.host [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.997 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.997 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.998 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.998 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.998 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.998 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.999 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.999 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.999 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.999 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:28 compute-0 nova_compute[253661]: 2025-11-22 09:21:28.999 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.000 253665 DEBUG nova.virt.hardware [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.000 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'vcpu_model' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.015 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.068 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.068 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.074 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.074 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.078 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.078 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.110 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803274.109717, b8ca3796-0135-4d73-96eb-c1534303c4a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.111 253665 INFO nova.compute.manager [-] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] VM Stopped (Lifecycle Event)
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.127 253665 DEBUG nova.compute.manager [None req-38024636-85a6-4654-b503-69928fc12d08 - - - - - -] [instance: b8ca3796-0135-4d73-96eb-c1534303c4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643777277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.164 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.170 253665 DEBUG nova.compute.provider_tree [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.182 253665 DEBUG nova.scheduler.client.report [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.199 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.200 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.246 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.246 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.262 253665 INFO nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.278 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.309 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.310 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.809661865234375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.311 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.311 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.354 253665 DEBUG nova.compute.manager [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.354 253665 DEBUG oslo_concurrency.lockutils [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.354 253665 DEBUG oslo_concurrency.lockutils [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.355 253665 DEBUG oslo_concurrency.lockutils [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.355 253665 DEBUG nova.compute.manager [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.355 253665 DEBUG nova.compute.manager [req-b7aa1e5d-703e-447c-9f21-1de1063369e6 req-75e13bbe-e3eb-4ea3-8bed-d0e8382624c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-unplugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.384 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.385 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.385 253665 INFO nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Creating image(s)
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.407 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.429 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.449 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.453 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 385 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.498 253665 DEBUG nova.policy [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5352d2182544454aab03bd4a74160247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:21:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994324727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.519 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.519 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance aadc298c-a1ba-41ca-9015-0a4d08420487 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.519 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d364f1c2-d606-448a-b3bd-00f1d5c1b858 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 90d94c7c-4c60-4842-b18c-1eecc3997b15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.524 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.525 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.563 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.564 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.565 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.565 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.592 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.595 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1543602415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/643777277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3994324727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.719 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767678411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.972 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:29 compute-0 nova_compute[253661]: 2025-11-22 09:21:29.973 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666720092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.182 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Successfully created port: 196bfb3e-6b8e-4174-8aca-2c419d527946 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.192 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.197 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.209 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.231 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.231 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137424596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.440 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.441 253665 DEBUG nova.virt.libvirt.vif [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1163743873',display_name='tempest-ServerRescueNegativeTestJSON-server-1163743873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1163743873',id=65,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-wwkpbtwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:51Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "vif_mac": "fa:16:3e:c9:dc:4f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.442 253665 DEBUG nova.network.os_vif_util [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "vif_mac": "fa:16:3e:c9:dc:4f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.443 253665 DEBUG nova.network.os_vif_util [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.444 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'pci_devices' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.460 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <uuid>f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</uuid>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <name>instance-00000041</name>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1163743873</nova:name>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:28</nova:creationTime>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:user uuid="7fc7bde5e89f466d88e469ac1f35a435">tempest-ServerRescueNegativeTestJSON-1742140611-project-member</nova:user>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:project uuid="933c51626a49465db409069a1b3eb7be">tempest-ServerRescueNegativeTestJSON-1742140611</nova:project>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <nova:port uuid="e043dc2b-6062-4cda-bf32-37ab692618c1">
Nov 22 09:21:30 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="serial">f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="uuid">f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.rescue">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <target dev="vdb" bus="virtio"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config.rescue">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:30 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c9:dc:4f"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <target dev="tape043dc2b-60"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/console.log" append="off"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:30 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:30 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:30 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:30 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:30 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.468 253665 INFO nova.virt.libvirt.driver [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance destroyed successfully.
Nov 22 09:21:30 compute-0 ovn_controller[152872]: 2025-11-22T09:21:30Z|00645|binding|INFO|Releasing lport 2459339b-2f2c-469c-aa5d-2df42fbac2a2 from this chassis (sb_readonly=0)
Nov 22 09:21:30 compute-0 ovn_controller[152872]: 2025-11-22T09:21:30Z|00646|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.568 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.568 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.569 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.569 253665 DEBUG nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No VIF found with MAC fa:16:3e:c9:dc:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.569 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Using config drive
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.590 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.613 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'ec2_ids' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.637 253665 DEBUG nova.objects.instance [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'keypairs' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.811 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Successfully updated port: 196bfb3e-6b8e-4174-8aca-2c419d527946 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.825 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.826 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquired lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.826 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:21:30 compute-0 ceph-mon[75021]: pgmap v1719: 305 pgs: 305 active+clean; 385 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Nov 22 09:21:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1767678411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3666720092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3137424596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.885 253665 DEBUG nova.compute.manager [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-changed-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.886 253665 DEBUG nova.compute.manager [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Refreshing instance network info cache due to event network-changed-196bfb3e-6b8e-4174-8aca-2c419d527946. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:21:30 compute-0 nova_compute[253661]: 2025-11-22 09:21:30.886 253665 DEBUG oslo_concurrency.lockutils [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.235 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.236 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.281 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating config drive at /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.287 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptxeqs_j0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.358 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.425 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptxeqs_j0" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 385 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 826 KiB/s wr, 182 op/s
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.907 253665 DEBUG nova.storage.rbd_utils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.910 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.953 253665 DEBUG nova.compute.manager [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.953 253665 DEBUG oslo_concurrency.lockutils [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.954 253665 DEBUG oslo_concurrency.lockutils [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.954 253665 DEBUG oslo_concurrency.lockutils [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.954 253665 DEBUG nova.compute.manager [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:31 compute-0 nova_compute[253661]: 2025-11-22 09:21:31.954 253665 WARNING nova.compute.manager [req-01a99f1c-9230-4ca1-9961-584be0d8c50c req-8816e8e3-c658-49a8-84d7-a8cf75c384c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state deleting.
Nov 22 09:21:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.544 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.948s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.601 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.728 253665 DEBUG nova.network.neutron [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Updating instance_info_cache with network_info: [{"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.744 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Releasing lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.745 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Instance network_info: |[{"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.745 253665 DEBUG oslo_concurrency.lockutils [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:32 compute-0 nova_compute[253661]: 2025-11-22 09:21:32.746 253665 DEBUG nova.network.neutron [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Refreshing network info cache for port 196bfb3e-6b8e-4174-8aca-2c419d527946 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:21:33 compute-0 ceph-mon[75021]: pgmap v1720: 305 pgs: 305 active+clean; 385 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 826 KiB/s wr, 182 op/s
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.117 253665 DEBUG oslo_concurrency.processutils [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.117 253665 INFO nova.virt.libvirt.driver [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Deleting local config drive /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/disk.config.rescue because it was imported into RBD.
Nov 22 09:21:33 compute-0 kernel: tape043dc2b-60: entered promiscuous mode
Nov 22 09:21:33 compute-0 NetworkManager[48920]: <info>  [1763803293.1625] manager: (tape043dc2b-60): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:33 compute-0 ovn_controller[152872]: 2025-11-22T09:21:33Z|00647|binding|INFO|Claiming lport e043dc2b-6062-4cda-bf32-37ab692618c1 for this chassis.
Nov 22 09:21:33 compute-0 ovn_controller[152872]: 2025-11-22T09:21:33Z|00648|binding|INFO|e043dc2b-6062-4cda-bf32-37ab692618c1: Claiming fa:16:3e:c9:dc:4f 10.100.0.13
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.170 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:dc:4f 10.100.0.13'], port_security=['fa:16:3e:c9:dc:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '5', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e043dc2b-6062-4cda-bf32-37ab692618c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.171 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e043dc2b-6062-4cda-bf32-37ab692618c1 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 bound to our chassis
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.172 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:21:33 compute-0 ovn_controller[152872]: 2025-11-22T09:21:33Z|00649|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 ovn-installed in OVS
Nov 22 09:21:33 compute-0 ovn_controller[152872]: 2025-11-22T09:21:33Z|00650|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 up in Southbound
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.184 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[92704db0-7c4b-4d6e-bbd3-32537014fc08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 systemd-machined[215941]: New machine qemu-79-instance-00000041.
Nov 22 09:21:33 compute-0 systemd-udevd[325238]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:33 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000041.
Nov 22 09:21:33 compute-0 NetworkManager[48920]: <info>  [1763803293.2253] device (tape043dc2b-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.224 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b444d6c9-1a64-4e2a-bee9-66d5b646f6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 NetworkManager[48920]: <info>  [1763803293.2281] device (tape043dc2b-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.228 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e22577c9-8f7d-4999-ba94-3bc2fd124e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.257 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb79eac-9294-4873-a954-6aa8cd7130f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.275 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a91eb6-5578-4cbf-9dab-9832481b5181]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325254, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2249a4-3775-4975-808b-b09767637749]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609297, 'tstamp': 609297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325262, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609300, 'tstamp': 609300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325262, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.295 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.298 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851968f3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.299 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.300 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap851968f3-20, col_values=(('external_ids', {'iface-id': '2459339b-2f2c-469c-aa5d-2df42fbac2a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:33.300 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:33 compute-0 podman[325223]: 2025-11-22 09:21:33.302413414 +0000 UTC m=+0.108669512 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:21:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 397 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.546 253665 DEBUG nova.compute.manager [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.546 253665 DEBUG oslo_concurrency.lockutils [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.546 253665 DEBUG oslo_concurrency.lockutils [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.546 253665 DEBUG oslo_concurrency.lockutils [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.547 253665 DEBUG nova.compute.manager [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.547 253665 WARNING nova.compute.manager [req-690f1206-3e52-4c54-aefa-712f44f355ae req-e4d22782-d1aa-4c32-a7b3-9a6af1bd4577 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state active and task_state rescuing.
Nov 22 09:21:33 compute-0 nova_compute[253661]: 2025-11-22 09:21:33.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:34 compute-0 nova_compute[253661]: 2025-11-22 09:21:34.233 253665 DEBUG nova.network.neutron [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Updated VIF entry in instance network info cache for port 196bfb3e-6b8e-4174-8aca-2c419d527946. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:21:34 compute-0 nova_compute[253661]: 2025-11-22 09:21:34.233 253665 DEBUG nova.network.neutron [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Updating instance_info_cache with network_info: [{"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:34 compute-0 nova_compute[253661]: 2025-11-22 09:21:34.247 253665 DEBUG oslo_concurrency.lockutils [req-a3cb0da7-0573-496e-8bdb-2b050f45a5e8 req-b8157342-8e90-4e9a-8c9d-8251d867b892 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-90d94c7c-4c60-4842-b18c-1eecc3997b15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.046 253665 DEBUG nova.objects.instance [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid 90d94c7c-4c60-4842-b18c-1eecc3997b15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.055 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.055 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Ensure instance console log exists: /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.056 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.056 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.056 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.058 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Start _get_guest_xml network_info=[{"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.064 253665 WARNING nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.069 253665 DEBUG nova.virt.libvirt.host [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.069 253665 DEBUG nova.virt.libvirt.host [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.072 253665 DEBUG nova.virt.libvirt.host [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.073 253665 DEBUG nova.virt.libvirt.host [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.074 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.074 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.075 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.075 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.075 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.076 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.076 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.076 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.076 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.076 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.077 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.077 253665 DEBUG nova.virt.hardware [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.079 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:21:35 compute-0 ceph-mon[75021]: pgmap v1721: 305 pgs: 305 active+clean; 397 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.437 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.438 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803295.4367526, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.438 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Resumed (Lifecycle Event)
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.444 253665 DEBUG nova.compute.manager [None req-6750c311-6b65-4a5b-aefb-882e1d05bbf8 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.455 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 396 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 221 op/s
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.459 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.483 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.483 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803295.439137, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.484 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Started (Lifecycle Event)
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.505 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.511 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501432190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.594 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.616 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.635 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.676 253665 DEBUG nova.compute.manager [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.677 253665 DEBUG oslo_concurrency.lockutils [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.677 253665 DEBUG oslo_concurrency.lockutils [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.678 253665 DEBUG oslo_concurrency.lockutils [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.678 253665 DEBUG nova.compute.manager [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:35 compute-0 nova_compute[253661]: 2025-11-22 09:21:35.678 253665 WARNING nova.compute.manager [req-20d8ac3d-a0b4-4cf5-9697-9cc07d4d11e1 req-3f4c99f1-1f58-4a16-a1ca-0a6c44680620 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state rescued and task_state None.
Nov 22 09:21:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589252117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.128 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.130 253665 DEBUG nova.virt.libvirt.vif [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-633701782',display_name='tempest-ServerDiskConfigTestJSON-server-633701782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-633701782',id=67,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-sq06q5gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:29Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=90d94c7c-4c60-4842-b18c-1eecc3997b15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.131 253665 DEBUG nova.network.os_vif_util [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.132 253665 DEBUG nova.network.os_vif_util [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.134 253665 DEBUG nova.objects.instance [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 90d94c7c-4c60-4842-b18c-1eecc3997b15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.146 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <uuid>90d94c7c-4c60-4842-b18c-1eecc3997b15</uuid>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <name>instance-00000043</name>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-633701782</nova:name>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:35</nova:creationTime>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <nova:port uuid="196bfb3e-6b8e-4174-8aca-2c419d527946">
Nov 22 09:21:36 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="serial">90d94c7c-4c60-4842-b18c-1eecc3997b15</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="uuid">90d94c7c-4c60-4842-b18c-1eecc3997b15</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/90d94c7c-4c60-4842-b18c-1eecc3997b15_disk">
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config">
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:64:f0:36"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <target dev="tap196bfb3e-6b"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/console.log" append="off"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:36 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:36 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:36 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:36 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:36 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.152 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Preparing to wait for external event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.152 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.153 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.153 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.154 253665 DEBUG nova.virt.libvirt.vif [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-633701782',display_name='tempest-ServerDiskConfigTestJSON-server-633701782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-633701782',id=67,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-sq06q5gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:29Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=90d94c7c-4c60-4842-b18c-1eecc3997b15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.154 253665 DEBUG nova.network.os_vif_util [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.155 253665 DEBUG nova.network.os_vif_util [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.155 253665 DEBUG os_vif [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.156 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.157 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.157 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.161 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap196bfb3e-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.162 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap196bfb3e-6b, col_values=(('external_ids', {'iface-id': '196bfb3e-6b8e-4174-8aca-2c419d527946', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:f0:36', 'vm-uuid': '90d94c7c-4c60-4842-b18c-1eecc3997b15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:36 compute-0 NetworkManager[48920]: <info>  [1763803296.1643] manager: (tap196bfb3e-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.165 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.172 253665 INFO os_vif [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b')
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.257 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.258 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.258 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:64:f0:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.259 253665 INFO nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Using config drive
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.282 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:36 compute-0 ceph-mon[75021]: pgmap v1722: 305 pgs: 305 active+clean; 396 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 221 op/s
Nov 22 09:21:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2501432190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/589252117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.686 253665 INFO nova.virt.libvirt.driver [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deleting instance files /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487_del
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.687 253665 INFO nova.virt.libvirt.driver [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deletion of /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487_del complete
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.731 253665 INFO nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Creating config drive at /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.736 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_2_nmyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.774 253665 INFO nova.compute.manager [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 11.57 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.776 253665 DEBUG oslo.service.loopingcall [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.777 253665 DEBUG nova.compute.manager [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.778 253665 DEBUG nova.network.neutron [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.881 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_2_nmyh" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.905 253665 DEBUG nova.storage.rbd_utils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.909 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.974 253665 INFO nova.compute.manager [None req-00f453cf-2fb5-43d3-a5cc-0a901263c62b 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Pausing
Nov 22 09:21:36 compute-0 nova_compute[253661]: 2025-11-22 09:21:36.976 253665 DEBUG nova.objects.instance [None req-00f453cf-2fb5-43d3-a5cc-0a901263c62b 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'flavor' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.021 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803297.0202754, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.022 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Paused (Lifecycle Event)
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.025 253665 DEBUG nova.compute.manager [None req-00f453cf-2fb5-43d3-a5cc-0a901263c62b 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.043 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.048 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.081 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 22 09:21:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.429 253665 DEBUG nova.network.neutron [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.445 253665 INFO nova.compute.manager [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 0.67 seconds to deallocate network for instance.
Nov 22 09:21:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 389 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.498 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.498 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.525 253665 DEBUG nova.compute.manager [req-cf7116b2-4dcf-40e8-afb2-70b29c9b4e6f req-c8f71f7b-bfca-41b3-acfb-c141ba71cd68 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-deleted-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:37 compute-0 nova_compute[253661]: 2025-11-22 09:21:37.636 253665 DEBUG oslo_concurrency.processutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985730728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.080 253665 DEBUG oslo_concurrency.processutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.089 253665 DEBUG nova.compute.provider_tree [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.101 253665 DEBUG nova.scheduler.client.report [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.122 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.158 253665 INFO nova.scheduler.client.report [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Deleted allocations for instance aadc298c-a1ba-41ca-9015-0a4d08420487
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.212 253665 DEBUG oslo_concurrency.lockutils [None req-4ce5f18d-3023-4b60-bb0f-9ee64449f197 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.684 253665 DEBUG oslo_concurrency.processutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config 90d94c7c-4c60-4842-b18c-1eecc3997b15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.685 253665 INFO nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Deleting local config drive /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15/disk.config because it was imported into RBD.
Nov 22 09:21:38 compute-0 NetworkManager[48920]: <info>  [1763803298.7344] manager: (tap196bfb3e-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Nov 22 09:21:38 compute-0 kernel: tap196bfb3e-6b: entered promiscuous mode
Nov 22 09:21:38 compute-0 ovn_controller[152872]: 2025-11-22T09:21:38Z|00651|binding|INFO|Claiming lport 196bfb3e-6b8e-4174-8aca-2c419d527946 for this chassis.
Nov 22 09:21:38 compute-0 ovn_controller[152872]: 2025-11-22T09:21:38Z|00652|binding|INFO|196bfb3e-6b8e-4174-8aca-2c419d527946: Claiming fa:16:3e:64:f0:36 10.100.0.7
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.748 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f0:36 10.100.0.7'], port_security=['fa:16:3e:64:f0:36 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '90d94c7c-4c60-4842-b18c-1eecc3997b15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=196bfb3e-6b8e-4174-8aca-2c419d527946) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.750 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 196bfb3e-6b8e-4174-8aca-2c419d527946 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.757 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:38 compute-0 ovn_controller[152872]: 2025-11-22T09:21:38Z|00653|binding|INFO|Setting lport 196bfb3e-6b8e-4174-8aca-2c419d527946 ovn-installed in OVS
Nov 22 09:21:38 compute-0 ovn_controller[152872]: 2025-11-22T09:21:38Z|00654|binding|INFO|Setting lport 196bfb3e-6b8e-4174-8aca-2c419d527946 up in Southbound
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec721fe-c7aa-48b7-a773-2495d4002ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.773 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.778 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[544a7d99-b730-47ca-bf39-3579227a6635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.778 253665 INFO nova.virt.libvirt.driver [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deleting instance files /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858_del
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.779 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5db01-cf4d-4b25-a318-bdac50439270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.780 253665 INFO nova.virt.libvirt.driver [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deletion of /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858_del complete
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.783 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:38 compute-0 ceph-mon[75021]: pgmap v1723: 305 pgs: 305 active+clean; 389 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Nov 22 09:21:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3985730728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:38 compute-0 systemd-machined[215941]: New machine qemu-80-instance-00000043.
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.795 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c3063d77-88bd-4eaf-8052-5e93755137b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000043.
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.817 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06a104fc-fc69-468f-a092-d0cebf0af0c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 systemd-udevd[325507]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:38 compute-0 NetworkManager[48920]: <info>  [1763803298.8516] device (tap196bfb3e-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:38 compute-0 NetworkManager[48920]: <info>  [1763803298.8545] device (tap196bfb3e-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.855 253665 INFO nova.compute.manager [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 10.57 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.856 253665 DEBUG oslo.service.loopingcall [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.857 253665 DEBUG nova.compute.manager [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:38 compute-0 nova_compute[253661]: 2025-11-22 09:21:38.857 253665 DEBUG nova.network.neutron [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.871 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a848cb13-1648-48ed-a6ec-5195ec5597aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 NetworkManager[48920]: <info>  [1763803298.8792] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Nov 22 09:21:38 compute-0 systemd-udevd[325514]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0886b2e-920f-41ec-942e-d9c1a761528e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdaad10-4d58-49af-a77a-c967b6d01c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.921 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8aecd5-a0b9-4145-9aed-0f0b2185a6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 NetworkManager[48920]: <info>  [1763803298.9459] device (tap01d1bce2-e0): carrier: link connected
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.954 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c46fe7-c162-4372-9508-a217641b1828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[537a1c0d-6237-4611-b33b-dfe61eecda4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614968, 'reachable_time': 37031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325537, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:38.992 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[435d1f99-bbff-44e2-a4f7-ca6a79628688]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614968, 'tstamp': 614968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325538, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4168d849-63b5-45c4-8060-3c1f0b951d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614968, 'reachable_time': 37031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325539, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.021 253665 INFO nova.compute.manager [None req-d7c13430-9699-4f72-ad72-6bad90120fab 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Unpausing
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.022 253665 DEBUG nova.objects.instance [None req-d7c13430-9699-4f72-ad72-6bad90120fab 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'flavor' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.044 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803299.044554, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.045 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Resumed (Lifecycle Event)
Nov 22 09:21:39 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.050 253665 DEBUG nova.virt.libvirt.guest [None req-d7c13430-9699-4f72-ad72-6bad90120fab 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.051 253665 DEBUG nova.compute.manager [None req-d7c13430-9699-4f72-ad72-6bad90120fab 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3d3c85-3880-4408-8152-5ad006e861d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.067 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.071 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.084 253665 DEBUG nova.compute.manager [req-c27ead8b-2a97-4407-8061-8c31d1d54956 req-40c96fe0-c23b-46ed-b18f-ee5551a2e28b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.084 253665 DEBUG oslo_concurrency.lockutils [req-c27ead8b-2a97-4407-8061-8c31d1d54956 req-40c96fe0-c23b-46ed-b18f-ee5551a2e28b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.085 253665 DEBUG oslo_concurrency.lockutils [req-c27ead8b-2a97-4407-8061-8c31d1d54956 req-40c96fe0-c23b-46ed-b18f-ee5551a2e28b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.085 253665 DEBUG oslo_concurrency.lockutils [req-c27ead8b-2a97-4407-8061-8c31d1d54956 req-40c96fe0-c23b-46ed-b18f-ee5551a2e28b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.085 253665 DEBUG nova.compute.manager [req-c27ead8b-2a97-4407-8061-8c31d1d54956 req-40c96fe0-c23b-46ed-b18f-ee5551a2e28b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Processing event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.097 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.114 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e48ffff-c484-435d-8542-e6055537bbd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.115 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.115 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.115 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:39 compute-0 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 09:21:39 compute-0 NetworkManager[48920]: <info>  [1763803299.1179] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.122 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.123 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:39 compute-0 ovn_controller[152872]: 2025-11-22T09:21:39Z|00655|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.126 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51df6735-6e18-45bd-aeba-fe8a0a43818d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.127 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:21:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:39.128 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 214 op/s
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.575 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.576 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803299.5746133, 90d94c7c-4c60-4842-b18c-1eecc3997b15 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.577 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] VM Started (Lifecycle Event)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.578 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:39 compute-0 podman[325607]: 2025-11-22 09:21:39.485196625 +0000 UTC m=+0.024747633 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.582 253665 INFO nova.virt.libvirt.driver [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Instance spawned successfully.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.582 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.597 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.603 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.611 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.611 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.612 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.613 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.613 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.614 253665 DEBUG nova.virt.libvirt.driver [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.635 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.636 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803299.5756068, 90d94c7c-4c60-4842-b18c-1eecc3997b15 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.636 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] VM Paused (Lifecycle Event)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.638 253665 DEBUG nova.network.neutron [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:39 compute-0 podman[325607]: 2025-11-22 09:21:39.696381107 +0000 UTC m=+0.235932095 container create 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.728 253665 DEBUG nova.compute.manager [req-0d358b09-4d65-4b4b-8ddb-9ef5bfaa5e33 req-6c18fd15-c905-4e5e-9e6c-e15a75963d50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-deleted-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.730 253665 INFO nova.compute.manager [req-0d358b09-4d65-4b4b-8ddb-9ef5bfaa5e33 req-6c18fd15-c905-4e5e-9e6c-e15a75963d50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Neutron deleted interface 43cec84a-e6cc-4492-8869-806f677f3026; detaching it from the instance and deleting it from the info cache
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.730 253665 DEBUG nova.network.neutron [req-0d358b09-4d65-4b4b-8ddb-9ef5bfaa5e33 req-6c18fd15-c905-4e5e-9e6c-e15a75963d50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:39 compute-0 systemd[1]: Started libpod-conmon-1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61.scope.
Nov 22 09:21:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b04b1be8860a5edbaf2c79bb00a1829eaadd5a4d25559a1db4f661bab41e673/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:39 compute-0 podman[325607]: 2025-11-22 09:21:39.809107345 +0000 UTC m=+0.348658353 container init 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:21:39 compute-0 podman[325607]: 2025-11-22 09:21:39.815373908 +0000 UTC m=+0.354924896 container start 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.818 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.823 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803299.5785387, 90d94c7c-4c60-4842-b18c-1eecc3997b15 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.823 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] VM Resumed (Lifecycle Event)
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.826 253665 INFO nova.compute.manager [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 0.97 seconds to deallocate network for instance.
Nov 22 09:21:39 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [NOTICE]   (325633) : New worker (325635) forked
Nov 22 09:21:39 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [NOTICE]   (325633) : Loading success.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.843 253665 INFO nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Took 10.46 seconds to spawn the instance on the hypervisor.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.844 253665 DEBUG nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.853 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.857 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.859 253665 DEBUG nova.compute.manager [req-0d358b09-4d65-4b4b-8ddb-9ef5bfaa5e33 req-6c18fd15-c905-4e5e-9e6c-e15a75963d50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Detach interface failed, port_id=43cec84a-e6cc-4492-8869-806f677f3026, reason: Instance d364f1c2-d606-448a-b3bd-00f1d5c1b858 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.881 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.887 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.888 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.905 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.906 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.906 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.906 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.907 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.908 253665 INFO nova.compute.manager [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Terminating instance
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.909 253665 DEBUG nova.compute.manager [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.920 253665 INFO nova.compute.manager [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Took 11.51 seconds to build instance.
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.934 253665 DEBUG oslo_concurrency.lockutils [None req-72457c3c-235b-4b33-9343-8c4f15854f08 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:39 compute-0 kernel: tape043dc2b-60 (unregistering): left promiscuous mode
Nov 22 09:21:39 compute-0 NetworkManager[48920]: <info>  [1763803299.9865] device (tape043dc2b-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:39 compute-0 ovn_controller[152872]: 2025-11-22T09:21:39Z|00656|binding|INFO|Releasing lport e043dc2b-6062-4cda-bf32-37ab692618c1 from this chassis (sb_readonly=0)
Nov 22 09:21:39 compute-0 ovn_controller[152872]: 2025-11-22T09:21:39Z|00657|binding|INFO|Setting lport e043dc2b-6062-4cda-bf32-37ab692618c1 down in Southbound
Nov 22 09:21:39 compute-0 ovn_controller[152872]: 2025-11-22T09:21:39Z|00658|binding|INFO|Removing iface tape043dc2b-60 ovn-installed in OVS
Nov 22 09:21:39 compute-0 nova_compute[253661]: 2025-11-22 09:21:39.999 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.008 253665 DEBUG oslo_concurrency.processutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.009 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:dc:4f 10.100.0.13'], port_security=['fa:16:3e:c9:dc:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '6', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e043dc2b-6062-4cda-bf32-37ab692618c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.011 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e043dc2b-6062-4cda-bf32-37ab692618c1 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 unbound from our chassis
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.014 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1710f53b-4075-4100-ac6d-3e2ffd89c37e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 22 09:21:40 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Consumed 5.438s CPU time.
Nov 22 09:21:40 compute-0 systemd-machined[215941]: Machine qemu-79-instance-00000041 terminated.
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.070 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[829e20b8-1a86-41af-8b24-24375941845e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.074 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0cd095-9db5-4918-bc8b-a459018dc210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.101 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[51fbf688-2312-44b2-8aab-d00f226fc1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.124 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1db7b59e-b11c-40e3-a495-5476417d70c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap851968f3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:3f:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609286, 'reachable_time': 22638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325653, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[799aa694-8761-405f-8ae9-3d2582d0115a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609297, 'tstamp': 609297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325658, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap851968f3-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609300, 'tstamp': 609300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325658, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.167 253665 INFO nova.virt.libvirt.driver [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Instance destroyed successfully.
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.168 253665 DEBUG nova.objects.instance [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'resources' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.180 253665 DEBUG nova.virt.libvirt.vif [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1163743873',display_name='tempest-ServerRescueNegativeTestJSON-server-1163743873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1163743873',id=65,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-wwkpbtwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:35Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.181 253665 DEBUG nova.network.os_vif_util [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "e043dc2b-6062-4cda-bf32-37ab692618c1", "address": "fa:16:3e:c9:dc:4f", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape043dc2b-60", "ovs_interfaceid": "e043dc2b-6062-4cda-bf32-37ab692618c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.183 253665 DEBUG nova.network.os_vif_util [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.184 253665 DEBUG os_vif [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.185 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap851968f3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.185 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.186 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap851968f3-20, col_values=(('external_ids', {'iface-id': '2459339b-2f2c-469c-aa5d-2df42fbac2a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:40.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.188 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape043dc2b-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.195 253665 INFO os_vif [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:dc:4f,bridge_name='br-int',has_traffic_filtering=True,id=e043dc2b-6062-4cda-bf32-37ab692618c1,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape043dc2b-60')
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536028097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.549 253665 DEBUG oslo_concurrency.processutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.556 253665 DEBUG nova.compute.provider_tree [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.572 253665 DEBUG nova.scheduler.client.report [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.599 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.633 253665 INFO nova.scheduler.client.report [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Deleted allocations for instance d364f1c2-d606-448a-b3bd-00f1d5c1b858
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.636 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803285.6348207, aadc298c-a1ba-41ca-9015-0a4d08420487 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.636 253665 INFO nova.compute.manager [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Stopped (Lifecycle Event)
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.653 253665 DEBUG nova.compute.manager [None req-a501aba9-adf1-44d7-a1f0-ebfdfc9e2fc7 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:40 compute-0 nova_compute[253661]: 2025-11-22 09:21:40.701 253665 DEBUG oslo_concurrency.lockutils [None req-b09f9561-0100-426d-8909-2211a07feb5a 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:40 compute-0 ceph-mon[75021]: pgmap v1724: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 214 op/s
Nov 22 09:21:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3536028097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 148 op/s
Nov 22 09:21:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:43 compute-0 ceph-mon[75021]: pgmap v1725: 305 pgs: 305 active+clean; 374 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 148 op/s
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.150 253665 INFO nova.virt.libvirt.driver [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Deleting instance files /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_del
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.151 253665 INFO nova.virt.libvirt.driver [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Deletion of /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_del complete
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.205 253665 INFO nova.compute.manager [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Took 3.30 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.206 253665 DEBUG oslo.service.loopingcall [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.206 253665 DEBUG nova.compute.manager [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.207 253665 DEBUG nova.network.neutron [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.352 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.353 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.353 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.353 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.354 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] No waiting events found dispatching network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.354 253665 WARNING nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received unexpected event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 for instance with vm_state active and task_state None.
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.354 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.354 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.355 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.355 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.355 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.355 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-unplugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.356 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.356 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.356 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.357 253665 DEBUG oslo_concurrency.lockutils [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.357 253665 DEBUG nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] No waiting events found dispatching network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.357 253665 WARNING nova.compute.manager [req-af24c944-ef93-4bf3-ab13-cafab5137c0c req-dc37dd33-693e-46fc-ba5c-1a92e60f5cc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received unexpected event network-vif-plugged-e043dc2b-6062-4cda-bf32-37ab692618c1 for instance with vm_state rescued and task_state deleting.
Nov 22 09:21:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 318 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 220 op/s
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.926 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803288.9257514, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.927 253665 INFO nova.compute.manager [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Stopped (Lifecycle Event)
Nov 22 09:21:43 compute-0 nova_compute[253661]: 2025-11-22 09:21:43.940 253665 DEBUG nova.compute.manager [None req-c7597db6-8cb2-47f0-8a90-3ecc092cab99 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:44 compute-0 nova_compute[253661]: 2025-11-22 09:21:44.611 253665 DEBUG nova.network.neutron [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:44 compute-0 nova_compute[253661]: 2025-11-22 09:21:44.626 253665 INFO nova.compute.manager [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Took 1.42 seconds to deallocate network for instance.
Nov 22 09:21:44 compute-0 nova_compute[253661]: 2025-11-22 09:21:44.669 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:44 compute-0 nova_compute[253661]: 2025-11-22 09:21:44.669 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:44 compute-0 nova_compute[253661]: 2025-11-22 09:21:44.750 253665 DEBUG oslo_concurrency.processutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:45 compute-0 ceph-mon[75021]: pgmap v1726: 305 pgs: 305 active+clean; 318 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 220 op/s
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040023812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.224 253665 DEBUG oslo_concurrency.processutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.230 253665 DEBUG nova.compute.provider_tree [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.242 253665 DEBUG nova.scheduler.client.report [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.265 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.289 253665 INFO nova.scheduler.client.report [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Deleted allocations for instance f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.342 253665 DEBUG oslo_concurrency.lockutils [None req-279dc975-6b10-4dd4-a22f-6fe02d6c0ae3 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.369 253665 DEBUG oslo_concurrency.lockutils [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.369 253665 DEBUG oslo_concurrency.lockutils [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.370 253665 DEBUG nova.compute.manager [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.375 253665 DEBUG nova.compute.manager [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.375 253665 DEBUG nova.objects.instance [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.393 253665 DEBUG nova.virt.libvirt.driver [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:21:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 271 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 254 op/s
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.486 253665 DEBUG nova.compute.manager [req-307c2dc4-54f6-4248-a95e-a7843ea28679 req-ff17a29b-47c7-46dd-8177-1aeb1185ae4f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Received event network-vif-deleted-e043dc2b-6062-4cda-bf32-37ab692618c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.558 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.558 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.571 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.593 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.594 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.615 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.631 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.632 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.639 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.640 253665 INFO nova.compute.claims [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.677 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:45 compute-0 nova_compute[253661]: 2025-11-22 09:21:45.816 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.050 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.051 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.052 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.052 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.052 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.054 253665 INFO nova.compute.manager [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Terminating instance
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.055 253665 DEBUG nova.compute.manager [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4040023812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558201758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.320 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.328 253665 DEBUG nova.compute.provider_tree [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.344 253665 DEBUG nova.scheduler.client.report [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.368 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.369 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.372 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.380 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.380 253665 INFO nova.compute.claims [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.441 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.442 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:21:46 compute-0 kernel: tap1c553ce7-b9 (unregistering): left promiscuous mode
Nov 22 09:21:46 compute-0 NetworkManager[48920]: <info>  [1763803306.4591] device (tap1c553ce7-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.461 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00659|binding|INFO|Releasing lport 1c553ce7-b95a-447b-9fed-01b378014028 from this chassis (sb_readonly=0)
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00660|binding|INFO|Setting lport 1c553ce7-b95a-447b-9fed-01b378014028 down in Southbound
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00661|binding|INFO|Removing iface tap1c553ce7-b9 ovn-installed in OVS
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.480 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.480 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:09:c9 10.100.0.3'], port_security=['fa:16:3e:62:09:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1c553ce7-b95a-447b-9fed-01b378014028) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.482 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1c553ce7-b95a-447b-9fed-01b378014028 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 unbound from our chassis
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.485 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.486 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c98f7f0d-5014-449b-860b-fb40405d84e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.486 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4 namespace which is not needed anymore
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 22 09:21:46 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000040.scope: Consumed 16.927s CPU time.
Nov 22 09:21:46 compute-0 systemd-machined[215941]: Machine qemu-74-instance-00000040 terminated.
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.545 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.546 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.546 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.546 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.547 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.548 253665 INFO nova.compute.manager [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Terminating instance
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.549 253665 DEBUG nova.compute.manager [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.587 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.588 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.588 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Creating image(s)
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.608 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.628 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.648 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.653 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:46 compute-0 kernel: tap1c553ce7-b9: entered promiscuous mode
Nov 22 09:21:46 compute-0 systemd-udevd[325753]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:46 compute-0 kernel: tap1c553ce7-b9 (unregistering): left promiscuous mode
Nov 22 09:21:46 compute-0 NetworkManager[48920]: <info>  [1763803306.6765] manager: (tap1c553ce7-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00662|binding|INFO|Claiming lport 1c553ce7-b95a-447b-9fed-01b378014028 for this chassis.
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00663|binding|INFO|1c553ce7-b95a-447b-9fed-01b378014028: Claiming fa:16:3e:62:09:c9 10.100.0.3
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.688 253665 DEBUG nova.policy [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c57e583444d64b2a80c940052ff754eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.697 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00664|binding|INFO|Setting lport 1c553ce7-b95a-447b-9fed-01b378014028 ovn-installed in OVS
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00665|if_status|INFO|Dropped 3 log messages in last 469 seconds (most recently, 469 seconds ago) due to excessive rate
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00666|if_status|INFO|Not setting lport 1c553ce7-b95a-447b-9fed-01b378014028 down as sb is readonly
Nov 22 09:21:46 compute-0 ovn_controller[152872]: 2025-11-22T09:21:46Z|00667|binding|INFO|Releasing lport 1c553ce7-b95a-447b-9fed-01b378014028 from this chassis (sb_readonly=0)
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.708 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:09:c9 10.100.0.3'], port_security=['fa:16:3e:62:09:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1c553ce7-b95a-447b-9fed-01b378014028) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:46.727 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:09:c9 10.100.0.3'], port_security=['fa:16:3e:62:09:c9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '933c51626a49465db409069a1b3eb7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '593f17fc-d03e-4917-afb8-683e49d809be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1d04103-3da8-47e2-b1fa-1a0a204f148a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1c553ce7-b95a-447b-9fed-01b378014028) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.733 253665 DEBUG nova.compute.manager [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-unplugged-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.733 253665 DEBUG oslo_concurrency.lockutils [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.734 253665 DEBUG oslo_concurrency.lockutils [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.734 253665 DEBUG oslo_concurrency.lockutils [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.734 253665 DEBUG nova.compute.manager [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] No waiting events found dispatching network-vif-unplugged-1c553ce7-b95a-447b-9fed-01b378014028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.734 253665 DEBUG nova.compute.manager [req-0343ae02-1ab9-4beb-84be-86abf2c4c74e req-17b53cc0-2eab-40d8-aa49-f58fb528a5bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-unplugged-1c553ce7-b95a-447b-9fed-01b378014028 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.736 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.736 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.737 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.737 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.757 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.760 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.793 253665 INFO nova.virt.libvirt.driver [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance destroyed successfully.
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.794 253665 DEBUG nova.objects.instance [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'resources' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.806 253665 DEBUG nova.virt.libvirt.vif [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1168981248',display_name='tempest-ServerRescueNegativeTestJSON-server-1168981248',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1168981248',id=64,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-4kcyxfno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:39Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=094a1e4e-c6c0-4994-907c-aae7c2cdbe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.807 253665 DEBUG nova.network.os_vif_util [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.808 253665 DEBUG nova.network.os_vif_util [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.809 253665 DEBUG os_vif [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.812 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c553ce7-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:46 compute-0 nova_compute[253661]: 2025-11-22 09:21:46.818 253665 INFO os_vif [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9')
Nov 22 09:21:46 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [NOTICE]   (322016) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:46 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [NOTICE]   (322016) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:46 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [WARNING]  (322016) : Exiting Master process...
Nov 22 09:21:46 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [ALERT]    (322016) : Current worker (322018) exited with code 143 (Terminated)
Nov 22 09:21:46 compute-0 neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4[322012]: [WARNING]  (322016) : All workers exited. Exiting... (0)
Nov 22 09:21:46 compute-0 systemd[1]: libpod-f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7.scope: Deactivated successfully.
Nov 22 09:21:46 compute-0 podman[325775]: 2025-11-22 09:21:46.987245793 +0000 UTC m=+0.395499751 container died f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:21:47 compute-0 kernel: tap196bfb3e-6b (unregistering): left promiscuous mode
Nov 22 09:21:47 compute-0 NetworkManager[48920]: <info>  [1763803307.0211] device (tap196bfb3e-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00668|binding|INFO|Releasing lport 196bfb3e-6b8e-4174-8aca-2c419d527946 from this chassis (sb_readonly=0)
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00669|binding|INFO|Setting lport 196bfb3e-6b8e-4174-8aca-2c419d527946 down in Southbound
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00670|binding|INFO|Removing iface tap196bfb3e-6b ovn-installed in OVS
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.037 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f0:36 10.100.0.7'], port_security=['fa:16:3e:64:f0:36 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '90d94c7c-4c60-4842-b18c-1eecc3997b15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=196bfb3e-6b8e-4174-8aca-2c419d527946) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 22 09:21:47 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000043.scope: Consumed 7.518s CPU time.
Nov 22 09:21:47 compute-0 systemd-machined[215941]: Machine qemu-80-instance-00000043 terminated.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.192 253665 INFO nova.virt.libvirt.driver [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Instance destroyed successfully.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.192 253665 DEBUG nova.objects.instance [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid 90d94c7c-4c60-4842-b18c-1eecc3997b15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.206 253665 DEBUG nova.virt.libvirt.vif [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:21:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-633701782',display_name='tempest-ServerDiskConfigTestJSON-server-633701782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-633701782',id=67,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-sq06q5gm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:45Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=90d94c7c-4c60-4842-b18c-1eecc3997b15,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.207 253665 DEBUG nova.network.os_vif_util [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "196bfb3e-6b8e-4174-8aca-2c419d527946", "address": "fa:16:3e:64:f0:36", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196bfb3e-6b", "ovs_interfaceid": "196bfb3e-6b8e-4174-8aca-2c419d527946", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.208 253665 DEBUG nova.network.os_vif_util [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.208 253665 DEBUG os_vif [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.210 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap196bfb3e-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.215 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.219 253665 INFO os_vif [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f0:36,bridge_name='br-int',has_traffic_filtering=True,id=196bfb3e-6b8e-4174-8aca-2c419d527946,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196bfb3e-6b')
Nov 22 09:21:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254433973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.317 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Successfully created port: 446e69e1-2312-4e4f-8815-cdffe466ff52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.329 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.334 253665 DEBUG nova.compute.provider_tree [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.346 253665 DEBUG nova.scheduler.client.report [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:47 compute-0 ceph-mon[75021]: pgmap v1727: 305 pgs: 305 active+clean; 271 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 254 op/s
Nov 22 09:21:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/558201758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.365 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.366 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.432 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.432 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.447 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:21:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 252 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 422 KiB/s wr, 225 op/s
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.464 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4f5e444636f3cd34fb5378d22b59af7b7195ae7f4200a64bf369efd16fd461a-merged.mount: Deactivated successfully.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.548 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.549 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.550 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Creating image(s)
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.573 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.593 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.621 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.626 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:47 compute-0 podman[325775]: 2025-11-22 09:21:47.658296748 +0000 UTC m=+1.066550706 container cleanup f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:47 compute-0 systemd[1]: libpod-conmon-f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7.scope: Deactivated successfully.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.667 253665 DEBUG nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-unplugged-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.667 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.668 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.668 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.668 253665 DEBUG nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] No waiting events found dispatching network-vif-unplugged-196bfb3e-6b8e-4174-8aca-2c419d527946 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.668 253665 DEBUG nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-unplugged-196bfb3e-6b8e-4174-8aca-2c419d527946 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.669 253665 DEBUG nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.669 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.669 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.669 253665 DEBUG oslo_concurrency.lockutils [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.669 253665 DEBUG nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] No waiting events found dispatching network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.670 253665 WARNING nova.compute.manager [req-f1f5cdfd-52a7-4fd0-9800-49b9204baedf req-92b61cba-aae4-4bcf-853e-8d265fabf9b6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received unexpected event network-vif-plugged-196bfb3e-6b8e-4174-8aca-2c419d527946 for instance with vm_state active and task_state deleting.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.706 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.707 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.707 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.708 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.732 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.736 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 366b8461-3c67-44e6-a791-f49b343cef76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.836 253665 DEBUG nova.policy [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c57e583444d64b2a80c940052ff754eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:21:47 compute-0 podman[326027]: 2025-11-22 09:21:47.879292008 +0000 UTC m=+0.198126125 container remove f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28cf7d81-37d2-4267-9f75-292aa7787ebe]: (4, ('Sat Nov 22 09:21:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4 (f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7)\nf4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7\nSat Nov 22 09:21:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4 (f4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7)\nf4e0afdfbd128012d45d33b68892def471973e995a5b56ce8c3a4353264a2fd7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.889 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4549ca13-a4b3-475c-bb00-9498077b03e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap851968f3-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:47 compute-0 kernel: tap851968f3-20: left promiscuous mode
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.905 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[693c34f4-1a95-4b00-a680-21231df1fdad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:21:47 compute-0 NetworkManager[48920]: <info>  [1763803307.9228] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[231a96e8-558d-40a7-b280-f529b05f7962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.929 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[070ff560-dc6e-475b-b37e-8e35d2d66a81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00671|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00672|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_controller[152872]: 2025-11-22T09:21:47Z|00673|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.935 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.939 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce955bb3-0364-420c-b235-a57d86bb6332]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609277, 'reachable_time': 39769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326081, 'error': None, 'target': 'ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d851968f3\x2d2dc6\x2d498b\x2da08c\x2d7b5b2c2383d4.mount: Deactivated successfully.
Nov 22 09:21:47 compute-0 nova_compute[253661]: 2025-11-22 09:21:47.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.959 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-851968f3-2dc6-498b-a08c-7b5b2c2383d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.960 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9f5c32-6825-41b1-af50-1384a3cad113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.961 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1c553ce7-b95a-447b-9fed-01b378014028 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 unbound from our chassis
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.962 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1c41ab04-8ee4-414a-ab85-739d663ea3f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.965 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1c553ce7-b95a-447b-9fed-01b378014028 in datapath 851968f3-2dc6-498b-a08c-7b5b2c2383d4 unbound from our chassis
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.967 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 851968f3-2dc6-498b-a08c-7b5b2c2383d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.968 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[37db3442-dec9-4dbb-a6f7-e2c8797cdcb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.968 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 196bfb3e-6b8e-4174-8aca-2c419d527946 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.969 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.970 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc8aa7f-81a7-417f-836b-48dde0df51fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:47 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:21:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:47.972 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore
Nov 22 09:21:47 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000032.scope: Consumed 19.333s CPU time.
Nov 22 09:21:47 compute-0 systemd-machined[215941]: Machine qemu-65-instance-00000032 terminated.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.007 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.099 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] resizing rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:21:48 compute-0 NetworkManager[48920]: <info>  [1763803308.1501] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [NOTICE]   (325633) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [NOTICE]   (325633) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [WARNING]  (325633) : Exiting Master process...
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [ALERT]    (325633) : Current worker (325635) exited with code 143 (Terminated)
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[325629]: [WARNING]  (325633) : All workers exited. Exiting... (0)
Nov 22 09:21:48 compute-0 systemd[1]: libpod-1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61.scope: Deactivated successfully.
Nov 22 09:21:48 compute-0 podman[326126]: 2025-11-22 09:21:48.19423713 +0000 UTC m=+0.118448819 container died 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.234 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Successfully updated port: 446e69e1-2312-4e4f-8815-cdffe466ff52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.251 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.251 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquired lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.252 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b04b1be8860a5edbaf2c79bb00a1829eaadd5a4d25559a1db4f661bab41e673-merged.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 podman[326126]: 2025-11-22 09:21:48.274184903 +0000 UTC m=+0.198396592 container cleanup 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:48 compute-0 systemd[1]: libpod-conmon-1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61.scope: Deactivated successfully.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.306 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 366b8461-3c67-44e6-a791-f49b343cef76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1254433973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:48 compute-0 ceph-mon[75021]: pgmap v1728: 305 pgs: 305 active+clean; 252 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 422 KiB/s wr, 225 op/s
Nov 22 09:21:48 compute-0 podman[326189]: 2025-11-22 09:21:48.374571392 +0000 UTC m=+0.073668111 container remove 1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.382 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47d137f8-a7ff-42b7-a603-4e5b19ebad07]: (4, ('Sat Nov 22 09:21:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61)\n1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61\nSat Nov 22 09:21:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61)\n1fb4ca5723e4c409f096187485584b178bc85f5da45d133a829c7bac5ef5ce61\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.384 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[218b5e64-ef31-4ad3-b874-cd90a59779c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.386 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:48 compute-0 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.418 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9192327-c20b-4299-8ed7-01a23140d7d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3029b4-b284-4905-a2f7-3b6677486525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.436 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f47b2c38-fa0d-4d1c-93e0-45352b794d56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.444 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.451 253665 DEBUG nova.objects.instance [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.457 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dea8b2e0-d597-4e55-b86c-d296c68a67c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614960, 'reachable_time': 32816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326263, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.460 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] resizing rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.460 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.460 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f02e68-2c92-4c6b-a33b-0660c1434340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.461 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.463 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.464 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38bd730e-0baf-4ec0-8927-cf03c5725cd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.465 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:21:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.673 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.681 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.682 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Ensure instance console log exists: /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.682 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.683 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.684 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.687 253665 INFO nova.virt.libvirt.driver [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance shutdown successfully after 3 seconds.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.697 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.697 253665 DEBUG nova.objects.instance [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [WARNING]  (316521) : Exiting Master process...
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [ALERT]    (316521) : Current worker (316526) exited with code 143 (Terminated)
Nov 22 09:21:48 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [WARNING]  (316521) : All workers exited. Exiting... (0)
Nov 22 09:21:48 compute-0 systemd[1]: libpod-719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9.scope: Deactivated successfully.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.709 253665 DEBUG nova.compute.manager [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:48 compute-0 podman[326298]: 2025-11-22 09:21:48.712404471 +0000 UTC m=+0.151321768 container died 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.733 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Successfully created port: 96d9f3b1-144c-4ec0-bbcc-410ded52262c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.757 253665 DEBUG oslo_concurrency.lockutils [None req-bb9d5f09-5611-4c67-af0d-52477fa3a922 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e741de3208f7891e80bf1534293c26fdb1a7bf352ddd9a294c705d9b5426cd1b-merged.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 podman[326298]: 2025-11-22 09:21:48.772395799 +0000 UTC m=+0.211313066 container cleanup 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:48 compute-0 systemd[1]: libpod-conmon-719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9.scope: Deactivated successfully.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.811 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.811 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.812 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.812 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.813 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] No waiting events found dispatching network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.813 253665 WARNING nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received unexpected event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 for instance with vm_state active and task_state deleting.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.813 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.814 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.815 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.815 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.815 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.815 253665 WARNING nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state stopped and task_state None.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.816 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.816 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.816 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.817 253665 DEBUG oslo_concurrency.lockutils [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.817 253665 DEBUG nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.817 253665 WARNING nova.compute.manager [req-5e0f4ade-c912-421d-93da-43673c420590 req-8647226e-82de-424f-ab4c-61b9cb40e7da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state stopped and task_state None.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.824 253665 DEBUG nova.objects.instance [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'migration_context' on Instance uuid 366b8461-3c67-44e6-a791-f49b343cef76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.837 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.838 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Ensure instance console log exists: /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.839 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.839 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.839 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:48 compute-0 podman[326335]: 2025-11-22 09:21:48.872824509 +0000 UTC m=+0.065359789 container remove 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.883 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a90d083-bd1e-4b30-a153-927307b3d474]: (4, ('Sat Nov 22 09:21:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9)\n719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9\nSat Nov 22 09:21:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9)\n719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1110f4-26e2-4aaf-a31f-0908db2f80c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.886 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.888 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:48 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.912 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2339f94-3587-4243-81e4-38942a0fbc3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c211e68-ed41-4858-be24-3a7838ab8493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.927 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3158025-7dfb-47d7-9e8b-df3ef14664c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.945 253665 INFO nova.virt.libvirt.driver [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Deleting instance files /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15_del
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.947 253665 INFO nova.virt.libvirt.driver [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Deletion of /var/lib/nova/instances/90d94c7c-4c60-4842-b18c-1eecc3997b15_del complete
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.952 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4048e3a5-3527-472d-a0f3-c41e84ab74df]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602314, 'reachable_time': 30308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326363, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.956 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:48.957 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b877b712-d51a-4024-a47f-f059d296cae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.988 253665 INFO nova.virt.libvirt.driver [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Deleting instance files /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_del
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.989 253665 INFO nova.virt.libvirt.driver [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Deletion of /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_del complete
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.995 253665 INFO nova.compute.manager [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Took 2.45 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.996 253665 DEBUG oslo.service.loopingcall [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.998 253665 DEBUG nova.compute.manager [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:48 compute-0 nova_compute[253661]: 2025-11-22 09:21:48.998 253665 DEBUG nova.network.neutron [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.039 253665 INFO nova.compute.manager [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Took 2.98 seconds to destroy the instance on the hypervisor.
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.040 253665 DEBUG oslo.service.loopingcall [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.040 253665 DEBUG nova.compute.manager [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.040 253665 DEBUG nova.network.neutron [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:21:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.1 MiB/s wr, 242 op/s
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.752 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Updating instance_info_cache with network_info: [{"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.767 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Releasing lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.768 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Instance network_info: |[{"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.771 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Start _get_guest_xml network_info=[{"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.777 253665 WARNING nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.789 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.790 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.793 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.793 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.794 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.794 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.795 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.795 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.795 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.795 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.795 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.796 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.796 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.796 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.796 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.797 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.799 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.932 253665 DEBUG nova.network.neutron [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.945 253665 INFO nova.compute.manager [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Took 0.90 seconds to deallocate network for instance.
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.991 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:49 compute-0 nova_compute[253661]: 2025-11-22 09:21:49.992 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.011 253665 DEBUG nova.compute.manager [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-changed-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.012 253665 DEBUG nova.compute.manager [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Refreshing instance network info cache due to event network-changed-446e69e1-2312-4e4f-8815-cdffe466ff52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.012 253665 DEBUG oslo_concurrency.lockutils [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.012 253665 DEBUG oslo_concurrency.lockutils [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.013 253665 DEBUG nova.network.neutron [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Refreshing network info cache for port 446e69e1-2312-4e4f-8815-cdffe466ff52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.020 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Successfully updated port: 96d9f3b1-144c-4ec0-bbcc-410ded52262c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.038 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.039 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquired lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.039 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.060 253665 DEBUG nova.network.neutron [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.074 253665 INFO nova.compute.manager [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Took 1.08 seconds to deallocate network for instance.
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.090 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.113 253665 DEBUG oslo_concurrency.lockutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.114 253665 DEBUG oslo_concurrency.lockutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.114 253665 DEBUG nova.network.neutron [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.115 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'info_cache' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.118 253665 DEBUG oslo_concurrency.processutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.159 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.182 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:21:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165525955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.289 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.313 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.318 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:50 compute-0 ceph-mon[75021]: pgmap v1729: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.1 MiB/s wr, 242 op/s
Nov 22 09:21:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2165525955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248911520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.607 253665 DEBUG oslo_concurrency.processutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.617 253665 DEBUG nova.compute.provider_tree [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.632 253665 DEBUG nova.scheduler.client.report [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.650 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.653 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.678 253665 INFO nova.scheduler.client.report [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Deleted allocations for instance 094a1e4e-c6c0-4994-907c-aae7c2cdbe36
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.746 253665 DEBUG oslo_concurrency.lockutils [None req-2b86b03d-d9ed-41a3-a734-bcf68cb63358 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255111553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.763 253665 DEBUG oslo_concurrency.processutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.799 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.801 253665 DEBUG nova.virt.libvirt.vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-1',id=68,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:46Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=4d38da27-a529-427f-bf7a-90bbfbfeb0b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.802 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.803 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.804 253665 DEBUG nova.objects.instance [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.817 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <uuid>4d38da27-a529-427f-bf7a-90bbfbfeb0b1</uuid>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <name>instance-00000044</name>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-598537995-1</nova:name>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:49</nova:creationTime>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:user uuid="c57e583444d64b2a80c940052ff754eb">tempest-MultipleCreateTestJSON-1224529962-project-member</nova:user>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:project uuid="fb7ce66b616d43838eb71b9c62cb2354">tempest-MultipleCreateTestJSON-1224529962</nova:project>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <nova:port uuid="446e69e1-2312-4e4f-8815-cdffe466ff52">
Nov 22 09:21:50 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="serial">4d38da27-a529-427f-bf7a-90bbfbfeb0b1</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="uuid">4d38da27-a529-427f-bf7a-90bbfbfeb0b1</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk">
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config">
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:98:4f:04"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <target dev="tap446e69e1-23"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/console.log" append="off"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:50 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:50 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:50 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:50 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:50 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.819 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Preparing to wait for external event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.819 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.819 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.819 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.820 253665 DEBUG nova.virt.libvirt.vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-1',id=68,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-M
ultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:46Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=4d38da27-a529-427f-bf7a-90bbfbfeb0b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.820 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.821 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.821 253665 DEBUG os_vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.822 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.822 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.826 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap446e69e1-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.827 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap446e69e1-23, col_values=(('external_ids', {'iface-id': '446e69e1-2312-4e4f-8815-cdffe466ff52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:4f:04', 'vm-uuid': '4d38da27-a529-427f-bf7a-90bbfbfeb0b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:50 compute-0 NetworkManager[48920]: <info>  [1763803310.8296] manager: (tap446e69e1-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.836 253665 INFO os_vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23')
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.880 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.880 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.880 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No VIF found with MAC fa:16:3e:98:4f:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.881 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Using config drive
Nov 22 09:21:50 compute-0 nova_compute[253661]: 2025-11-22 09:21:50.902 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.070 253665 DEBUG nova.network.neutron [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Updating instance_info_cache with network_info: [{"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.087 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Releasing lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.088 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Instance network_info: |[{"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.091 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Start _get_guest_xml network_info=[{"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.097 253665 WARNING nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.103 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.104 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.113 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.114 253665 DEBUG nova.virt.libvirt.host [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.114 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.115 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.115 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.115 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.116 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.116 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.116 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.116 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.116 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.117 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.117 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.117 253665 DEBUG nova.virt.hardware [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.120 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.170 253665 DEBUG nova.network.neutron [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Updated VIF entry in instance network info cache for port 446e69e1-2312-4e4f-8815-cdffe466ff52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.171 253665 DEBUG nova.network.neutron [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Updating instance_info_cache with network_info: [{"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.186 253665 DEBUG oslo_concurrency.lockutils [req-e6bb30f5-e50b-4c97-a1a5-2babedbc5d00 req-58c283bb-2f4b-4f08-a2f6-f5125555f378 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4d38da27-a529-427f-bf7a-90bbfbfeb0b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.208 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Creating config drive at /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.214 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0yadqwl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:21:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1937953538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.264 253665 DEBUG nova.network.neutron [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.267 253665 DEBUG oslo_concurrency.processutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.273 253665 DEBUG nova.compute.provider_tree [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.290 253665 DEBUG nova.scheduler.client.report [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.294 253665 DEBUG oslo_concurrency.lockutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.314 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.328 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.329 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.344 253665 INFO nova.scheduler.client.report [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Deleted allocations for instance 90d94c7c-4c60-4842-b18c-1eecc3997b15
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.345 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.360 253665 DEBUG nova.virt.libvirt.vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.361 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.362 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.363 253665 DEBUG os_vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.369 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.371 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0yadqwl" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.400 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.405 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.458 253665 DEBUG oslo_concurrency.lockutils [None req-fd9b91ba-0e7e-4bcb-9e98-f0e62d7087f5 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "90d94c7c-4c60-4842-b18c-1eecc3997b15" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.464 253665 INFO os_vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.473 253665 DEBUG nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.480 253665 WARNING nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.485 253665 DEBUG nova.virt.libvirt.host [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.486 253665 DEBUG nova.virt.libvirt.host [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.489 253665 DEBUG nova.virt.libvirt.host [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.489 253665 DEBUG nova.virt.libvirt.host [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.490 253665 DEBUG nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.490 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.490 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.491 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.491 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.491 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.492 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.492 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.492 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.493 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.493 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.493 253665 DEBUG nova.virt.hardware [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.494 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.509 253665 DEBUG oslo_concurrency.processutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614406966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.768 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.797 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:51 compute-0 nova_compute[253661]: 2025-11-22 09:21:51.802 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/248911520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1255111553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1937953538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010543036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.077 253665 DEBUG oslo_concurrency.processutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.115 253665 DEBUG oslo_concurrency.processutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:21:52
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.control', '.mgr']
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:21:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993532534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.302 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.305 253665 DEBUG nova.virt.libvirt.vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-2',id=69,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCre
ateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:47Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=366b8461-3c67-44e6-a791-f49b343cef76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.306 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.307 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.308 253665 DEBUG nova.objects.instance [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 366b8461-3c67-44e6-a791-f49b343cef76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.314 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config 4d38da27-a529-427f-bf7a-90bbfbfeb0b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.315 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Deleting local config drive /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1/disk.config because it was imported into RBD.
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.324 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <uuid>366b8461-3c67-44e6-a791-f49b343cef76</uuid>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <name>instance-00000045</name>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-598537995-2</nova:name>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:51</nova:creationTime>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:user uuid="c57e583444d64b2a80c940052ff754eb">tempest-MultipleCreateTestJSON-1224529962-project-member</nova:user>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:project uuid="fb7ce66b616d43838eb71b9c62cb2354">tempest-MultipleCreateTestJSON-1224529962</nova:project>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:port uuid="96d9f3b1-144c-4ec0-bbcc-410ded52262c">
Nov 22 09:21:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="serial">366b8461-3c67-44e6-a791-f49b343cef76</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="uuid">366b8461-3c67-44e6-a791-f49b343cef76</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/366b8461-3c67-44e6-a791-f49b343cef76_disk">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/366b8461-3c67-44e6-a791-f49b343cef76_disk.config">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:e5:56:df"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="tap96d9f3b1-14"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/console.log" append="off"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.327 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Preparing to wait for external event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.327 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.327 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.328 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.328 253665 DEBUG nova.virt.libvirt.vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-2',id=69,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-M
ultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:21:47Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=366b8461-3c67-44e6-a791-f49b343cef76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.329 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.329 253665 DEBUG nova.network.os_vif_util [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.330 253665 DEBUG os_vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.331 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.331 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.335 253665 DEBUG nova.compute.manager [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-vif-deleted-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.336 253665 DEBUG nova.compute.manager [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-changed-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.336 253665 DEBUG nova.compute.manager [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Refreshing instance network info cache due to event network-changed-96d9f3b1-144c-4ec0-bbcc-410ded52262c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.336 253665 DEBUG oslo_concurrency.lockutils [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.336 253665 DEBUG oslo_concurrency.lockutils [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.337 253665 DEBUG nova.network.neutron [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Refreshing network info cache for port 96d9f3b1-144c-4ec0-bbcc-410ded52262c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.340 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96d9f3b1-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.340 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96d9f3b1-14, col_values=(('external_ids', {'iface-id': '96d9f3b1-144c-4ec0-bbcc-410ded52262c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:56:df', 'vm-uuid': '366b8461-3c67-44e6-a791-f49b343cef76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.3432] manager: (tap96d9f3b1-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.350 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.351 253665 INFO os_vif [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14')
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.3979] manager: (tap446e69e1-23): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Nov 22 09:21:52 compute-0 kernel: tap446e69e1-23: entered promiscuous mode
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00674|binding|INFO|Claiming lport 446e69e1-2312-4e4f-8815-cdffe466ff52 for this chassis.
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00675|binding|INFO|446e69e1-2312-4e4f-8815-cdffe466ff52: Claiming fa:16:3e:98:4f:04 10.100.0.11
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00676|binding|INFO|Setting lport 446e69e1-2312-4e4f-8815-cdffe466ff52 ovn-installed in OVS
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 systemd-udevd[326670]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.439 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.439 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.439 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No VIF found with MAC fa:16:3e:e5:56:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.440 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Using config drive
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.4446] device (tap446e69e1-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.4472] device (tap446e69e1-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:52 compute-0 systemd-machined[215941]: New machine qemu-81-instance-00000044.
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.463 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:52 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000044.
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00677|binding|INFO|Setting lport 446e69e1-2312-4e4f-8815-cdffe466ff52 up in Southbound
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.519 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:4f:04 10.100.0.11'], port_security=['fa:16:3e:98:4f:04 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4d38da27-a529-427f-bf7a-90bbfbfeb0b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=446e69e1-2312-4e4f-8815-cdffe466ff52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.520 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 446e69e1-2312-4e4f-8815-cdffe466ff52 in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 bound to our chassis
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.522 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.535 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1778a3-a3d4-4f4d-9090-92f7b0a47e23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.536 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8ee5c4d-61 in ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.537 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8ee5c4d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.538 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[79f333ee-1954-4249-915c-98824afba244]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.538 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47d598f2-e855-4416-8ed9-0effd6fbcd66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.553 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[60c7bdc7-6479-406c-82fa-6ac86c1a9c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.565 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f93d4104-3b99-4ec1-af5f-a57870948d92]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.608 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[01e8e1d6-8f3d-4dbc-8796-0f24fa5a88db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.613 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5c9340-d6eb-4fef-936c-559c50061947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 systemd-udevd[326675]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.6144] manager: (tapb8ee5c4d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Nov 22 09:21:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:21:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602538463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.636 253665 DEBUG oslo_concurrency.processutils [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.637 253665 DEBUG nova.virt.libvirt.vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.637 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.638 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.640 253665 DEBUG nova.objects.instance [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.647 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed6ae1a-fdb1-4e11-96eb-414e3fb53b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.650 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8e994958-cf1b-4c4c-87ba-c5a00692b4d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.652 253665 DEBUG nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <name>instance-00000032</name>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:21:51</nova:creationTime>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 09:21:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:21:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:38:8e"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <target dev="tapa288a5e5-7b"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:21:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:21:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:21:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:21:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:21:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.653 253665 DEBUG nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.653 253665 DEBUG nova.virt.libvirt.driver [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.654 253665 DEBUG nova.virt.libvirt.vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.654 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.655 253665 DEBUG nova.network.os_vif_util [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.655 253665 DEBUG os_vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.655 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.656 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.658 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.658 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.6610] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.671 253665 INFO os_vif [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.6830] device (tapb8ee5c4d-60): carrier: link connected
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.689 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6fb2d4-b879-4718-adcd-0bd689b50b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.709 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8716d269-b936-4ca6-9c7d-60409248882e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616342, 'reachable_time': 19578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326732, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.729 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af26df6c-22e6-4271-ae95-861e12485824]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:6eab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616342, 'tstamp': 616342}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326737, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.7412] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Nov 22 09:21:52 compute-0 systemd-udevd[326723]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:52 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00678|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00679|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.748 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5545ebd2-9b31-4eff-b2d3-085f1ffb0bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616342, 'reachable_time': 19578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326741, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.7635] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.7645] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00680|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 systemd-machined[215941]: New machine qemu-82-instance-00000032.
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00681|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.780 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:52 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000032.
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59a68b61-22a6-408c-b081-031bfd9f0715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28045278-3c44-49c1-b0c7-bf4a1829d24a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.873 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 NetworkManager[48920]: <info>  [1763803312.8795] manager: (tapb8ee5c4d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 22 09:21:52 compute-0 kernel: tapb8ee5c4d-60: entered promiscuous mode
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.885 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:52 compute-0 ovn_controller[152872]: 2025-11-22T09:21:52Z|00682|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ceph-mon[75021]: pgmap v1730: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Nov 22 09:21:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/614406966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3010543036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1993532534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2602538463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.918 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[72a4d1a3-111d-4467-aae5-687ba85656dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.920 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:21:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:52.921 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'env', 'PROCESS_TAG=haproxy-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.944 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803312.9436872, 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] VM Started (Lifecycle Event)
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.962 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.967 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803312.9438918, 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] VM Paused (Lifecycle Event)
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.981 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.985 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:52 compute-0 nova_compute[253661]: 2025-11-22 09:21:52.998 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.233 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.234 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803313.2335606, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.234 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.236 253665 DEBUG nova.compute.manager [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.240 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.240 253665 DEBUG nova.compute.manager [None req-e89eb5bb-91e5-4d09-871f-92d6ef9e36fc 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.250 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Creating config drive at /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.255 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwg_ym0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.299 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.304 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.327 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803313.234369, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.327 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.342 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.346 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:53 compute-0 podman[326878]: 2025-11-22 09:21:53.349565434 +0000 UTC m=+0.102807936 container create be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.362 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:21:53 compute-0 podman[326878]: 2025-11-22 09:21:53.272307406 +0000 UTC m=+0.025549928 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.403 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwg_ym0d" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:53 compute-0 systemd[1]: Started libpod-conmon-be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573.scope.
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.428 253665 DEBUG nova.storage.rbd_utils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 366b8461-3c67-44e6-a791-f49b343cef76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.432 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config 366b8461-3c67-44e6-a791-f49b343cef76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:21:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42afe2cf81ae6414a51119a14a78e19dac73a4d0a667e90570e3edd3a8c3f4e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Nov 22 09:21:53 compute-0 podman[326878]: 2025-11-22 09:21:53.480600462 +0000 UTC m=+0.233842994 container init be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:21:53 compute-0 podman[326878]: 2025-11-22 09:21:53.486665328 +0000 UTC m=+0.239907830 container start be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:21:53 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [NOTICE]   (326919) : New worker (326936) forked
Nov 22 09:21:53 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [NOTICE]   (326919) : Loading success.
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.572 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.587 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0af922fd-6cbc-4f84-a86e-f337885fc946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.587 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.589 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97d8f02a-4b51-4616-a3ba-1e1a569b6fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.591 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e55d100f-baef-4866-94bb-e5dd31cc079b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.605 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7c5671-b7e9-4293-b134-152abe74fc41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb29dc2-b25d-4b2a-a3fc-94703d9dc9dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.654 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b28965c8-76e6-4ec7-ace3-6b9e42af0e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.660 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35adff75-6e61-476b-a99b-17928d9b3561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 NetworkManager[48920]: <info>  [1763803313.6619] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.694 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a7347765-1005-4432-84c8-3c336e818491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.698 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[adf0ed31-cb75-4db7-a46e-9f3640d78b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 NetworkManager[48920]: <info>  [1763803313.7237] device (tapebc42408-70): carrier: link connected
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.727 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f34c9b76-1777-404b-8765-e0f026662a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.745 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[512c1746-91e9-4132-93e0-9020ff3a10da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616445, 'reachable_time': 40416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326961, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e35d362-ddf6-41fe-a5ea-9b9a6f5b78ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616445, 'tstamp': 616445}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326962, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.794 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9162fd87-139a-4302-8560-c36476c074ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616445, 'reachable_time': 40416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326963, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.847 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bae43518-8dc3-46f7-920a-54089ee65077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.908 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[639549e8-bd00-4130-b90b-8b1fc451a633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.909 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.909 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.910 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:53 compute-0 NetworkManager[48920]: <info>  [1763803313.9130] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Nov 22 09:21:53 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.917 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:53 compute-0 ovn_controller[152872]: 2025-11-22T09:21:53Z|00683|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:53 compute-0 nova_compute[253661]: 2025-11-22 09:21:53.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.942 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a13f25-9358-42c3-9ed5-d44b5eb8d9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.943 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:21:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:53.944 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.363 253665 DEBUG nova.compute.manager [req-346c19c6-f881-4bec-ade2-078027fcff15 req-253ff2c1-10bb-44b3-abe2-dd412ed93d96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.363 253665 DEBUG oslo_concurrency.lockutils [req-346c19c6-f881-4bec-ade2-078027fcff15 req-253ff2c1-10bb-44b3-abe2-dd412ed93d96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.364 253665 DEBUG oslo_concurrency.lockutils [req-346c19c6-f881-4bec-ade2-078027fcff15 req-253ff2c1-10bb-44b3-abe2-dd412ed93d96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.364 253665 DEBUG oslo_concurrency.lockutils [req-346c19c6-f881-4bec-ade2-078027fcff15 req-253ff2c1-10bb-44b3-abe2-dd412ed93d96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.364 253665 DEBUG nova.compute.manager [req-346c19c6-f881-4bec-ade2-078027fcff15 req-253ff2c1-10bb-44b3-abe2-dd412ed93d96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Processing event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.365 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.368 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803314.3683798, 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.368 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] VM Resumed (Lifecycle Event)
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.382 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.688 253665 DEBUG nova.network.neutron [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Updated VIF entry in instance network info cache for port 96d9f3b1-144c-4ec0-bbcc-410ded52262c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.689 253665 DEBUG nova.network.neutron [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Updating instance_info_cache with network_info: [{"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.779 253665 DEBUG oslo_concurrency.lockutils [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-366b8461-3c67-44e6-a791-f49b343cef76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:21:54 compute-0 nova_compute[253661]: 2025-11-22 09:21:54.780 253665 DEBUG nova.compute.manager [req-010ed394-d5f7-4138-9441-decdc42ab8f9 req-a06a7be7-44ef-49fc-a812-c70c053fe475 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Received event network-vif-deleted-196bfb3e-6b8e-4174-8aca-2c419d527946 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.144 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803300.1426623, f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.144 253665 INFO nova.compute.manager [-] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] VM Stopped (Lifecycle Event)
Nov 22 09:21:55 compute-0 podman[326998]: 2025-11-22 09:21:54.29665796 +0000 UTC m=+0.024588245 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.152 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.154 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.158 253665 INFO nova.virt.libvirt.driver [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Instance spawned successfully.
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.159 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.170 253665 DEBUG nova.compute.manager [None req-5597d726-83f9-4222-8fe1-4afe72488c6e - - - - - -] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.179 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.183 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.184 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.185 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.185 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.186 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.187 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.252 253665 INFO nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Took 8.66 seconds to spawn the instance on the hypervisor.
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.253 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.318 253665 INFO nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Took 9.71 seconds to build instance.
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.338 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 869 KiB/s rd, 3.6 MiB/s wr, 177 op/s
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:21:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:21:55 compute-0 ceph-mon[75021]: pgmap v1731: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00684|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00685|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:55 compute-0 podman[326998]: 2025-11-22 09:21:55.827567862 +0000 UTC m=+1.555498157 container create 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.844 253665 DEBUG oslo_concurrency.processutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config 366b8461-3c67-44e6-a791-f49b343cef76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.845 253665 INFO nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Deleting local config drive /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76/disk.config because it was imported into RBD.
Nov 22 09:21:55 compute-0 systemd[1]: Started libpod-conmon-9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0.scope.
Nov 22 09:21:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b17627e96fda81c5e24ef1b93a00d41b2c9308d0b49b7637d3cafa26cfcd7df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:21:55 compute-0 NetworkManager[48920]: <info>  [1763803315.9097] manager: (tap96d9f3b1-14): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Nov 22 09:21:55 compute-0 kernel: tap96d9f3b1-14: entered promiscuous mode
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00686|binding|INFO|Claiming lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c for this chassis.
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00687|binding|INFO|96d9f3b1-144c-4ec0-bbcc-410ded52262c: Claiming fa:16:3e:e5:56:df 10.100.0.6
Nov 22 09:21:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:55.925 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:56:df 10.100.0.6'], port_security=['fa:16:3e:e5:56:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '366b8461-3c67-44e6-a791-f49b343cef76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=96d9f3b1-144c-4ec0-bbcc-410ded52262c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:55 compute-0 podman[326998]: 2025-11-22 09:21:55.932668623 +0000 UTC m=+1.660598928 container init 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:21:55 compute-0 podman[326998]: 2025-11-22 09:21:55.939694233 +0000 UTC m=+1.667624498 container start 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00688|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c ovn-installed in OVS
Nov 22 09:21:55 compute-0 ovn_controller[152872]: 2025-11-22T09:21:55Z|00689|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c up in Southbound
Nov 22 09:21:55 compute-0 nova_compute[253661]: 2025-11-22 09:21:55.944 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:55 compute-0 systemd-machined[215941]: New machine qemu-83-instance-00000045.
Nov 22 09:21:55 compute-0 systemd-udevd[327032]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:21:55 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [NOTICE]   (327029) : New worker (327034) forked
Nov 22 09:21:55 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [NOTICE]   (327029) : Loading success.
Nov 22 09:21:55 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000045.
Nov 22 09:21:55 compute-0 NetworkManager[48920]: <info>  [1763803315.9770] device (tap96d9f3b1-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:21:55 compute-0 NetworkManager[48920]: <info>  [1763803315.9780] device (tap96d9f3b1-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.002 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 96d9f3b1-144c-4ec0-bbcc-410ded52262c in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.004 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.020 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76e18ac1-24fb-4f0c-9441-3475b5d3ed88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.058 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5dfaf2-3877-4ed8-ad6f-1ad0729ffc31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8ad909-2219-491b-8b18-1869481254fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.103 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c87caf90-d314-4ac5-829f-8634db290f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.124 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e198d2d6-3292-43d7-a74b-84a30c143540]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616342, 'reachable_time': 19578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327055, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3426187c-aea5-4f48-8d8f-aa34a428ba3a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616356, 'tstamp': 616356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327056, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616360, 'tstamp': 616360}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327056, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.153 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.153 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:56.154 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.196 253665 DEBUG nova.compute.manager [req-21d28044-496e-4343-9ba0-b368bc3efaae req-08177770-43da-4edf-8532-e5cc538fce04 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.196 253665 DEBUG oslo_concurrency.lockutils [req-21d28044-496e-4343-9ba0-b368bc3efaae req-08177770-43da-4edf-8532-e5cc538fce04 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.197 253665 DEBUG oslo_concurrency.lockutils [req-21d28044-496e-4343-9ba0-b368bc3efaae req-08177770-43da-4edf-8532-e5cc538fce04 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.197 253665 DEBUG oslo_concurrency.lockutils [req-21d28044-496e-4343-9ba0-b368bc3efaae req-08177770-43da-4edf-8532-e5cc538fce04 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.198 253665 DEBUG nova.compute.manager [req-21d28044-496e-4343-9ba0-b368bc3efaae req-08177770-43da-4edf-8532-e5cc538fce04 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Processing event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.326 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803316.3260314, 366b8461-3c67-44e6-a791-f49b343cef76 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] VM Started (Lifecycle Event)
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.328 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.331 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.334 253665 INFO nova.virt.libvirt.driver [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Instance spawned successfully.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.335 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.350 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.355 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.359 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.359 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.359 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.360 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.360 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.361 253665 DEBUG nova.virt.libvirt.driver [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.387 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.388 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803316.326171, 366b8461-3c67-44e6-a791-f49b343cef76 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.388 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] VM Paused (Lifecycle Event)
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.418 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.421 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803316.3310003, 366b8461-3c67-44e6-a791-f49b343cef76 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.421 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] VM Resumed (Lifecycle Event)
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.434 253665 INFO nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Took 8.89 seconds to spawn the instance on the hypervisor.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.434 253665 DEBUG nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.459 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.464 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.464 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.465 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.465 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.465 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] No waiting events found dispatching network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.466 253665 WARNING nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received unexpected event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 for instance with vm_state active and task_state None.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.467 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.468 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.469 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.469 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.469 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.470 253665 WARNING nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.470 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.470 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.471 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.471 253665 DEBUG oslo_concurrency.lockutils [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.471 253665 DEBUG nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.471 253665 WARNING nova.compute.manager [req-1bccf754-f491-4244-9f5d-026945be57ac req-e3885113-7a2b-443f-baf6-a586ed33e5b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.475 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.492 253665 INFO nova.compute.manager [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Took 10.83 seconds to build instance.
Nov 22 09:21:56 compute-0 nova_compute[253661]: 2025-11-22 09:21:56.505 253665 DEBUG oslo_concurrency.lockutils [None req-5bce6f20-0e99-4308-a7a2-a8ace1b8d123 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:56 compute-0 ceph-mon[75021]: pgmap v1732: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 869 KiB/s rd, 3.6 MiB/s wr, 177 op/s
Nov 22 09:21:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:21:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 197 op/s
Nov 22 09:21:57 compute-0 nova_compute[253661]: 2025-11-22 09:21:57.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00690|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00691|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.322 253665 DEBUG nova.compute.manager [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.322 253665 DEBUG oslo_concurrency.lockutils [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.322 253665 DEBUG oslo_concurrency.lockutils [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.323 253665 DEBUG oslo_concurrency.lockutils [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.323 253665 DEBUG nova.compute.manager [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.323 253665 WARNING nova.compute.manager [req-1205b4ee-9e37-4fb4-a093-6a8c841e44a0 req-bb5c81cc-cbb3-4ebe-af99-606f2a259d25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received unexpected event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with vm_state active and task_state None.
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.520 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.521 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.521 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.521 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.521 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.523 253665 INFO nova.compute.manager [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Terminating instance
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.523 253665 DEBUG nova.compute.manager [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:58 compute-0 kernel: tap446e69e1-23 (unregistering): left promiscuous mode
Nov 22 09:21:58 compute-0 NetworkManager[48920]: <info>  [1763803318.6111] device (tap446e69e1-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00692|binding|INFO|Releasing lport 446e69e1-2312-4e4f-8815-cdffe466ff52 from this chassis (sb_readonly=0)
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00693|binding|INFO|Setting lport 446e69e1-2312-4e4f-8815-cdffe466ff52 down in Southbound
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00694|binding|INFO|Removing iface tap446e69e1-23 ovn-installed in OVS
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.646 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:4f:04 10.100.0.11'], port_security=['fa:16:3e:98:4f:04 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4d38da27-a529-427f-bf7a-90bbfbfeb0b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=446e69e1-2312-4e4f-8815-cdffe466ff52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.647 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 446e69e1-2312-4e4f-8815-cdffe466ff52 in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.649 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 22 09:21:58 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000044.scope: Consumed 3.822s CPU time.
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.672 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d22834-0da3-4b58-b4fc-840b808d5dd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 systemd-machined[215941]: Machine qemu-81-instance-00000044 terminated.
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.707 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[45155052-964f-4bf7-bd3d-4c9f6943ea2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.710 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd3ef4e-3be8-446f-b807-197b99c29bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.716 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.717 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.717 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.718 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.718 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.719 253665 INFO nova.compute.manager [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Terminating instance
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.720 253665 DEBUG nova.compute.manager [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.754 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75853908-b186-48da-9f89-d59e754f485e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.768 253665 INFO nova.virt.libvirt.driver [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Instance destroyed successfully.
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.769 253665 DEBUG nova.objects.instance [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'resources' on Instance uuid 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.773 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8297505d-8db9-448b-a35a-1b91513795ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616342, 'reachable_time': 19578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327118, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.781 253665 DEBUG nova.virt.libvirt.vif [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-1',id=68,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:21:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:55Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=4d38da27-a529-427f-bf7a-90bbfbfeb0b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.782 253665 DEBUG nova.network.os_vif_util [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "446e69e1-2312-4e4f-8815-cdffe466ff52", "address": "fa:16:3e:98:4f:04", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446e69e1-23", "ovs_interfaceid": "446e69e1-2312-4e4f-8815-cdffe466ff52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.782 253665 DEBUG nova.network.os_vif_util [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.783 253665 DEBUG os_vif [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap446e69e1-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:21:58 compute-0 kernel: tap96d9f3b1-14 (unregistering): left promiscuous mode
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.799 253665 INFO os_vif [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:4f:04,bridge_name='br-int',has_traffic_filtering=True,id=446e69e1-2312-4e4f-8815-cdffe466ff52,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446e69e1-23')
Nov 22 09:21:58 compute-0 NetworkManager[48920]: <info>  [1763803318.8038] device (tap96d9f3b1-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.808 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee90a2d7-cacb-4bb8-9d06-4681d20584cf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616356, 'tstamp': 616356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327120, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616360, 'tstamp': 616360}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327120, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.810 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00695|binding|INFO|Releasing lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c from this chassis (sb_readonly=0)
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00696|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c down in Southbound
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00697|binding|INFO|Removing iface tap96d9f3b1-14 ovn-installed in OVS
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.832 253665 DEBUG nova.objects.instance [None req-3ac47797-8c2b-4bac-9e79-7317eb49ef83 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.837 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:56:df 10.100.0.6'], port_security=['fa:16:3e:e5:56:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '366b8461-3c67-44e6-a791-f49b343cef76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=96d9f3b1-144c-4ec0-bbcc-410ded52262c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.839 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.839 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.842 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 96d9f3b1-144c-4ec0-bbcc-410ded52262c in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.843 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.844 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8972c666-93ab-43fc-8ebf-404df260fa65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.845 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 namespace which is not needed anymore
Nov 22 09:21:58 compute-0 ceph-mon[75021]: pgmap v1733: 305 pgs: 305 active+clean; 215 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 197 op/s
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803318.8559165, 636b1046-fff8-4a45-8a14-04010b2f282e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.857 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Paused (Lifecycle Event)
Nov 22 09:21:58 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 22 09:21:58 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000045.scope: Consumed 2.661s CPU time.
Nov 22 09:21:58 compute-0 systemd-machined[215941]: Machine qemu-83-instance-00000045 terminated.
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.878 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.883 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.901 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:21:58 compute-0 kernel: tap96d9f3b1-14: entered promiscuous mode
Nov 22 09:21:58 compute-0 NetworkManager[48920]: <info>  [1763803318.9447] manager: (tap96d9f3b1-14): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.951 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00698|binding|INFO|Claiming lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c for this chassis.
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00699|binding|INFO|96d9f3b1-144c-4ec0-bbcc-410ded52262c: Claiming fa:16:3e:e5:56:df 10.100.0.6
Nov 22 09:21:58 compute-0 kernel: tap96d9f3b1-14 (unregistering): left promiscuous mode
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.959 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:56:df 10.100.0.6'], port_security=['fa:16:3e:e5:56:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '366b8461-3c67-44e6-a791-f49b343cef76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=96d9f3b1-144c-4ec0-bbcc-410ded52262c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00700|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c ovn-installed in OVS
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00701|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c up in Southbound
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00702|binding|INFO|Releasing lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c from this chassis (sb_readonly=0)
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00703|binding|INFO|Setting lport 96d9f3b1-144c-4ec0-bbcc-410ded52262c down in Southbound
Nov 22 09:21:58 compute-0 ovn_controller[152872]: 2025-11-22T09:21:58Z|00704|binding|INFO|Removing iface tap96d9f3b1-14 ovn-installed in OVS
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.992 253665 INFO nova.virt.libvirt.driver [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Instance destroyed successfully.
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.993 253665 DEBUG nova.objects.instance [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'resources' on Instance uuid 366b8461-3c67-44e6-a791-f49b343cef76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:21:58 compute-0 nova_compute[253661]: 2025-11-22 09:21:58.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:58.997 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:56:df 10.100.0.6'], port_security=['fa:16:3e:e5:56:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '366b8461-3c67-44e6-a791-f49b343cef76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=96d9f3b1-144c-4ec0-bbcc-410ded52262c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.008 253665 DEBUG nova.virt.libvirt.vif [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:21:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-598537995',display_name='tempest-tempest.common.compute-instance-598537995-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-598537995-2',id=69,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-22T09:21:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-eqk59leb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:56Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=366b8461-3c67-44e6-a791-f49b343cef76,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.008 253665 DEBUG nova.network.os_vif_util [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "address": "fa:16:3e:e5:56:df", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96d9f3b1-14", "ovs_interfaceid": "96d9f3b1-144c-4ec0-bbcc-410ded52262c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.009 253665 DEBUG nova.network.os_vif_util [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.009 253665 DEBUG os_vif [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.010 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96d9f3b1-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.013 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.015 253665 INFO os_vif [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:56:df,bridge_name='br-int',has_traffic_filtering=True,id=96d9f3b1-144c-4ec0-bbcc-410ded52262c,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96d9f3b1-14')
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.206 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [NOTICE]   (326919) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [NOTICE]   (326919) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [WARNING]  (326919) : Exiting Master process...
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [ALERT]    (326919) : Current worker (326936) exited with code 143 (Terminated)
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[326910]: [WARNING]  (326919) : All workers exited. Exiting... (0)
Nov 22 09:21:59 compute-0 systemd[1]: libpod-be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573.scope: Deactivated successfully.
Nov 22 09:21:59 compute-0 podman[327165]: 2025-11-22 09:21:59.275945741 +0000 UTC m=+0.320196713 container died be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:21:59 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:21:59 compute-0 NetworkManager[48920]: <info>  [1763803319.3241] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 ovn_controller[152872]: 2025-11-22T09:21:59Z|00705|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:21:59 compute-0 ovn_controller[152872]: 2025-11-22T09:21:59Z|00706|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:21:59 compute-0 ovn_controller[152872]: 2025-11-22T09:21:59Z|00707|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.348 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-42afe2cf81ae6414a51119a14a78e19dac73a4d0a667e90570e3edd3a8c3f4e7-merged.mount: Deactivated successfully.
Nov 22 09:21:59 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:21:59 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000032.scope: Consumed 6.286s CPU time.
Nov 22 09:21:59 compute-0 systemd-machined[215941]: Machine qemu-82-instance-00000032 terminated.
Nov 22 09:21:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 216 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 296 op/s
Nov 22 09:21:59 compute-0 NetworkManager[48920]: <info>  [1763803319.4767] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.493 253665 DEBUG nova.compute.manager [None req-3ac47797-8c2b-4bac-9e79-7317eb49ef83 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:21:59 compute-0 podman[327165]: 2025-11-22 09:21:59.503462121 +0000 UTC m=+0.547713093 container cleanup be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:21:59 compute-0 systemd[1]: libpod-conmon-be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573.scope: Deactivated successfully.
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.515 253665 DEBUG nova.compute.manager [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-unplugged-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.515 253665 DEBUG oslo_concurrency.lockutils [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.515 253665 DEBUG oslo_concurrency.lockutils [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.515 253665 DEBUG oslo_concurrency.lockutils [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.516 253665 DEBUG nova.compute.manager [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] No waiting events found dispatching network-vif-unplugged-446e69e1-2312-4e4f-8815-cdffe466ff52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.516 253665 DEBUG nova.compute.manager [req-5d15d435-7bec-415c-bcac-1873eb099774 req-2ebdfd84-9523-4643-b570-d0d874a3c9b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-unplugged-446e69e1-2312-4e4f-8815-cdffe466ff52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:21:59 compute-0 podman[327167]: 2025-11-22 09:21:59.520507033 +0000 UTC m=+0.541566144 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 22 09:21:59 compute-0 podman[327261]: 2025-11-22 09:21:59.673927322 +0000 UTC m=+0.143770437 container remove be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa464ebd-4438-4abd-9035-3236bf4ffef1]: (4, ('Sat Nov 22 09:21:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 (be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573)\nbe227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573\nSat Nov 22 09:21:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 (be227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573)\nbe227d37c34e088fd442794dc639b11c1eb6bd5a56d5ae5dc1979322602c8573\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[37a01c8c-937a-4088-9c58-2d6f620e8bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.682 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.683 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 podman[327173]: 2025-11-22 09:21:59.684951409 +0000 UTC m=+0.703524920 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:21:59 compute-0 kernel: tapb8ee5c4d-60: left promiscuous mode
Nov 22 09:21:59 compute-0 nova_compute[253661]: 2025-11-22 09:21:59.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64204e20-be93-4bd3-bbf9-5d497631d541]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[306666bc-0564-4f19-a098-3ce4b4e13bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64722c9e-166d-4058-8378-24a89ad3c032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.741 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7447a8a5-86e5-4e81-ad48-9ee25c5bc095]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616334, 'reachable_time': 23535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327291, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.744 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.745 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b343348d-ba33-44cb-bb7a-f4b8cb16a18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.746 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 96d9f3b1-144c-4ec0-bbcc-410ded52262c in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:21:59 compute-0 systemd[1]: run-netns-ovnmeta\x2db8ee5c4d\x2d6313\x2d4ba5\x2d9d89\x2d7cce7062ce25.mount: Deactivated successfully.
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.747 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.748 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa447399-24ab-49ff-a0c8-7bdde97c5de6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.748 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 96d9f3b1-144c-4ec0-bbcc-410ded52262c in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.749 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.750 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3b286f-e697-4072-a00c-03f173b84c94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.751 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.751 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.752 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9604d12d-62be-4e68-b063-5dad95a9d937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:21:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:21:59.753 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [NOTICE]   (327029) : haproxy version is 2.8.14-c23fe91
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [NOTICE]   (327029) : path to executable is /usr/sbin/haproxy
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [WARNING]  (327029) : Exiting Master process...
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [ALERT]    (327029) : Current worker (327034) exited with code 143 (Terminated)
Nov 22 09:21:59 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327015]: [WARNING]  (327029) : All workers exited. Exiting... (0)
Nov 22 09:21:59 compute-0 systemd[1]: libpod-9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0.scope: Deactivated successfully.
Nov 22 09:21:59 compute-0 podman[327310]: 2025-11-22 09:21:59.901464463 +0000 UTC m=+0.068090457 container died 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0-userdata-shm.mount: Deactivated successfully.
Nov 22 09:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b17627e96fda81c5e24ef1b93a00d41b2c9308d0b49b7637d3cafa26cfcd7df-merged.mount: Deactivated successfully.
Nov 22 09:21:59 compute-0 podman[327310]: 2025-11-22 09:21:59.972686874 +0000 UTC m=+0.139312848 container cleanup 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:21:59 compute-0 systemd[1]: libpod-conmon-9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0.scope: Deactivated successfully.
Nov 22 09:22:00 compute-0 podman[327340]: 2025-11-22 09:22:00.09824236 +0000 UTC m=+0.103227477 container remove 9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d57bb159-5b55-4940-8d0a-609250bd5deb]: (4, ('Sat Nov 22 09:21:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0)\n9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0\nSat Nov 22 09:21:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0)\n9c02a1b10e62e3d474967487e867d379e0fc198713da5889ea1e016c06db98c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.108 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d470058-6f34-4a39-bf4f-398b96af3d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:00 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.135 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a054def9-5169-4e23-bd1e-e02e26196462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.148 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e11b1008-df32-4d44-8c44-7e8c41c50ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ce0b87-a94e-4dbc-a739-d2ecf7d8aaac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f88d71-7fda-4888-9944-303a58f4adb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616438, 'reachable_time': 21684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327360, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.171 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:22:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:00.172 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f5776d-6f9d-41ed-ba0d-295e1aa36ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:00 compute-0 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.490 253665 INFO nova.virt.libvirt.driver [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Deleting instance files /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76_del
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.490 253665 INFO nova.virt.libvirt.driver [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Deletion of /var/lib/nova/instances/366b8461-3c67-44e6-a791-f49b343cef76_del complete
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.494 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.495 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.495 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.495 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.495 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.495 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.496 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.496 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.496 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.496 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 WARNING nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received unexpected event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with vm_state active and task_state deleting.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.497 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 WARNING nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received unexpected event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with vm_state active and task_state deleting.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.498 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 WARNING nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received unexpected event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with vm_state active and task_state deleting.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.499 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-unplugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "366b8461-3c67-44e6-a791-f49b343cef76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG oslo_concurrency.lockutils [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.500 253665 DEBUG nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] No waiting events found dispatching network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.501 253665 WARNING nova.compute.manager [req-4348d7d1-f2d1-4076-a0c8-c785d14856f8 req-893ee81d-768a-418f-b84e-fdff63045b9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received unexpected event network-vif-plugged-96d9f3b1-144c-4ec0-bbcc-410ded52262c for instance with vm_state active and task_state deleting.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.504 253665 INFO nova.virt.libvirt.driver [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Deleting instance files /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1_del
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.505 253665 INFO nova.virt.libvirt.driver [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Deletion of /var/lib/nova/instances/4d38da27-a529-427f-bf7a-90bbfbfeb0b1_del complete
Nov 22 09:22:00 compute-0 ceph-mon[75021]: pgmap v1734: 305 pgs: 305 active+clean; 216 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 296 op/s
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.893 253665 INFO nova.compute.manager [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Took 2.17 seconds to destroy the instance on the hypervisor.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.894 253665 DEBUG oslo.service.loopingcall [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.894 253665 DEBUG nova.compute.manager [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.895 253665 DEBUG nova.network.neutron [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.916 253665 INFO nova.compute.manager [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Took 2.39 seconds to destroy the instance on the hypervisor.
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.917 253665 DEBUG oslo.service.loopingcall [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.917 253665 DEBUG nova.compute.manager [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:22:00 compute-0 nova_compute[253661]: 2025-11-22 09:22:00.917 253665 DEBUG nova.network.neutron [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:22:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 216 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.5 MiB/s wr, 256 op/s
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.697 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803306.6950324, 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.697 253665 INFO nova.compute.manager [-] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] VM Stopped (Lifecycle Event)
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.714 253665 DEBUG nova.compute.manager [None req-6f90b356-0ca2-4008-8cd2-cbabc1f02bd2 - - - - - -] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.898 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.899 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.899 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.899 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.899 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] No waiting events found dispatching network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.900 253665 WARNING nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received unexpected event network-vif-plugged-446e69e1-2312-4e4f-8815-cdffe466ff52 for instance with vm_state active and task_state deleting.
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.900 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.900 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.900 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.900 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.901 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.901 253665 WARNING nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state suspended and task_state resuming.
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.901 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.901 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.901 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.902 253665 DEBUG oslo_concurrency.lockutils [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.902 253665 DEBUG nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:01 compute-0 nova_compute[253661]: 2025-11-22 09:22:01.902 253665 WARNING nova.compute.manager [req-2683059e-ab87-4b1e-90ab-5e43704a083a req-c22e5b94-67ad-476b-b3e4-532872e3c9b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state suspended and task_state resuming.
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.109 253665 INFO nova.compute.manager [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Resuming
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.110 253665 DEBUG nova.objects.instance [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.142 253665 DEBUG oslo_concurrency.lockutils [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.143 253665 DEBUG oslo_concurrency.lockutils [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.143 253665 DEBUG nova.network.neutron [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.190 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803307.1895275, 90d94c7c-4c60-4842-b18c-1eecc3997b15 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.191 253665 INFO nova.compute.manager [-] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] VM Stopped (Lifecycle Event)
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.209 253665 DEBUG nova.compute.manager [None req-3d33af02-1be9-4f18-bac4-2b3a72ecba77 - - - - - -] [instance: 90d94c7c-4c60-4842-b18c-1eecc3997b15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.231 253665 DEBUG nova.network.neutron [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.323 253665 INFO nova.compute.manager [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Took 1.41 seconds to deallocate network for instance.
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.405 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.405 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.494 253665 DEBUG oslo_concurrency.processutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014579441582014715 of space, bias 1.0, pg target 0.43738324746044144 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:22:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.761 253665 DEBUG nova.network.neutron [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.801 253665 INFO nova.compute.manager [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Took 1.91 seconds to deallocate network for instance.
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.859 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:02 compute-0 ceph-mon[75021]: pgmap v1735: 305 pgs: 305 active+clean; 216 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.5 MiB/s wr, 256 op/s
Nov 22 09:22:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571087695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.963 253665 DEBUG oslo_concurrency.processutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.969 253665 DEBUG nova.compute.provider_tree [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:02 compute-0 nova_compute[253661]: 2025-11-22 09:22:02.985 253665 DEBUG nova.scheduler.client.report [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.006 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.009 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.045 253665 INFO nova.scheduler.client.report [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Deleted allocations for instance 4d38da27-a529-427f-bf7a-90bbfbfeb0b1
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.069 253665 DEBUG nova.compute.manager [req-ca2c1dbe-6b45-4cc7-8cdc-835543d1b508 req-941fc93e-0553-4194-9907-6f137880cb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Received event network-vif-deleted-446e69e1-2312-4e4f-8815-cdffe466ff52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.070 253665 DEBUG nova.compute.manager [req-ca2c1dbe-6b45-4cc7-8cdc-835543d1b508 req-941fc93e-0553-4194-9907-6f137880cb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Received event network-vif-deleted-96d9f3b1-144c-4ec0-bbcc-410ded52262c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.091 253665 DEBUG oslo_concurrency.lockutils [None req-afc5d3f3-b7ea-4c8f-a588-7be13029bbb1 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "4d38da27-a529-427f-bf7a-90bbfbfeb0b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.099 253665 DEBUG oslo_concurrency.processutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 145 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.5 MiB/s wr, 320 op/s
Nov 22 09:22:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587625164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.550 253665 DEBUG oslo_concurrency.processutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.557 253665 DEBUG nova.compute.provider_tree [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.574 253665 DEBUG nova.scheduler.client.report [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.593 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.620 253665 INFO nova.scheduler.client.report [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Deleted allocations for instance 366b8461-3c67-44e6-a791-f49b343cef76
Nov 22 09:22:03 compute-0 nova_compute[253661]: 2025-11-22 09:22:03.670 253665 DEBUG oslo_concurrency.lockutils [None req-9307a2b3-25ac-4358-8d33-2a56cceb797d c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "366b8461-3c67-44e6-a791-f49b343cef76" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3571087695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2587625164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.968030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803323968443, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1609, "num_deletes": 251, "total_data_size": 2430403, "memory_usage": 2469504, "flush_reason": "Manual Compaction"}
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803323984035, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2394878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33818, "largest_seqno": 35426, "table_properties": {"data_size": 2387391, "index_size": 4366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16323, "raw_average_key_size": 20, "raw_value_size": 2372224, "raw_average_value_size": 2961, "num_data_blocks": 194, "num_entries": 801, "num_filter_entries": 801, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803167, "oldest_key_time": 1763803167, "file_creation_time": 1763803323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16094 microseconds, and 7669 cpu microseconds.
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.984116) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2394878 bytes OK
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.984153) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.990833) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.990890) EVENT_LOG_v1 {"time_micros": 1763803323990878, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.990925) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2423281, prev total WAL file size 2423281, number of live WAL files 2.
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.992122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2338KB)], [74(9406KB)]
Nov 22 09:22:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803323992182, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12026692, "oldest_snapshot_seqno": -1}
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.013 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6043 keys, 10342785 bytes, temperature: kUnknown
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803324064572, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10342785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10299206, "index_size": 27341, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 152639, "raw_average_key_size": 25, "raw_value_size": 10187627, "raw_average_value_size": 1685, "num_data_blocks": 1109, "num_entries": 6043, "num_filter_entries": 6043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.064853) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10342785 bytes
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.068220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.9 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(9.3) write-amplify(4.3) OK, records in: 6557, records dropped: 514 output_compression: NoCompression
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.068244) EVENT_LOG_v1 {"time_micros": 1763803324068232, "job": 42, "event": "compaction_finished", "compaction_time_micros": 72493, "compaction_time_cpu_micros": 22978, "output_level": 6, "num_output_files": 1, "total_output_size": 10342785, "num_input_records": 6557, "num_output_records": 6043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803324068824, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803324070804, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:03.992020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.070867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.070871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.070872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.070874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:22:04.070875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.109 253665 DEBUG nova.network.neutron [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.125 253665 DEBUG oslo_concurrency.lockutils [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.129 253665 DEBUG nova.virt.libvirt.vif [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:21:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.130 253665 DEBUG nova.network.os_vif_util [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.130 253665 DEBUG nova.network.os_vif_util [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.131 253665 DEBUG os_vif [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.132 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.132 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.135 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.135 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.135 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.136 253665 INFO os_vif [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.169 253665 DEBUG nova.objects.instance [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:04 compute-0 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.2822] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Nov 22 09:22:04 compute-0 ovn_controller[152872]: 2025-11-22T09:22:04Z|00708|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 09:22:04 compute-0 systemd-udevd[327439]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:04 compute-0 ovn_controller[152872]: 2025-11-22T09:22:04Z|00709|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.3008] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.3019] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:04 compute-0 ovn_controller[152872]: 2025-11-22T09:22:04Z|00710|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 podman[327405]: 2025-11-22 09:22:04.314399551 +0000 UTC m=+0.150678403 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:22:04 compute-0 systemd-machined[215941]: New machine qemu-84-instance-00000032.
Nov 22 09:22:04 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000032.
Nov 22 09:22:04 compute-0 ovn_controller[152872]: 2025-11-22T09:22:04Z|00711|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.354 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.355 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.357 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.371 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c154229-6cf4-40de-8523-5bcb391a6605]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.372 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.375 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.375 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7467c36-1a10-4668-be85-e934c7a0ed0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.376 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26321ca1-884c-413a-b3de-ee30f5aaadcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.389 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1b41a05b-a722-46a2-ac0f-79b4c98869dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.418 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2ecccf-b26b-4c8e-a1cc-e1e9a44927c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.448 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4716e3-6c0a-4598-82d7-7aca732b3047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.454 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1276069-d1ad-4e84-9914-730fc9e46075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.4567] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.497 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[00ae9f42-205d-4e3b-b10c-08774306992f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.502 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a3636740-ab06-4f18-95d1-d9bce410a107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.5327] device (tapebc42408-70): carrier: link connected
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.542 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[993ffb26-c1f3-49d4-a5d5-69985be41617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4402e853-10d4-4c6f-b86a-797108736af3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617527, 'reachable_time': 29735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327503, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.588 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f67aa42f-85f1-4e33-b8de-c0dd2d43c8b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617527, 'tstamp': 617527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327515, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a720f7b2-d877-41f4-9bc0-d186df549daa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617527, 'reachable_time': 29735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327517, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.642 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd428df7-9943-4fdd-b551-472626a3e4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.700 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[856610a8-e596-40dd-8261-ecdd7660043d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.701 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.702 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.702 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 NetworkManager[48920]: <info>  [1763803324.7051] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.708 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:04 compute-0 ovn_controller[152872]: 2025-11-22T09:22:04Z|00712|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.711 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[025e898f-943a-4e65-abc3-fdeb52c1d253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.713 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:22:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:04.715 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.771 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.771 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803324.7709281, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.771 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.788 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.792 253665 DEBUG nova.compute.manager [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.793 253665 DEBUG nova.objects.instance [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.795 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.810 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance running successfully.
Nov 22 09:22:04 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.816 253665 DEBUG nova.compute.manager [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.816 253665 DEBUG oslo_concurrency.lockutils [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.817 253665 DEBUG oslo_concurrency.lockutils [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.817 253665 DEBUG oslo_concurrency.lockutils [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.817 253665 DEBUG nova.compute.manager [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.817 253665 WARNING nova.compute.manager [req-933f57e6-db9f-4926-99b1-00ec8556421a req-9cf34760-f09b-40a5-aa2d-999ce361cb12 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state suspended and task_state resuming.
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.819 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.819 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803324.7752755, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.819 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.821 253665 DEBUG nova.virt.libvirt.guest [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.821 253665 DEBUG nova.compute.manager [None req-894b88c4-27a9-475a-9c70-7efeedc219e1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.843 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.845 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:04 compute-0 nova_compute[253661]: 2025-11-22 09:22:04.863 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:22:04 compute-0 ceph-mon[75021]: pgmap v1736: 305 pgs: 305 active+clean; 145 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.5 MiB/s wr, 320 op/s
Nov 22 09:22:05 compute-0 podman[327554]: 2025-11-22 09:22:05.091577159 +0000 UTC m=+0.061419546 container create dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:22:05 compute-0 systemd[1]: Started libpod-conmon-dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0.scope.
Nov 22 09:22:05 compute-0 podman[327554]: 2025-11-22 09:22:05.053685503 +0000 UTC m=+0.023527890 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:22:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4280448f8ad99374d3ce396f8af8dbbd44980447b52b74ffe854cff7923f481/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:05 compute-0 podman[327554]: 2025-11-22 09:22:05.200669186 +0000 UTC m=+0.170511583 container init dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:22:05 compute-0 podman[327554]: 2025-11-22 09:22:05.207460901 +0000 UTC m=+0.177303268 container start dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:22:05 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [NOTICE]   (327572) : New worker (327574) forked
Nov 22 09:22:05 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [NOTICE]   (327572) : Loading success.
Nov 22 09:22:05 compute-0 nova_compute[253661]: 2025-11-22 09:22:05.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 123 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 269 op/s
Nov 22 09:22:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:06.756 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:07 compute-0 ceph-mon[75021]: pgmap v1737: 305 pgs: 305 active+clean; 123 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 269 op/s
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.262 253665 DEBUG nova.compute.manager [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.263 253665 DEBUG oslo_concurrency.lockutils [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.263 253665 DEBUG oslo_concurrency.lockutils [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.264 253665 DEBUG oslo_concurrency.lockutils [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.264 253665 DEBUG nova.compute.manager [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.264 253665 WARNING nova.compute.manager [req-41df7a37-0a6d-4456-a9dc-15059b4a1473 req-21592d80-373e-4e12-827d-460802089301 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 09:22:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.437 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.438 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.440 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.441 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.441 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.442 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.442 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.444 253665 INFO nova.compute.manager [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Terminating instance
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.446 253665 DEBUG nova.compute.manager [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.461 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 123 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 264 op/s
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.476 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.476 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 09:22:07 compute-0 NetworkManager[48920]: <info>  [1763803327.4924] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 ovn_controller[152872]: 2025-11-22T09:22:07Z|00713|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 09:22:07 compute-0 ovn_controller[152872]: 2025-11-22T09:22:07Z|00714|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 09:22:07 compute-0 ovn_controller[152872]: 2025-11-22T09:22:07Z|00715|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.501 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.510 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.512 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.514 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.518 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e783645-aa24-4f7a-b769-2137b917d77d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.519 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.547 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.548 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:07 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 09:22:07 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000032.scope: Consumed 3.150s CPU time.
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.555 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.556 253665 INFO nova.compute.claims [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:07 compute-0 systemd-machined[215941]: Machine qemu-84-instance-00000032 terminated.
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.579 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:07 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [NOTICE]   (327572) : haproxy version is 2.8.14-c23fe91
Nov 22 09:22:07 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [NOTICE]   (327572) : path to executable is /usr/sbin/haproxy
Nov 22 09:22:07 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [WARNING]  (327572) : Exiting Master process...
Nov 22 09:22:07 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [ALERT]    (327572) : Current worker (327574) exited with code 143 (Terminated)
Nov 22 09:22:07 compute-0 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[327568]: [WARNING]  (327572) : All workers exited. Exiting... (0)
Nov 22 09:22:07 compute-0 systemd[1]: libpod-dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0.scope: Deactivated successfully.
Nov 22 09:22:07 compute-0 podman[327607]: 2025-11-22 09:22:07.669981205 +0000 UTC m=+0.045938002 container died dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.684 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.685 253665 DEBUG nova.objects.instance [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.691 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0-userdata-shm.mount: Deactivated successfully.
Nov 22 09:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4280448f8ad99374d3ce396f8af8dbbd44980447b52b74ffe854cff7923f481-merged.mount: Deactivated successfully.
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.723 253665 DEBUG nova.virt.libvirt.vif [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.725 253665 DEBUG nova.network.os_vif_util [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:07 compute-0 podman[327607]: 2025-11-22 09:22:07.726649634 +0000 UTC m=+0.102606431 container cleanup dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.727 253665 DEBUG nova.network.os_vif_util [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.727 253665 DEBUG os_vif [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.730 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:07 compute-0 systemd[1]: libpod-conmon-dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0.scope: Deactivated successfully.
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.737 253665 INFO os_vif [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')
Nov 22 09:22:07 compute-0 podman[327648]: 2025-11-22 09:22:07.800679914 +0000 UTC m=+0.048493572 container remove dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.808 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d7b728-b23a-494a-81f0-447d6895dbb7]: (4, ('Sat Nov 22 09:22:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0)\ndff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0\nSat Nov 22 09:22:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (dff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0)\ndff239b6fe3f0f9d2ad0cdd3a04015e51f58af50ad16d53f5cb4947c47c8c2d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.810 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0411d40e-2f73-4b21-8c3a-e23ce99c80a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.818 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:07 compute-0 kernel: tapebc42408-70: left promiscuous mode
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8e448ac2-78b8-4a94-b39e-37ea1e96a704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 nova_compute[253661]: 2025-11-22 09:22:07.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e976a4-1b24-4fee-8be6-a3b6eb651492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.845 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7997148-7a88-4414-bbb2-072fbab21a0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95524d86-cc02-4805-94c6-5422fedb1d31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617518, 'reachable_time': 22780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327699, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:07 compute-0 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.869 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:22:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:07.869 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f5334508-5153-410b-b571-1ebb1a42b73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/215263375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.161 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.166 253665 DEBUG nova.compute.provider_tree [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.181 253665 DEBUG nova.scheduler.client.report [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.213 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.214 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.216 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.222 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.223 253665 INFO nova.compute.claims [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.302 253665 INFO nova.virt.libvirt.driver [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Deleting instance files /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e_del
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.304 253665 INFO nova.virt.libvirt.driver [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Deletion of /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e_del complete
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.332 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.332 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.388 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.404 253665 INFO nova.compute.manager [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.405 253665 DEBUG oslo.service.loopingcall [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.405 253665 DEBUG nova.compute.manager [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.405 253665 DEBUG nova.network.neutron [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.416 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.445 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.557 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.560 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.561 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Creating image(s)
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.591 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.622 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.648 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.653 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.698 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.699 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.709 253665 DEBUG nova.policy [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c57e583444d64b2a80c940052ff754eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.727 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.740 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.740 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.741 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.741 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.761 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.764 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 688e9583-4ea7-4a94-b8d0-2758f83279af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.822 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902222173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.948 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.958 253665 DEBUG nova.compute.provider_tree [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.973 253665 DEBUG nova.scheduler.client.report [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.996 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.996 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:08 compute-0 nova_compute[253661]: 2025-11-22 09:22:08.998 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.005 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.005 253665 INFO nova.compute.claims [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:09 compute-0 ceph-mon[75021]: pgmap v1738: 305 pgs: 305 active+clean; 123 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 27 KiB/s wr, 264 op/s
Nov 22 09:22:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/215263375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/902222173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.077 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.077 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.100 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.134 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.192 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 688e9583-4ea7-4a94-b8d0-2758f83279af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.216 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.292 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] resizing rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.330 253665 DEBUG nova.policy [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c57e583444d64b2a80c940052ff754eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.361 253665 DEBUG nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.361 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.362 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.362 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.362 253665 DEBUG nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.362 253665 DEBUG nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.363 253665 DEBUG nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.363 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.363 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.363 253665 DEBUG oslo_concurrency.lockutils [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.364 253665 DEBUG nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.364 253665 WARNING nova.compute.manager [req-0879c5ce-64bf-49f7-b496-b19fe2be251a req-6634862c-1e58-4aac-96c8-ba57c9bf9ff9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state deleting.
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.368 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.369 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.369 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Creating image(s)
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.395 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.422 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.449 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.453 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 86 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 210 op/s
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.523 253665 DEBUG nova.objects.instance [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'migration_context' on Instance uuid 688e9583-4ea7-4a94-b8d0-2758f83279af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.532 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.532 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.533 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.533 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.552 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.556 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.714 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.715 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Ensure instance console log exists: /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.716 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.716 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.716 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399051053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.744 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.751 253665 DEBUG nova.compute.provider_tree [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.767 253665 DEBUG nova.scheduler.client.report [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.786 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.787 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.843 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:09 compute-0 nova_compute[253661]: 2025-11-22 09:22:09.844 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.013 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.044 253665 INFO nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.110 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Successfully created port: 14926b0c-50b5-475d-9ee6-f20c27dab8f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.116 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] resizing rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/399051053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.241 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.389 253665 DEBUG nova.policy [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a611caa9b8c247f083ce8b67780fbc01', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.399 253665 DEBUG nova.objects.instance [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'migration_context' on Instance uuid 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.411 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.411 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Ensure instance console log exists: /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.412 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.412 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.412 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.463 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.466 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.467 253665 INFO nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Creating image(s)
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.490 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.522 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.554 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.560 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.629 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.630 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.631 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.631 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.656 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.661 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b5f49b05-1b70-4479-98d9-83b995037a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.705 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Successfully created port: a6df8b13-e328-4092-b9ed-01f8d6de489a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.734 253665 DEBUG nova.network.neutron [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.774 253665 INFO nova.compute.manager [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 2.37 seconds to deallocate network for instance.
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.904 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:10 compute-0 nova_compute[253661]: 2025-11-22 09:22:10.904 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.004 253665 DEBUG oslo_concurrency.processutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.171 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Successfully updated port: 14926b0c-50b5-475d-9ee6-f20c27dab8f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.187 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b5f49b05-1b70-4479-98d9-83b995037a41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.240 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] resizing rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:11 compute-0 ceph-mon[75021]: pgmap v1739: 305 pgs: 305 active+clean; 86 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 210 op/s
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.279 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.279 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquired lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.280 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.337 253665 DEBUG nova.objects.instance [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'migration_context' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.353 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.354 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Ensure instance console log exists: /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.354 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.355 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.355 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/125391595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.470 253665 DEBUG oslo_concurrency.processutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 86 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.3 KiB/s wr, 99 op/s
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.475 253665 DEBUG nova.compute.provider_tree [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.489 253665 DEBUG nova.scheduler.client.report [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.530 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.631 253665 INFO nova.scheduler.client.report [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Deleted allocations for instance 636b1046-fff8-4a45-8a14-04010b2f282e
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.650 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.889 253665 DEBUG nova.compute.manager [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-changed-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.890 253665 DEBUG nova.compute.manager [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Refreshing instance network info cache due to event network-changed-14926b0c-50b5-475d-9ee6-f20c27dab8f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.890 253665 DEBUG oslo_concurrency.lockutils [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:11 compute-0 nova_compute[253661]: 2025-11-22 09:22:11.904 253665 DEBUG oslo_concurrency.lockutils [None req-e952ea06-ab5b-4fde-b233-ea9bc18c6aa8 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:12 compute-0 nova_compute[253661]: 2025-11-22 09:22:12.063 253665 DEBUG nova.compute.manager [req-eb2a4b31-1c98-420c-8165-174e792b5e40 req-1e06cc57-0678-4f1d-873a-737e95e80bb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-deleted-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/125391595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:22:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910484831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:22:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:22:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910484831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:22:12 compute-0 nova_compute[253661]: 2025-11-22 09:22:12.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:12 compute-0 nova_compute[253661]: 2025-11-22 09:22:12.858 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:12 compute-0 nova_compute[253661]: 2025-11-22 09:22:12.858 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:12 compute-0 nova_compute[253661]: 2025-11-22 09:22:12.896 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.083 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.084 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.089 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.090 253665 INFO nova.compute.claims [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:13 compute-0 ceph-mon[75021]: pgmap v1740: 305 pgs: 305 active+clean; 86 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.3 KiB/s wr, 99 op/s
Nov 22 09:22:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/910484831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:22:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/910484831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.304 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.367 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Successfully created port: db173c1b-18d4-496e-96f0-d12c680c680e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.428 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Updating instance_info_cache with network_info: [{"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 138 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 182 op/s
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.624 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Releasing lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.625 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Instance network_info: |[{"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.625 253665 DEBUG oslo_concurrency.lockutils [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.626 253665 DEBUG nova.network.neutron [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Refreshing network info cache for port 14926b0c-50b5-475d-9ee6-f20c27dab8f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.628 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Start _get_guest_xml network_info=[{"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.632 253665 WARNING nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.637 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.638 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.644 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.644 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.644 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.645 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.645 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.646 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.646 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.646 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.646 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.647 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.647 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.647 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.647 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.648 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.651 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.753 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803318.7519834, 4d38da27-a529-427f-bf7a-90bbfbfeb0b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.754 253665 INFO nova.compute.manager [-] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] VM Stopped (Lifecycle Event)
Nov 22 09:22:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363157050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.770 253665 DEBUG nova.compute.manager [None req-dc7dc2bb-5e4f-418e-8769-fd43aeacbff4 - - - - - -] [instance: 4d38da27-a529-427f-bf7a-90bbfbfeb0b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.785 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.792 253665 DEBUG nova.compute.provider_tree [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.804 253665 DEBUG nova.scheduler.client.report [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.907 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.909 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.982 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.982 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.986 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803318.983729, 366b8461-3c67-44e6-a791-f49b343cef76 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:13 compute-0 nova_compute[253661]: 2025-11-22 09:22:13.986 253665 INFO nova.compute.manager [-] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] VM Stopped (Lifecycle Event)
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.001 253665 DEBUG nova.compute.manager [None req-caf09fa5-dc53-422f-ad97-e3f80b2f0892 - - - - - -] [instance: 366b8461-3c67-44e6-a791-f49b343cef76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.048 253665 INFO nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244672161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3363157050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2244672161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.322 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.326 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.350 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.354 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.459 253665 DEBUG nova.policy [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a611caa9b8c247f083ce8b67780fbc01', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.518 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.521 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.522 253665 INFO nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Creating image(s)
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.551 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.578 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.611 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.616 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.690 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.691 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.692 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.692 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.715 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.719 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 41d22150-10c5-4aac-84a2-bebea895286e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.838 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Successfully updated port: a6df8b13-e328-4092-b9ed-01f8d6de489a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685128943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.865 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.867 253665 DEBUG nova.virt.libvirt.vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-2',id=71,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:09Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=023bf3c2-b4eb-4f0f-b471-a20fb9907e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.867 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.868 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.870 253665 DEBUG nova.objects.instance [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.880 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.880 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquired lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.880 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.884 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <uuid>023bf3c2-b4eb-4f0f-b471-a20fb9907e1e</uuid>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <name>instance-00000047</name>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:name>tempest-MultipleCreateTestJSON-server-2050555024-2</nova:name>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:13</nova:creationTime>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:user uuid="c57e583444d64b2a80c940052ff754eb">tempest-MultipleCreateTestJSON-1224529962-project-member</nova:user>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:project uuid="fb7ce66b616d43838eb71b9c62cb2354">tempest-MultipleCreateTestJSON-1224529962</nova:project>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <nova:port uuid="14926b0c-50b5-475d-9ee6-f20c27dab8f0">
Nov 22 09:22:14 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="serial">023bf3c2-b4eb-4f0f-b471-a20fb9907e1e</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="uuid">023bf3c2-b4eb-4f0f-b471-a20fb9907e1e</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk">
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config">
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:df:7b:70"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <target dev="tap14926b0c-50"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/console.log" append="off"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:14 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:14 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:14 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:14 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:14 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.884 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Preparing to wait for external event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.885 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.885 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.885 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.886 253665 DEBUG nova.virt.libvirt.vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-2',id=71,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:09Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=023bf3c2-b4eb-4f0f-b471-a20fb9907e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.886 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.886 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.887 253665 DEBUG os_vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.888 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.888 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.893 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14926b0c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.894 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14926b0c-50, col_values=(('external_ids', {'iface-id': '14926b0c-50b5-475d-9ee6-f20c27dab8f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:7b:70', 'vm-uuid': '023bf3c2-b4eb-4f0f-b471-a20fb9907e1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:14 compute-0 NetworkManager[48920]: <info>  [1763803334.9296] manager: (tap14926b0c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.938 253665 DEBUG nova.compute.manager [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-changed-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.938 253665 DEBUG nova.compute.manager [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Refreshing instance network info cache due to event network-changed-a6df8b13-e328-4092-b9ed-01f8d6de489a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.938 253665 DEBUG oslo_concurrency.lockutils [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:14 compute-0 nova_compute[253661]: 2025-11-22 09:22:14.939 253665 INFO os_vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50')
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.003 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.004 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.004 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No VIF found with MAC fa:16:3e:df:7b:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.005 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Using config drive
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.028 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.173 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.404 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Creating config drive at /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.410 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcsrbah7m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:15 compute-0 ceph-mon[75021]: pgmap v1741: 305 pgs: 305 active+clean; 138 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 182 op/s
Nov 22 09:22:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2685128943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 180 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 5.3 MiB/s wr, 133 op/s
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.567 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcsrbah7m" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.612 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.617 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.673 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 41d22150-10c5-4aac-84a2-bebea895286e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.954s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.759 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.759 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.770 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] resizing rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.827 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.963 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.965 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Deleting local config drive /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e/disk.config because it was imported into RBD.
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.973 253665 DEBUG nova.objects.instance [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 41d22150-10c5-4aac-84a2-bebea895286e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.979 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.979 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.986 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.987 253665 INFO nova.compute.claims [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.990 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.990 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Ensure instance console log exists: /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.991 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.991 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.991 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:15 compute-0 nova_compute[253661]: 2025-11-22 09:22:15.993 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Successfully updated port: db173c1b-18d4-496e-96f0-d12c680c680e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.014 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Successfully created port: 3862e425-07d8-49da-b68e-63d3381670f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.032 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.033 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquired lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.033 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:16 compute-0 kernel: tap14926b0c-50: entered promiscuous mode
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.0436] manager: (tap14926b0c-50): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Nov 22 09:22:16 compute-0 ovn_controller[152872]: 2025-11-22T09:22:16Z|00716|binding|INFO|Claiming lport 14926b0c-50b5-475d-9ee6-f20c27dab8f0 for this chassis.
Nov 22 09:22:16 compute-0 ovn_controller[152872]: 2025-11-22T09:22:16Z|00717|binding|INFO|14926b0c-50b5-475d-9ee6-f20c27dab8f0: Claiming fa:16:3e:df:7b:70 10.100.0.14
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.149 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:7b:70 10.100.0.14'], port_security=['fa:16:3e:df:7b:70 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '023bf3c2-b4eb-4f0f-b471-a20fb9907e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14926b0c-50b5-475d-9ee6-f20c27dab8f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.150 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14926b0c-50b5-475d-9ee6-f20c27dab8f0 in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 bound to our chassis
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.152 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:22:16 compute-0 ovn_controller[152872]: 2025-11-22T09:22:16Z|00718|binding|INFO|Setting lport 14926b0c-50b5-475d-9ee6-f20c27dab8f0 ovn-installed in OVS
Nov 22 09:22:16 compute-0 ovn_controller[152872]: 2025-11-22T09:22:16Z|00719|binding|INFO|Setting lport 14926b0c-50b5-475d-9ee6-f20c27dab8f0 up in Southbound
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 systemd-machined[215941]: New machine qemu-85-instance-00000047.
Nov 22 09:22:16 compute-0 systemd-udevd[328593]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc71772-6a17-422a-9efe-c1bb9f631e7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.172 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8ee5c4d-61 in ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.175 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8ee5c4d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.175 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66a8624e-918d-4094-b949-6a972d9f30a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.177 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6012b4-cb24-4671-962f-b99e1358eabf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000047.
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.1851] device (tap14926b0c-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.1865] device (tap14926b0c-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.193 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ef3ee1-52c8-4b2f-9072-20ab820c0e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04bba3c4-eb8f-4ea3-9143-71ec5c3753bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.264 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6e4293-8142-4e40-8e54-6bc1d8d64f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.270 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[617e5962-7bd9-4e8e-b6ae-a2eb3ee13de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.2715] manager: (tapb8ee5c4d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.338 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.361 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d70c105d-eccc-4e2e-8577-92cd81ec6726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.366 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d955b540-63c7-4945-9e2f-9283664836fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.391 253665 DEBUG nova.network.neutron [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Updated VIF entry in instance network info cache for port 14926b0c-50b5-475d-9ee6-f20c27dab8f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.392 253665 DEBUG nova.network.neutron [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Updating instance_info_cache with network_info: [{"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.3985] device (tapb8ee5c4d-60): carrier: link connected
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.405 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9b2a6d-1fd4-4bd9-a96c-fb8a0eea7dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.406 253665 DEBUG oslo_concurrency.lockutils [req-8d0eeea9-6fb0-4c26-ac4b-500cd2d5e05a req-d4fbb0a1-a355-44f8-bd4d-81121580ac59 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32c5e89d-f8e5-470c-b5ff-458b9cae2605]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618713, 'reachable_time': 22936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328626, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ceph-mon[75021]: pgmap v1742: 305 pgs: 305 active+clean; 180 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 5.3 MiB/s wr, 133 op/s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee16056-5499-451e-9321-ab5fa8d52c3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:6eab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618713, 'tstamp': 618713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328627, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7793f6a8-f641-45b7-9697-9cd91ee6c381]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618713, 'reachable_time': 22936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328628, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f41527a-86ad-4d56-921e-9d566de7a775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.594 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b005f6-6c5d-4b64-8f71-53db050a157b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.596 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.597 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.598 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:16 compute-0 kernel: tapb8ee5c4d-60: entered promiscuous mode
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 NetworkManager[48920]: <info>  [1763803336.6029] manager: (tapb8ee5c4d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 ovn_controller[152872]: 2025-11-22T09:22:16Z|00720|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.613 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e326ff97-d63e-44d7-9cbc-e64a884284b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.630 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.pid.haproxy
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:22:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:16.632 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'env', 'PROCESS_TAG=haproxy-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8ee5c4d-6313-4ba5-9d89-7cce7062ce25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:22:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524216862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.812 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.822 253665 DEBUG nova.compute.provider_tree [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.837 253665 DEBUG nova.scheduler.client.report [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.867 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.868 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.935 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.935 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.962 253665 INFO nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:16 compute-0 nova_compute[253661]: 2025-11-22 09:22:16.991 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.030 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:17 compute-0 podman[328681]: 2025-11-22 09:22:17.042218508 +0000 UTC m=+0.069295007 container create 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:17 compute-0 systemd[1]: Started libpod-conmon-44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4.scope.
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.098 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.100 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.100 253665 INFO nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Creating image(s)
Nov 22 09:22:17 compute-0 podman[328681]: 2025-11-22 09:22:17.006003852 +0000 UTC m=+0.033080381 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:22:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b1cb886d6c0d9f91b091e7d096ee2fbe20284e4808999d06dd7ab7716e2fd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.132 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.163 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:17 compute-0 podman[328681]: 2025-11-22 09:22:17.167087396 +0000 UTC m=+0.194163915 container init 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:17 compute-0 podman[328681]: 2025-11-22 09:22:17.173273926 +0000 UTC m=+0.200350425 container start 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:22:17 compute-0 ovn_controller[152872]: 2025-11-22T09:22:17Z|00721|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.201 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:17 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [NOTICE]   (328743) : New worker (328756) forked
Nov 22 09:22:17 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [NOTICE]   (328743) : Loading success.
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.226 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.261 253665 DEBUG nova.network.neutron [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Updating instance_info_cache with network_info: [{"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.277 253665 DEBUG nova.policy [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a611caa9b8c247f083ce8b67780fbc01', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.285 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Releasing lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.285 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Instance network_info: |[{"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.286 253665 DEBUG oslo_concurrency.lockutils [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.286 253665 DEBUG nova.network.neutron [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Refreshing network info cache for port a6df8b13-e328-4092-b9ed-01f8d6de489a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.289 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Start _get_guest_xml network_info=[{"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.296 253665 WARNING nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:17 compute-0 ovn_controller[152872]: 2025-11-22T09:22:17Z|00722|binding|INFO|Releasing lport 170e2a44-25ef-429e-ae6e-cbd8fd0b45d8 from this chassis (sb_readonly=0)
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.308 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.308 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.309 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.310 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.310 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.310 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.331 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.336 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d4350763-7fea-40fa-ade2-5aada492b3c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.380 253665 DEBUG nova.compute.manager [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-changed-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.380 253665 DEBUG nova.compute.manager [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Refreshing instance network info cache due to event network-changed-db173c1b-18d4-496e-96f0-d12c680c680e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.380 253665 DEBUG oslo_concurrency.lockutils [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.387 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.388 253665 DEBUG nova.virt.libvirt.host [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.389 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.389 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.389 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.390 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.390 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.390 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.390 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.390 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.391 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.391 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.391 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.391 253665 DEBUG nova.virt.hardware [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.395 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1524216862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.458 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803337.4579275, 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.459 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] VM Started (Lifecycle Event)
Nov 22 09:22:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 197 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.8 MiB/s wr, 117 op/s
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.490 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803337.458341, 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.491 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] VM Paused (Lifecycle Event)
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.519 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.527 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.546 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.721 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d4350763-7fea-40fa-ade2-5aada492b3c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.798 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] resizing rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178557970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.913 253665 DEBUG nova.objects.instance [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'migration_context' on Instance uuid d4350763-7fea-40fa-ade2-5aada492b3c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.924 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.945 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.950 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.991 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.992 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Ensure instance console log exists: /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.992 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.993 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:17 compute-0 nova_compute[253661]: 2025-11-22 09:22:17.993 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385292189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.444 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.446 253665 DEBUG nova.virt.libvirt.vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-1',id=70,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:08Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=688e9583-4ea7-4a94-b8d0-2758f83279af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.447 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.449 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.451 253665 DEBUG nova.objects.instance [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 688e9583-4ea7-4a94-b8d0-2758f83279af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:18 compute-0 ceph-mon[75021]: pgmap v1743: 305 pgs: 305 active+clean; 197 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.8 MiB/s wr, 117 op/s
Nov 22 09:22:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1178557970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1385292189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.465 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <uuid>688e9583-4ea7-4a94-b8d0-2758f83279af</uuid>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <name>instance-00000046</name>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:name>tempest-MultipleCreateTestJSON-server-2050555024-1</nova:name>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:17</nova:creationTime>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:user uuid="c57e583444d64b2a80c940052ff754eb">tempest-MultipleCreateTestJSON-1224529962-project-member</nova:user>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:project uuid="fb7ce66b616d43838eb71b9c62cb2354">tempest-MultipleCreateTestJSON-1224529962</nova:project>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <nova:port uuid="a6df8b13-e328-4092-b9ed-01f8d6de489a">
Nov 22 09:22:18 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="serial">688e9583-4ea7-4a94-b8d0-2758f83279af</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="uuid">688e9583-4ea7-4a94-b8d0-2758f83279af</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/688e9583-4ea7-4a94-b8d0-2758f83279af_disk">
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config">
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:18 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:54:c9:ac"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <target dev="tapa6df8b13-e3"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/console.log" append="off"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:18 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:18 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:18 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:18 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:18 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.467 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Preparing to wait for external event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.468 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.469 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.469 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.470 253665 DEBUG nova.virt.libvirt.vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-1',id=70,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:08Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=688e9583-4ea7-4a94-b8d0-2758f83279af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.471 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.471 253665 DEBUG nova.network.os_vif_util [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.472 253665 DEBUG os_vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.473 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.474 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.477 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6df8b13-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.477 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6df8b13-e3, col_values=(('external_ids', {'iface-id': 'a6df8b13-e328-4092-b9ed-01f8d6de489a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:c9:ac', 'vm-uuid': '688e9583-4ea7-4a94-b8d0-2758f83279af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:18 compute-0 NetworkManager[48920]: <info>  [1763803338.4804] manager: (tapa6df8b13-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.488 253665 INFO os_vif [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3')
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.557 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.559 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.559 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] No VIF found with MAC fa:16:3e:54:c9:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.560 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Using config drive
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.583 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:18 compute-0 nova_compute[253661]: 2025-11-22 09:22:18.610 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Successfully created port: b4377651-ae31-406d-9f8b-b7b241d511df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.233 253665 DEBUG nova.network.neutron [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Updating instance_info_cache with network_info: [{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.252 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Successfully updated port: 3862e425-07d8-49da-b68e-63d3381670f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.254 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Releasing lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.254 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance network_info: |[{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.255 253665 DEBUG oslo_concurrency.lockutils [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.255 253665 DEBUG nova.network.neutron [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Refreshing network info cache for port db173c1b-18d4-496e-96f0-d12c680c680e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.258 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start _get_guest_xml network_info=[{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.264 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.265 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquired lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.265 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.270 253665 WARNING nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.289 253665 DEBUG nova.virt.libvirt.host [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.291 253665 DEBUG nova.virt.libvirt.host [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.298 253665 DEBUG nova.virt.libvirt.host [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.299 253665 DEBUG nova.virt.libvirt.host [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.299 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.299 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.300 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.300 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.300 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.300 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.301 253665 DEBUG nova.virt.hardware [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.304 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.340 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Creating config drive at /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.345 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcazkdau5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.452 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.452 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.453 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.454 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.454 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Processing event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.454 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.454 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.455 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.455 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.455 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] No waiting events found dispatching network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.455 253665 WARNING nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received unexpected event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 for instance with vm_state building and task_state spawning.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.455 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-changed-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.456 253665 DEBUG nova.compute.manager [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Refreshing instance network info cache due to event network-changed-3862e425-07d8-49da-b68e-63d3381670f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.458 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.459 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.464 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803339.4636426, 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.465 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] VM Resumed (Lifecycle Event)
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.476 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 256 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 8.2 MiB/s wr, 165 op/s
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.483 253665 INFO nova.virt.libvirt.driver [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Instance spawned successfully.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.484 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.488 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.491 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcazkdau5" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.522 253665 DEBUG nova.storage.rbd_utils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] rbd image 688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.527 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config 688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.585 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.593 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.598 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.599 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.599 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.599 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.600 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.600 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.622 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.655 253665 INFO nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Took 10.29 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.656 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.681 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Successfully updated port: b4377651-ae31-406d-9f8b-b7b241d511df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.706 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.707 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquired lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.707 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.721 253665 INFO nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Took 12.16 seconds to build instance.
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.739 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216210626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.796 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.824 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.831 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.888 253665 DEBUG nova.compute.manager [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-changed-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.888 253665 DEBUG nova.compute.manager [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Refreshing instance network info cache due to event network-changed-b4377651-ae31-406d-9f8b-b7b241d511df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.889 253665 DEBUG oslo_concurrency.lockutils [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.897 253665 DEBUG oslo_concurrency.processutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config 688e9583-4ea7-4a94-b8d0-2758f83279af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.897 253665 INFO nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Deleting local config drive /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af/disk.config because it was imported into RBD.
Nov 22 09:22:19 compute-0 NetworkManager[48920]: <info>  [1763803339.9642] manager: (tapa6df8b13-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Nov 22 09:22:19 compute-0 kernel: tapa6df8b13-e3: entered promiscuous mode
Nov 22 09:22:19 compute-0 nova_compute[253661]: 2025-11-22 09:22:19.970 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:19 compute-0 ovn_controller[152872]: 2025-11-22T09:22:19Z|00723|binding|INFO|Claiming lport a6df8b13-e328-4092-b9ed-01f8d6de489a for this chassis.
Nov 22 09:22:19 compute-0 ovn_controller[152872]: 2025-11-22T09:22:19Z|00724|binding|INFO|a6df8b13-e328-4092-b9ed-01f8d6de489a: Claiming fa:16:3e:54:c9:ac 10.100.0.5
Nov 22 09:22:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:19.981 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:c9:ac 10.100.0.5'], port_security=['fa:16:3e:54:c9:ac 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '688e9583-4ea7-4a94-b8d0-2758f83279af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a6df8b13-e328-4092-b9ed-01f8d6de489a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:19.983 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a6df8b13-e328-4092-b9ed-01f8d6de489a in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 bound to our chassis
Nov 22 09:22:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:19.984 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:22:20 compute-0 ovn_controller[152872]: 2025-11-22T09:22:19Z|00725|binding|INFO|Setting lport a6df8b13-e328-4092-b9ed-01f8d6de489a ovn-installed in OVS
Nov 22 09:22:20 compute-0 ovn_controller[152872]: 2025-11-22T09:22:19Z|00726|binding|INFO|Setting lport a6df8b13-e328-4092-b9ed-01f8d6de489a up in Southbound
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.007 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54a3649c-8870-4f12-a511-f9efc74d7d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 systemd-udevd[329117]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:20 compute-0 systemd-machined[215941]: New machine qemu-86-instance-00000046.
Nov 22 09:22:20 compute-0 NetworkManager[48920]: <info>  [1763803340.0456] device (tapa6df8b13-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:20 compute-0 NetworkManager[48920]: <info>  [1763803340.0468] device (tapa6df8b13-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:20 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000046.
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.051 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a09670de-f86d-4d5d-9649-3ff9312de5de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.055 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[664f9a81-2151-4341-b7dd-cc2ec3b5962c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.096 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[edb09afa-eae9-42ba-8f9f-1b87b10b5735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.101 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.120 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c870ebf-e5f9-4f9c-a490-c5ec92ff0f2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618713, 'reachable_time': 22936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329127, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1955fec-dbe9-463a-a044-5a8d9d322b1e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618728, 'tstamp': 618728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329130, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618732, 'tstamp': 618732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329130, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:20.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.167 253665 DEBUG nova.network.neutron [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Updated VIF entry in instance network info cache for port a6df8b13-e328-4092-b9ed-01f8d6de489a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.168 253665 DEBUG nova.network.neutron [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Updating instance_info_cache with network_info: [{"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.187 253665 DEBUG oslo_concurrency.lockutils [req-a096efa3-e1ff-4a80-886a-30c29405fb27 req-abddb34f-7f1c-4180-91c9-94c474c07a2f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-688e9583-4ea7-4a94-b8d0-2758f83279af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385048348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.440 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.442 253665 DEBUG nova.virt.libvirt.vif [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:10Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.442 253665 DEBUG nova.network.os_vif_util [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.443 253665 DEBUG nova.network.os_vif_util [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.445 253665 DEBUG nova.objects.instance [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.463 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <uuid>b5f49b05-1b70-4479-98d9-83b995037a41</uuid>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <name>instance-00000048</name>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1090017716</nova:name>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:19</nova:creationTime>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:user uuid="a611caa9b8c247f083ce8b67780fbc01">tempest-ListServerFiltersTestJSON-1747318421-project-member</nova:user>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:project uuid="e5a1bb02c0c047be92aba24831aef1a5">tempest-ListServerFiltersTestJSON-1747318421</nova:project>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <nova:port uuid="db173c1b-18d4-496e-96f0-d12c680c680e">
Nov 22 09:22:20 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="serial">b5f49b05-1b70-4479-98d9-83b995037a41</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="uuid">b5f49b05-1b70-4479-98d9-83b995037a41</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b5f49b05-1b70-4479-98d9-83b995037a41_disk">
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b5f49b05-1b70-4479-98d9-83b995037a41_disk.config">
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:20 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:25:27:be"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <target dev="tapdb173c1b-18"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/console.log" append="off"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:20 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:20 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:20 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:20 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:20 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.464 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Preparing to wait for external event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.465 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.465 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.465 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.466 253665 DEBUG nova.virt.libvirt.vif [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name
='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:10Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.466 253665 DEBUG nova.network.os_vif_util [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.467 253665 DEBUG nova.network.os_vif_util [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.467 253665 DEBUG os_vif [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.468 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.469 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.473 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb173c1b-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.473 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb173c1b-18, col_values=(('external_ids', {'iface-id': 'db173c1b-18d4-496e-96f0-d12c680c680e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:27:be', 'vm-uuid': 'b5f49b05-1b70-4479-98d9-83b995037a41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:20 compute-0 NetworkManager[48920]: <info>  [1763803340.5021] manager: (tapdb173c1b-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.507 253665 INFO os_vif [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18')
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803340.5688143, 688e9583-4ea7-4a94-b8d0-2758f83279af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.571 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] VM Started (Lifecycle Event)
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.583 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.584 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.585 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No VIF found with MAC fa:16:3e:25:27:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:20 compute-0 ceph-mon[75021]: pgmap v1744: 305 pgs: 305 active+clean; 256 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 8.2 MiB/s wr, 165 op/s
Nov 22 09:22:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3216210626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1385048348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.586 253665 INFO nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Using config drive
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.618 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.629 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.637 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803340.5691526, 688e9583-4ea7-4a94-b8d0-2758f83279af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.638 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] VM Paused (Lifecycle Event)
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.659 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.663 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:20 compute-0 nova_compute[253661]: 2025-11-22 09:22:20.681 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.098 253665 DEBUG nova.network.neutron [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Updating instance_info_cache with network_info: [{"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.120 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Releasing lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.120 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Instance network_info: |[{"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.121 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.121 253665 DEBUG nova.network.neutron [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Refreshing network info cache for port 3862e425-07d8-49da-b68e-63d3381670f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.124 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Start _get_guest_xml network_info=[{"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'baf70c6a-4f18-40eb-9d40-874af269a47f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.127 253665 WARNING nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.132 253665 DEBUG nova.virt.libvirt.host [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.132 253665 DEBUG nova.virt.libvirt.host [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.134 253665 DEBUG nova.virt.libvirt.host [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.135 253665 DEBUG nova.virt.libvirt.host [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.135 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.135 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.136 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.136 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.136 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.137 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.137 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.137 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.137 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.137 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.138 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.138 253665 DEBUG nova.virt.hardware [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.141 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.179 253665 INFO nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Creating config drive at /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.186 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgph92pkp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.252 253665 DEBUG nova.network.neutron [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Updated VIF entry in instance network info cache for port db173c1b-18d4-496e-96f0-d12c680c680e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.252 253665 DEBUG nova.network.neutron [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Updating instance_info_cache with network_info: [{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.267 253665 DEBUG oslo_concurrency.lockutils [req-5b4f6d3e-225b-479d-ba10-14acc585d14b req-f44ae92b-b7d1-4170-817e-8dfb4fef4d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.326 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgph92pkp" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.357 253665 DEBUG nova.storage.rbd_utils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image b5f49b05-1b70-4479-98d9-83b995037a41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.364 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config b5f49b05-1b70-4479-98d9-83b995037a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 256 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 8.2 MiB/s wr, 150 op/s
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.544 253665 DEBUG oslo_concurrency.processutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config b5f49b05-1b70-4479-98d9-83b995037a41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.545 253665 INFO nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Deleting local config drive /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/disk.config because it was imported into RBD.
Nov 22 09:22:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990074037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3990074037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:21 compute-0 kernel: tapdb173c1b-18: entered promiscuous mode
Nov 22 09:22:21 compute-0 NetworkManager[48920]: <info>  [1763803341.5980] manager: (tapdb173c1b-18): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Nov 22 09:22:21 compute-0 systemd-udevd[329121]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:21 compute-0 NetworkManager[48920]: <info>  [1763803341.6164] device (tapdb173c1b-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:21 compute-0 NetworkManager[48920]: <info>  [1763803341.6176] device (tapdb173c1b-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.637 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:21 compute-0 ovn_controller[152872]: 2025-11-22T09:22:21Z|00727|binding|INFO|Claiming lport db173c1b-18d4-496e-96f0-d12c680c680e for this chassis.
Nov 22 09:22:21 compute-0 ovn_controller[152872]: 2025-11-22T09:22:21Z|00728|binding|INFO|db173c1b-18d4-496e-96f0-d12c680c680e: Claiming fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.644 253665 DEBUG nova.network.neutron [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Updating instance_info_cache with network_info: [{"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.650 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.655 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:27:be 10.100.0.10'], port_security=['fa:16:3e:25:27:be 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b5f49b05-1b70-4479-98d9-83b995037a41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=db173c1b-18d4-496e-96f0-d12c680c680e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.656 162862 INFO neutron.agent.ovn.metadata.agent [-] Port db173c1b-18d4-496e-96f0-d12c680c680e in datapath 6f77386c-0230-4d59-9773-818360efc15e bound to our chassis
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.657 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:21 compute-0 systemd-machined[215941]: New machine qemu-87-instance-00000048.
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.675 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e67e7960-664a-4cc2-a437-32ddb714ebb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.676 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f77386c-01 in ovnmeta-6f77386c-0230-4d59-9773-818360efc15e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.680 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f77386c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc08fbc5-c350-45a8-ae3c-3ac15bde44cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.682 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c4eb85-f197-4b31-b52b-02465bc8b1bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-00000048.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.700 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.702 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb89a18-4b52-4f92-b324-a9c30b8c4700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.709 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:21 compute-0 ovn_controller[152872]: 2025-11-22T09:22:21Z|00729|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e ovn-installed in OVS
Nov 22 09:22:21 compute-0 ovn_controller[152872]: 2025-11-22T09:22:21Z|00730|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e up in Southbound
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.736 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be299ca7-d927-48b8-8e7c-fcb085755f89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.768 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Releasing lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.769 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Instance network_info: |[{"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.770 253665 DEBUG oslo_concurrency.lockutils [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.770 253665 DEBUG nova.network.neutron [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Refreshing network info cache for port b4377651-ae31-406d-9f8b-b7b241d511df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.773 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Start _get_guest_xml network_info=[{"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.778 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[40425576-4bbd-484a-8b04-adaa2579b567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5eccfe-6605-4a9d-8d80-32e4c749e9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 NetworkManager[48920]: <info>  [1763803341.7866] manager: (tap6f77386c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.818 253665 WARNING nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.824 253665 DEBUG nova.virt.libvirt.host [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.825 253665 DEBUG nova.virt.libvirt.host [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.828 253665 DEBUG nova.virt.libvirt.host [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.828 253665 DEBUG nova.virt.libvirt.host [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.828 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.829 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f14b6aa0-75da-45ae-8243-755ac1edc9f4',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.829 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.829 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.830 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.830 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.830 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.830 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.830 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.831 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.831 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.831 253665 DEBUG nova.virt.hardware [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.836 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.840 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[87daf498-f01d-4060-b0df-e5954ff470e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.846 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[271887c5-7bc6-46d7-ab7d-957f58966e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 NetworkManager[48920]: <info>  [1763803341.8753] device (tap6f77386c-00): carrier: link connected
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.886 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b7018aa4-ec19-4cd9-b89c-5829801dd1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.908 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c322efee-6d39-4ae8-b52b-95157b995a5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329332, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd4278d-c888-4b20-bdff-03a8a865c09c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:7dc0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619261, 'tstamp': 619261}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329357, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:21.968 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31b0b331-2a27-4f6c-b099-f7c2b018aeee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329361, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.975 253665 DEBUG nova.compute.manager [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.976 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.976 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.976 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.976 253665 DEBUG nova.compute.manager [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Processing event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.977 253665 DEBUG nova.compute.manager [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.977 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.977 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.977 253665 DEBUG oslo_concurrency.lockutils [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.977 253665 DEBUG nova.compute.manager [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] No waiting events found dispatching network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.978 253665 WARNING nova.compute.manager [req-13353096-c17c-4323-a781-d4834050f1d2 req-9c46cd2a-3650-47fa-b85f-2f51f59238e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received unexpected event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a for instance with vm_state building and task_state spawning.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.978 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.983 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803341.983257, 688e9583-4ea7-4a94-b8d0-2758f83279af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.984 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] VM Resumed (Lifecycle Event)
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.987 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.990 253665 INFO nova.virt.libvirt.driver [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Instance spawned successfully.
Nov 22 09:22:21 compute-0 nova_compute[253661]: 2025-11-22 09:22:21.991 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a381a841-6be6-4d7e-a739-403d691356d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.018 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.044 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.049 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.050 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.051 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.051 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.052 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.052 253665 DEBUG nova.virt.libvirt.driver [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.067 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21826f4d-f7f8-44e0-9357-eb4d15b14354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.090 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.091 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.091 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 kernel: tap6f77386c-00: entered promiscuous mode
Nov 22 09:22:22 compute-0 NetworkManager[48920]: <info>  [1763803342.0955] manager: (tap6f77386c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.100 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 ovn_controller[152872]: 2025-11-22T09:22:22Z|00731|binding|INFO|Releasing lport 22c0239c-bdb2-4af4-a6a4-7ca3983ded8c from this chassis (sb_readonly=0)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.116 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f77386c-0230-4d59-9773-818360efc15e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f77386c-0230-4d59-9773-818360efc15e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e125b20a-5970-4733-b3a1-26cf48cec544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.116 253665 INFO nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Took 13.56 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.118 253665 DEBUG nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.118 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/6f77386c-0230-4d59-9773-818360efc15e.pid.haproxy
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:22:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:22.119 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'env', 'PROCESS_TAG=haproxy-6f77386c-0230-4d59-9773-818360efc15e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f77386c-0230-4d59-9773-818360efc15e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.125 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.199 253665 INFO nova.compute.manager [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Took 14.68 seconds to build instance.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.214 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803342.2134955, b5f49b05-1b70-4479-98d9-83b995037a41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.214 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Started (Lifecycle Event)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.217 253665 DEBUG oslo_concurrency.lockutils [None req-0ccc6026-e61c-47fa-b172-29f79f5e52d0 c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.230 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.240 253665 DEBUG nova.compute.manager [req-ea24e77c-1ca1-4fa2-8c4a-9551475813bb req-8d6e390c-3f71-4f0c-97a4-e20dfcbbe8c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.240 253665 DEBUG oslo_concurrency.lockutils [req-ea24e77c-1ca1-4fa2-8c4a-9551475813bb req-8d6e390c-3f71-4f0c-97a4-e20dfcbbe8c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.240 253665 DEBUG oslo_concurrency.lockutils [req-ea24e77c-1ca1-4fa2-8c4a-9551475813bb req-8d6e390c-3f71-4f0c-97a4-e20dfcbbe8c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.240 253665 DEBUG oslo_concurrency.lockutils [req-ea24e77c-1ca1-4fa2-8c4a-9551475813bb req-8d6e390c-3f71-4f0c-97a4-e20dfcbbe8c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.241 253665 DEBUG nova.compute.manager [req-ea24e77c-1ca1-4fa2-8c4a-9551475813bb req-8d6e390c-3f71-4f0c-97a4-e20dfcbbe8c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Processing event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.242 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945016511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.248 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803342.2137477, b5f49b05-1b70-4479-98d9-83b995037a41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.249 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Paused (Lifecycle Event)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.252 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.260 253665 INFO nova.virt.libvirt.driver [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance spawned successfully.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.261 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.265 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.275 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.278 253665 DEBUG nova.virt.libvirt.vif [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-238841664',display_name='tempest-ListServerFiltersTestJSON-instance-238841664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-238841664',id=73,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-h09ea14g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:14Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=41d22150-10c5-4aac-84a2-bebea895286e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.278 253665 DEBUG nova.network.os_vif_util [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.279 253665 DEBUG nova.network.os_vif_util [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.281 253665 DEBUG nova.objects.instance [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 41d22150-10c5-4aac-84a2-bebea895286e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.284 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803342.2514992, b5f49b05-1b70-4479-98d9-83b995037a41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.284 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Resumed (Lifecycle Event)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.288 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.288 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.289 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.289 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.290 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.290 253665 DEBUG nova.virt.libvirt.driver [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.301 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.303 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <uuid>41d22150-10c5-4aac-84a2-bebea895286e</uuid>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <name>instance-00000049</name>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-238841664</nova:name>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:21</nova:creationTime>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:user uuid="a611caa9b8c247f083ce8b67780fbc01">tempest-ListServerFiltersTestJSON-1747318421-project-member</nova:user>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:project uuid="e5a1bb02c0c047be92aba24831aef1a5">tempest-ListServerFiltersTestJSON-1747318421</nova:project>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <nova:port uuid="3862e425-07d8-49da-b68e-63d3381670f7">
Nov 22 09:22:22 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="serial">41d22150-10c5-4aac-84a2-bebea895286e</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="uuid">41d22150-10c5-4aac-84a2-bebea895286e</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/41d22150-10c5-4aac-84a2-bebea895286e_disk">
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/41d22150-10c5-4aac-84a2-bebea895286e_disk.config">
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:e4:70:49"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <target dev="tap3862e425-07"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/console.log" append="off"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:22 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:22 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:22 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:22 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:22 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.304 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Preparing to wait for external event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.304 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.304 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.304 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.305 253665 DEBUG nova.virt.libvirt.vif [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-238841664',display_name='tempest-ListServerFiltersTestJSON-instance-238841664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-238841664',id=73,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-h09ea14g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:14Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=41d22150-10c5-4aac-84a2-bebea895286e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.306 253665 DEBUG nova.network.os_vif_util [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.306 253665 DEBUG nova.network.os_vif_util [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.307 253665 DEBUG os_vif [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.308 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.308 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.309 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.314 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.317 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3862e425-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.318 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3862e425-07, col_values=(('external_ids', {'iface-id': '3862e425-07d8-49da-b68e-63d3381670f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:70:49', 'vm-uuid': '41d22150-10c5-4aac-84a2-bebea895286e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:22 compute-0 NetworkManager[48920]: <info>  [1763803342.3211] manager: (tap3862e425-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.367 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.369 253665 INFO os_vif [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07')
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.371 253665 INFO nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Took 11.91 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.372 253665 DEBUG nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328988614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.442 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.442 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.442 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No VIF found with MAC fa:16:3e:e4:70:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.443 253665 INFO nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Using config drive
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.467 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.477 253665 INFO nova.compute.manager [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Took 13.68 seconds to build instance.
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.478 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.506 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.511 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.571 253665 DEBUG oslo_concurrency.lockutils [None req-74549012-85ca-412b-9170-823ee38bd753 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:22 compute-0 ceph-mon[75021]: pgmap v1745: 305 pgs: 305 active+clean; 256 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 8.2 MiB/s wr, 150 op/s
Nov 22 09:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1945016511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2328988614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:22 compute-0 podman[329476]: 2025-11-22 09:22:22.65281609 +0000 UTC m=+0.067434111 container create 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.681 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803327.6801605, 636b1046-fff8-4a45-8a14-04010b2f282e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.682 253665 INFO nova.compute.manager [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Stopped (Lifecycle Event)
Nov 22 09:22:22 compute-0 nova_compute[253661]: 2025-11-22 09:22:22.703 253665 DEBUG nova.compute.manager [None req-09563403-3c12-4877-a2d3-f19f9138ef3d - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:22 compute-0 systemd[1]: Started libpod-conmon-44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e.scope.
Nov 22 09:22:22 compute-0 podman[329476]: 2025-11-22 09:22:22.620880618 +0000 UTC m=+0.035498659 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33337ae79b275de281ee4767f0a431fd141e98668de09fe92020957d853d264e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:22 compute-0 podman[329476]: 2025-11-22 09:22:22.770901555 +0000 UTC m=+0.185519576 container init 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:22:22 compute-0 podman[329476]: 2025-11-22 09:22:22.778455248 +0000 UTC m=+0.193073269 container start 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:22:22 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [NOTICE]   (329513) : New worker (329515) forked
Nov 22 09:22:22 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [NOTICE]   (329513) : Loading success.
Nov 22 09:22:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581748614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.071 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.074 253665 DEBUG nova.virt.libvirt.vif [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-578190102',display_name='tempest-ListServerFiltersTestJSON-instance-578190102',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-578190102',id=74,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-czi9aaxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:17Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=d4350763-7fea-40fa-ade2-5aada492b3c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.074 253665 DEBUG nova.network.os_vif_util [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.075 253665 DEBUG nova.network.os_vif_util [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.077 253665 DEBUG nova.objects.instance [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d4350763-7fea-40fa-ade2-5aada492b3c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.093 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <uuid>d4350763-7fea-40fa-ade2-5aada492b3c0</uuid>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <name>instance-0000004a</name>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <memory>196608</memory>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-578190102</nova:name>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:21</nova:creationTime>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:flavor name="m1.micro">
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:memory>192</nova:memory>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:user uuid="a611caa9b8c247f083ce8b67780fbc01">tempest-ListServerFiltersTestJSON-1747318421-project-member</nova:user>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:project uuid="e5a1bb02c0c047be92aba24831aef1a5">tempest-ListServerFiltersTestJSON-1747318421</nova:project>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <nova:port uuid="b4377651-ae31-406d-9f8b-b7b241d511df">
Nov 22 09:22:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="serial">d4350763-7fea-40fa-ade2-5aada492b3c0</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="uuid">d4350763-7fea-40fa-ade2-5aada492b3c0</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d4350763-7fea-40fa-ade2-5aada492b3c0_disk">
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config">
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:6d:ac:3e"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <target dev="tapb4377651-ae"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/console.log" append="off"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:23 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:23 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.096 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Preparing to wait for external event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.096 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.096 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.097 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.098 253665 DEBUG nova.virt.libvirt.vif [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-578190102',display_name='tempest-ListServerFiltersTestJSON-instance-578190102',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-578190102',id=74,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-czi9aaxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:17Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=d4350763-7fea-40fa-ade2-5aada492b3c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.098 253665 DEBUG nova.network.os_vif_util [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.099 253665 DEBUG nova.network.os_vif_util [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.099 253665 DEBUG os_vif [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.101 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.102 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.108 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4377651-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.110 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4377651-ae, col_values=(('external_ids', {'iface-id': 'b4377651-ae31-406d-9f8b-b7b241d511df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:ac:3e', 'vm-uuid': 'd4350763-7fea-40fa-ade2-5aada492b3c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.1133] manager: (tapb4377651-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.113 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.120 253665 INFO nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Creating config drive at /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.127 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_yo6ui execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.165 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.167 253665 INFO os_vif [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae')
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.199 253665 DEBUG nova.network.neutron [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Updated VIF entry in instance network info cache for port 3862e425-07d8-49da-b68e-63d3381670f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.200 253665 DEBUG nova.network.neutron [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Updating instance_info_cache with network_info: [{"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.216 253665 DEBUG oslo_concurrency.lockutils [req-59c4ecfa-3604-4cea-8157-35766491597e req-7ea7e4c8-430d-4e18-855c-6562d555a560 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-41d22150-10c5-4aac-84a2-bebea895286e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.224 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.226 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.226 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] No VIF found with MAC fa:16:3e:6d:ac:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.227 253665 INFO nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Using config drive
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.260 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.274 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_yo6ui" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.309 253665 DEBUG nova.storage.rbd_utils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image 41d22150-10c5-4aac-84a2-bebea895286e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.327 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config 41d22150-10c5-4aac-84a2-bebea895286e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 273 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.9 MiB/s wr, 227 op/s
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.515 253665 DEBUG nova.network.neutron [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Updated VIF entry in instance network info cache for port b4377651-ae31-406d-9f8b-b7b241d511df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.516 253665 DEBUG nova.network.neutron [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Updating instance_info_cache with network_info: [{"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.544 253665 DEBUG oslo_concurrency.lockutils [req-c94dfe0a-8aec-4bfb-998a-665beda0bdfb req-24b0e8e5-7181-4eee-8430-75f2691d007a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d4350763-7fea-40fa-ade2-5aada492b3c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.563 253665 DEBUG oslo_concurrency.processutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config 41d22150-10c5-4aac-84a2-bebea895286e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.564 253665 INFO nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Deleting local config drive /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e/disk.config because it was imported into RBD.
Nov 22 09:22:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3581748614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:23 compute-0 kernel: tap3862e425-07: entered promiscuous mode
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.6351] manager: (tap3862e425-07): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00732|binding|INFO|Claiming lport 3862e425-07d8-49da-b68e-63d3381670f7 for this chassis.
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00733|binding|INFO|3862e425-07d8-49da-b68e-63d3381670f7: Claiming fa:16:3e:e4:70:49 10.100.0.8
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.653 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:70:49 10.100.0.8'], port_security=['fa:16:3e:e4:70:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '41d22150-10c5-4aac-84a2-bebea895286e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3862e425-07d8-49da-b68e-63d3381670f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.655 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3862e425-07d8-49da-b68e-63d3381670f7 in datapath 6f77386c-0230-4d59-9773-818360efc15e bound to our chassis
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.657 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00734|binding|INFO|Setting lport 3862e425-07d8-49da-b68e-63d3381670f7 ovn-installed in OVS
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00735|binding|INFO|Setting lport 3862e425-07d8-49da-b68e-63d3381670f7 up in Southbound
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.679 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 systemd-udevd[329601]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:23 compute-0 systemd-machined[215941]: New machine qemu-88-instance-00000049.
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.690 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ee4d6e-3ff1-4598-a4e0-82565eb3da6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-00000049.
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.7102] device (tap3862e425-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.7118] device (tap3862e425-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.739 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[91f4ae76-16fa-4b48-901f-a67254a16f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.752 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[13be2c65-d77b-4409-b808-347431648316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.767 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.768 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.768 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.769 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.769 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.770 253665 INFO nova.compute.manager [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Terminating instance
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.771 253665 DEBUG nova.compute.manager [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.787 253665 INFO nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Creating config drive at /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.798 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[748f0f7d-8597-4d89-a965-893f70a27396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.802 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy25d3flf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:23 compute-0 kernel: tapa6df8b13-e3 (unregistering): left promiscuous mode
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.827 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0b79a3cb-aa08-4e83-a125-c312bfe8a9fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329615, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.8313] device (tapa6df8b13-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00736|binding|INFO|Releasing lport a6df8b13-e328-4092-b9ed-01f8d6de489a from this chassis (sb_readonly=0)
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00737|binding|INFO|Setting lport a6df8b13-e328-4092-b9ed-01f8d6de489a down in Southbound
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00738|binding|INFO|Removing iface tapa6df8b13-e3 ovn-installed in OVS
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.854 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:c9:ac 10.100.0.5'], port_security=['fa:16:3e:54:c9:ac 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '688e9583-4ea7-4a94-b8d0-2758f83279af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a6df8b13-e328-4092-b9ed-01f8d6de489a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.866 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d22daa92-6a03-4b3e-a93c-fe6e38feefc9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329621, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329621, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.869 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 22 09:22:23 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000046.scope: Consumed 2.167s CPU time.
Nov 22 09:22:23 compute-0 systemd-machined[215941]: Machine qemu-86-instance-00000046 terminated.
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.880 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.885 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a6df8b13-e328-4092-b9ed-01f8d6de489a in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.886 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.893 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.893 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.894 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.894 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.894 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.898 253665 INFO nova.compute.manager [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Terminating instance
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.900 253665 DEBUG nova.compute.manager [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.910 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[851a7833-b140-46ba-b4f4-96e44ae4177c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 kernel: tap14926b0c-50 (unregistering): left promiscuous mode
Nov 22 09:22:23 compute-0 NetworkManager[48920]: <info>  [1763803343.9419] device (tap14926b0c-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00739|binding|INFO|Releasing lport 14926b0c-50b5-475d-9ee6-f20c27dab8f0 from this chassis (sb_readonly=0)
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00740|binding|INFO|Setting lport 14926b0c-50b5-475d-9ee6-f20c27dab8f0 down in Southbound
Nov 22 09:22:23 compute-0 ovn_controller[152872]: 2025-11-22T09:22:23Z|00741|binding|INFO|Removing iface tap14926b0c-50 ovn-installed in OVS
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.967 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:7b:70 10.100.0.14'], port_security=['fa:16:3e:df:7b:70 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '023bf3c2-b4eb-4f0f-b471-a20fb9907e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb7ce66b616d43838eb71b9c62cb2354', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b47edba3-e1ad-4b96-8859-b808f57c3dbb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f436c736-aee1-403e-b270-36221c323c75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14926b0c-50b5-475d-9ee6-f20c27dab8f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.965 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[256ede3a-7ac8-4527-b276-9b6099ecd5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 nova_compute[253661]: 2025-11-22 09:22:23.969 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy25d3flf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:23.974 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[57e664ba-8684-42f5-8633-5fad4dec36eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:23 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 22 09:22:23 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000047.scope: Consumed 5.563s CPU time.
Nov 22 09:22:23 compute-0 systemd-machined[215941]: Machine qemu-85-instance-00000047 terminated.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.014 253665 DEBUG nova.storage.rbd_utils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] rbd image d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.024 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.026 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6283fd40-6602-4efd-aa6f-2d8c33931c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[de625000-1a9f-4904-9a92-07929ae9e822]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8ee5c4d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:6e:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618713, 'reachable_time': 22936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329714, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.072 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ea39ef-032d-4979-b21f-56f7fc4c390d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618728, 'tstamp': 618728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329719, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8ee5c4d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618732, 'tstamp': 618732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329719, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.074 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.076 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.080 253665 DEBUG nova.compute.manager [req-1ad2b354-9038-42e3-ba1e-35080bddd0b8 req-d330c0ae-fdeb-4e1e-8a31-4f2b95003fc2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.081 253665 DEBUG oslo_concurrency.lockutils [req-1ad2b354-9038-42e3-ba1e-35080bddd0b8 req-d330c0ae-fdeb-4e1e-8a31-4f2b95003fc2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.081 253665 DEBUG oslo_concurrency.lockutils [req-1ad2b354-9038-42e3-ba1e-35080bddd0b8 req-d330c0ae-fdeb-4e1e-8a31-4f2b95003fc2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.081 253665 DEBUG oslo_concurrency.lockutils [req-1ad2b354-9038-42e3-ba1e-35080bddd0b8 req-d330c0ae-fdeb-4e1e-8a31-4f2b95003fc2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.082 253665 DEBUG nova.compute.manager [req-1ad2b354-9038-42e3-ba1e-35080bddd0b8 req-d330c0ae-fdeb-4e1e-8a31-4f2b95003fc2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Processing event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.087 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8ee5c4d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.087 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.088 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8ee5c4d-60, col_values=(('external_ids', {'iface-id': '170e2a44-25ef-429e-ae6e-cbd8fd0b45d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.088 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.089 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14926b0c-50b5-475d-9ee6-f20c27dab8f0 in datapath b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 unbound from our chassis
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.090 253665 INFO nova.virt.libvirt.driver [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Instance destroyed successfully.
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.091 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.091 253665 DEBUG nova.objects.instance [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'resources' on Instance uuid 688e9583-4ea7-4a94-b8d0-2758f83279af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1bd559-c8ab-4974-8fd3-466ca4964f1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.093 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 namespace which is not needed anymore
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.111 253665 DEBUG nova.virt.libvirt.vif [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-1',id=70,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:22Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=688e9583-4ea7-4a94-b8d0-2758f83279af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.112 253665 DEBUG nova.network.os_vif_util [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "address": "fa:16:3e:54:c9:ac", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6df8b13-e3", "ovs_interfaceid": "a6df8b13-e328-4092-b9ed-01f8d6de489a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.113 253665 DEBUG nova.network.os_vif_util [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.113 253665 DEBUG os_vif [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.117 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6df8b13-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.119 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.134 253665 INFO os_vif [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c9:ac,bridge_name='br-int',has_traffic_filtering=True,id=a6df8b13-e328-4092-b9ed-01f8d6de489a,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6df8b13-e3')
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.164 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803344.1357503, 41d22150-10c5-4aac-84a2-bebea895286e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.165 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] VM Started (Lifecycle Event)
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.170 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.176 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.182 253665 INFO nova.virt.libvirt.driver [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Instance destroyed successfully.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.183 253665 DEBUG nova.objects.instance [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lazy-loading 'resources' on Instance uuid 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.191 253665 INFO nova.virt.libvirt.driver [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Instance spawned successfully.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.191 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.199 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.202 253665 DEBUG nova.virt.libvirt.vif [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2050555024',display_name='tempest-MultipleCreateTestJSON-server-2050555024-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2050555024-2',id=71,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-22T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb7ce66b616d43838eb71b9c62cb2354',ramdisk_id='',reservation_id='r-rsawzag9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1224529962',owner_user_name='tempest-MultipleCreateTestJSON-1224529962-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:19Z,user_data=None,user_id='c57e583444d64b2a80c940052ff754eb',uuid=023bf3c2-b4eb-4f0f-b471-a20fb9907e1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.203 253665 DEBUG nova.network.os_vif_util [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converting VIF {"id": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "address": "fa:16:3e:df:7b:70", "network": {"id": "b8ee5c4d-6313-4ba5-9d89-7cce7062ce25", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1133659467-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb7ce66b616d43838eb71b9c62cb2354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14926b0c-50", "ovs_interfaceid": "14926b0c-50b5-475d-9ee6-f20c27dab8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.204 253665 DEBUG nova.network.os_vif_util [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.204 253665 DEBUG os_vif [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.207 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14926b0c-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.218 253665 INFO os_vif [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:7b:70,bridge_name='br-int',has_traffic_filtering=True,id=14926b0c-50b5-475d-9ee6-f20c27dab8f0,network=Network(b8ee5c4d-6313-4ba5-9d89-7cce7062ce25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14926b0c-50')
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.247 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.260 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.261 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.262 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.262 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.264 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.264 253665 DEBUG nova.virt.libvirt.driver [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:24 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [NOTICE]   (328743) : haproxy version is 2.8.14-c23fe91
Nov 22 09:22:24 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [NOTICE]   (328743) : path to executable is /usr/sbin/haproxy
Nov 22 09:22:24 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [WARNING]  (328743) : Exiting Master process...
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.280 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.280 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803344.1359408, 41d22150-10c5-4aac-84a2-bebea895286e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.281 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] VM Paused (Lifecycle Event)
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.282 253665 DEBUG oslo_concurrency.processutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config d4350763-7fea-40fa-ade2-5aada492b3c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.283 253665 INFO nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Deleting local config drive /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0/disk.config because it was imported into RBD.
Nov 22 09:22:24 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [ALERT]    (328743) : Current worker (328756) exited with code 143 (Terminated)
Nov 22 09:22:24 compute-0 neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25[328696]: [WARNING]  (328743) : All workers exited. Exiting... (0)
Nov 22 09:22:24 compute-0 systemd[1]: libpod-44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4.scope: Deactivated successfully.
Nov 22 09:22:24 compute-0 conmon[328696]: conmon 44deda72d3221d2bc78a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4.scope/container/memory.events
Nov 22 09:22:24 compute-0 podman[329786]: 2025-11-22 09:22:24.293188137 +0000 UTC m=+0.068927447 container died 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.317 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.333 253665 INFO nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Took 9.81 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.334 253665 DEBUG nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4-userdata-shm.mount: Deactivated successfully.
Nov 22 09:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b1cb886d6c0d9f91b091e7d096ee2fbe20284e4808999d06dd7ab7716e2fd1-merged.mount: Deactivated successfully.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.352 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803344.1847641, 41d22150-10c5-4aac-84a2-bebea895286e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.352 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] VM Resumed (Lifecycle Event)
Nov 22 09:22:24 compute-0 kernel: tapb4377651-ae: entered promiscuous mode
Nov 22 09:22:24 compute-0 NetworkManager[48920]: <info>  [1763803344.3652] manager: (tapb4377651-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Nov 22 09:22:24 compute-0 ovn_controller[152872]: 2025-11-22T09:22:24Z|00742|binding|INFO|Claiming lport b4377651-ae31-406d-9f8b-b7b241d511df for this chassis.
Nov 22 09:22:24 compute-0 ovn_controller[152872]: 2025-11-22T09:22:24Z|00743|binding|INFO|b4377651-ae31-406d-9f8b-b7b241d511df: Claiming fa:16:3e:6d:ac:3e 10.100.0.7
Nov 22 09:22:24 compute-0 podman[329786]: 2025-11-22 09:22:24.366429529 +0000 UTC m=+0.142168839 container cleanup 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.374 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.373 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ac:3e 10.100.0.7'], port_security=['fa:16:3e:6d:ac:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4350763-7fea-40fa-ade2-5aada492b3c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b4377651-ae31-406d-9f8b-b7b241d511df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:24 compute-0 NetworkManager[48920]: <info>  [1763803344.3814] device (tapb4377651-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:24 compute-0 NetworkManager[48920]: <info>  [1763803344.3821] device (tapb4377651-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:24 compute-0 systemd[1]: libpod-conmon-44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4.scope: Deactivated successfully.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.392 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:24 compute-0 ovn_controller[152872]: 2025-11-22T09:22:24Z|00744|binding|INFO|Setting lport b4377651-ae31-406d-9f8b-b7b241d511df ovn-installed in OVS
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 ovn_controller[152872]: 2025-11-22T09:22:24Z|00745|binding|INFO|Setting lport b4377651-ae31-406d-9f8b-b7b241d511df up in Southbound
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.418 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:24 compute-0 systemd-machined[215941]: New machine qemu-89-instance-0000004a.
Nov 22 09:22:24 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004a.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.437 253665 INFO nova.compute.manager [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Took 11.37 seconds to build instance.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.454 253665 DEBUG oslo_concurrency.lockutils [None req-590a111a-dd22-4476-a2cf-26fb8b462d4a a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:24 compute-0 podman[329845]: 2025-11-22 09:22:24.457547942 +0000 UTC m=+0.056448707 container remove 44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.466 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[308715f1-261a-470d-b302-4d4258b7c91e]: (4, ('Sat Nov 22 09:22:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 (44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4)\n44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4\nSat Nov 22 09:22:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 (44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4)\n44deda72d3221d2bc78afe367ebf49064942afa2f8b8e225bfd45a05a08244c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.470 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cb1dc8-06c1-44ec-9644-c9f9c6153d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.471 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8ee5c4d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 kernel: tapb8ee5c4d-60: left promiscuous mode
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.489 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be7d4e97-c39f-4f9b-b6db-f6b7cf7d0e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc938f72-b131-464d-a654-7df896934ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[feadeb41-0f8e-46a9-88e5-b358dc16d7b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[da380b19-9074-4c64-bcf5-8feec31148a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618699, 'reachable_time': 40390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329868, 'error': None, 'target': 'ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 systemd[1]: run-netns-ovnmeta\x2db8ee5c4d\x2d6313\x2d4ba5\x2d9d89\x2d7cce7062ce25.mount: Deactivated successfully.
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.550 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8ee5c4d-6313-4ba5-9d89-7cce7062ce25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.550 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[584a2a66-8373-4254-89b6-d105395fc2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.553 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b4377651-ae31-406d-9f8b-b7b241d511df in datapath 6f77386c-0230-4d59-9773-818360efc15e unbound from our chassis
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.555 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.575 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b58a612b-776e-4e4d-b48e-e507a3a239fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.622 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[934a15a7-0e34-4dc8-afce-36b60c220951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ceph-mon[75021]: pgmap v1746: 305 pgs: 305 active+clean; 273 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.9 MiB/s wr, 227 op/s
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.631 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2721616-5fff-4ccb-900e-15ba40ed93fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.641 253665 DEBUG nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.641 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.641 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.642 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.642 253665 DEBUG nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.642 253665 WARNING nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state active and task_state None.
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.642 253665 DEBUG nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-unplugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.643 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.643 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.643 253665 DEBUG oslo_concurrency.lockutils [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.643 253665 DEBUG nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] No waiting events found dispatching network-vif-unplugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.644 253665 DEBUG nova.compute.manager [req-d0ccf63b-aeb6-4e7c-93f1-65c365f68e61 req-ecf4423b-4687-45c9-8af6-e180b4314640 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-unplugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.686 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1e68b1bf-25ba-4735-af21-c945b3e78c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[336a54f0-b1ef-451d-b28c-867586ff1b6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329877, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.741 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d57ceeb-9dc9-4868-9b13-8c575a76d565]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329878, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329878, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 nova_compute[253661]: 2025-11-22 09:22:24.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.754 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.755 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.756 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:24.757 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.109 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803345.108652, d4350763-7fea-40fa-ade2-5aada492b3c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.110 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] VM Started (Lifecycle Event)
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.126 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.130 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803345.1089566, d4350763-7fea-40fa-ade2-5aada492b3c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.130 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] VM Paused (Lifecycle Event)
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.143 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.154 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.172 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:25 compute-0 sudo[329921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:25 compute-0 sudo[329921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:25 compute-0 sudo[329921]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:25 compute-0 sudo[329946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:22:25 compute-0 sudo[329946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:25 compute-0 sudo[329946]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:25 compute-0 sudo[329971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:25 compute-0 sudo[329971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:25 compute-0 sudo[329971]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:25 compute-0 sudo[329996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:22:25 compute-0 sudo[329996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 273 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.9 MiB/s wr, 196 op/s
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.895 253665 INFO nova.virt.libvirt.driver [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Deleting instance files /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af_del
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.896 253665 INFO nova.virt.libvirt.driver [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Deletion of /var/lib/nova/instances/688e9583-4ea7-4a94-b8d0-2758f83279af_del complete
Nov 22 09:22:25 compute-0 sudo[329996]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.962 253665 INFO nova.compute.manager [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Took 2.19 seconds to destroy the instance on the hypervisor.
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.963 253665 DEBUG oslo.service.loopingcall [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.963 253665 DEBUG nova.compute.manager [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:22:25 compute-0 nova_compute[253661]: 2025-11-22 09:22:25.963 253665 DEBUG nova.network.neutron [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:22:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:22:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:22:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:22:25 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8d02d42a-44de-4aa2-8c7a-f826abbad285 does not exist
Nov 22 09:22:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6b373c90-544a-467e-9aa6-041f20dadff8 does not exist
Nov 22 09:22:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 65f63888-d742-49b0-b784-a8319861195e does not exist
Nov 22 09:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:22:26 compute-0 sudo[330053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:26 compute-0 sudo[330053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:26 compute-0 sudo[330053]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:22:26 compute-0 sudo[330078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:22:26 compute-0 sudo[330078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:26 compute-0 sudo[330078]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.284 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.284 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:26 compute-0 sudo[330103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:26 compute-0 sudo[330103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:26 compute-0 sudo[330103]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:26 compute-0 sudo[330128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:22:26 compute-0 sudo[330128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.478 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.479 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.479 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.479 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.479 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] No waiting events found dispatching network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 WARNING nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received unexpected event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 for instance with vm_state active and task_state None.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-unplugged-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.480 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] No waiting events found dispatching network-vif-unplugged-a6df8b13-e328-4092-b9ed-01f8d6de489a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-unplugged-a6df8b13-e328-4092-b9ed-01f8d6de489a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.481 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] No waiting events found dispatching network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 WARNING nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received unexpected event network-vif-plugged-a6df8b13-e328-4092-b9ed-01f8d6de489a for instance with vm_state active and task_state deleting.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.482 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Processing event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 DEBUG oslo_concurrency.lockutils [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 DEBUG nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] No waiting events found dispatching network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.483 253665 WARNING nova.compute.manager [req-86814d20-3d67-422d-bd53-499867cda824 req-c546fde0-277d-4081-9670-d45bf4cce293 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received unexpected event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df for instance with vm_state building and task_state spawning.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.484 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.494 253665 INFO nova.virt.libvirt.driver [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Deleting instance files /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_del
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.494 253665 INFO nova.virt.libvirt.driver [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Deletion of /var/lib/nova/instances/023bf3c2-b4eb-4f0f-b471-a20fb9907e1e_del complete
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.497 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803346.4882307, d4350763-7fea-40fa-ade2-5aada492b3c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.497 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] VM Resumed (Lifecycle Event)
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.500 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.504 253665 INFO nova.virt.libvirt.driver [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Instance spawned successfully.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.505 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.535 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.542 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.546 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.546 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.547 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.547 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.548 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.548 253665 DEBUG nova.virt.libvirt.driver [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.587 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.611 253665 INFO nova.compute.manager [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Took 2.71 seconds to destroy the instance on the hypervisor.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.612 253665 DEBUG oslo.service.loopingcall [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.612 253665 DEBUG nova.compute.manager [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.612 253665 DEBUG nova.network.neutron [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.632 253665 INFO nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Took 9.53 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.632 253665 DEBUG nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.732 253665 INFO nova.compute.manager [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Took 10.81 seconds to build instance.
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.750 253665 DEBUG oslo_concurrency.lockutils [None req-74652c0b-2e1f-4d01-ba13-729b23303394 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.837 253665 DEBUG nova.compute.manager [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.838 253665 DEBUG oslo_concurrency.lockutils [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.838 253665 DEBUG oslo_concurrency.lockutils [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.838 253665 DEBUG oslo_concurrency.lockutils [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.838 253665 DEBUG nova.compute.manager [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] No waiting events found dispatching network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:26 compute-0 nova_compute[253661]: 2025-11-22 09:22:26.839 253665 WARNING nova.compute.manager [req-a6f26bda-e535-43fc-8f84-6900f91f9494 req-24df1a06-b25b-4b22-92c9-d478f1551934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received unexpected event network-vif-plugged-14926b0c-50b5-475d-9ee6-f20c27dab8f0 for instance with vm_state active and task_state deleting.
Nov 22 09:22:26 compute-0 podman[330190]: 2025-11-22 09:22:26.743873556 +0000 UTC m=+0.030681523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:26 compute-0 podman[330190]: 2025-11-22 09:22:26.901411904 +0000 UTC m=+0.188219851 container create 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:22:26 compute-0 ceph-mon[75021]: pgmap v1747: 305 pgs: 305 active+clean; 273 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.9 MiB/s wr, 196 op/s
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:22:26 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:22:27 compute-0 systemd[1]: Started libpod-conmon-8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38.scope.
Nov 22 09:22:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:27 compute-0 podman[330190]: 2025-11-22 09:22:27.269420121 +0000 UTC m=+0.556228068 container init 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:22:27 compute-0 podman[330190]: 2025-11-22 09:22:27.277995269 +0000 UTC m=+0.564803206 container start 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:22:27 compute-0 hungry_meitner[330204]: 167 167
Nov 22 09:22:27 compute-0 systemd[1]: libpod-8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38.scope: Deactivated successfully.
Nov 22 09:22:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:27 compute-0 podman[330190]: 2025-11-22 09:22:27.417040801 +0000 UTC m=+0.703848748 container attach 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:22:27 compute-0 podman[330190]: 2025-11-22 09:22:27.417592944 +0000 UTC m=+0.704400911 container died 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:22:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 238 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 286 op/s
Nov 22 09:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb1894addeae8c2644425854f066ea2266b533f952413cab1ec054d7d3c750e0-merged.mount: Deactivated successfully.
Nov 22 09:22:27 compute-0 podman[330190]: 2025-11-22 09:22:27.864387626 +0000 UTC m=+1.151195573 container remove 8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:22:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:27.965 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:27 compute-0 systemd[1]: libpod-conmon-8721af14922020437a6c552139e2efc37c46dfa5f89dffc81864627f1702cd38.scope: Deactivated successfully.
Nov 22 09:22:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:28 compute-0 podman[330228]: 2025-11-22 09:22:28.136219187 +0000 UTC m=+0.102419267 container create bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:22:28 compute-0 podman[330228]: 2025-11-22 09:22:28.063621622 +0000 UTC m=+0.029821722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:28 compute-0 systemd[1]: Started libpod-conmon-bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443.scope.
Nov 22 09:22:28 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:28 compute-0 podman[330228]: 2025-11-22 09:22:28.31460174 +0000 UTC m=+0.280801820 container init bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:22:28 compute-0 podman[330228]: 2025-11-22 09:22:28.323789912 +0000 UTC m=+0.289989992 container start bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:22:28 compute-0 podman[330228]: 2025-11-22 09:22:28.364496336 +0000 UTC m=+0.330696416 container attach bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:22:28 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.756 253665 DEBUG nova.network.neutron [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:28 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.775 253665 INFO nova.compute.manager [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Took 2.81 seconds to deallocate network for instance.
Nov 22 09:22:28 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.851 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:28 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.852 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:28.999 253665 DEBUG nova.network.neutron [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.002 253665 DEBUG oslo_concurrency.processutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:29 compute-0 ceph-mon[75021]: pgmap v1748: 305 pgs: 305 active+clean; 238 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 286 op/s
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.072 253665 DEBUG nova.compute.manager [req-9ffde210-c5d1-4385-8cb9-e251e4e83136 req-e346750c-088d-4d28-a962-6d655633fbeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Received event network-vif-deleted-a6df8b13-e328-4092-b9ed-01f8d6de489a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.078 253665 INFO nova.compute.manager [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Took 2.47 seconds to deallocate network for instance.
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.133 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:29 compute-0 hardcore_grothendieck[330244]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:22:29 compute-0 hardcore_grothendieck[330244]: --> relative data size: 1.0
Nov 22 09:22:29 compute-0 hardcore_grothendieck[330244]: --> All data devices are unavailable
Nov 22 09:22:29 compute-0 podman[330228]: 2025-11-22 09:22:29.45086856 +0000 UTC m=+1.417068640 container died bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:22:29 compute-0 systemd[1]: libpod-bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443.scope: Deactivated successfully.
Nov 22 09:22:29 compute-0 systemd[1]: libpod-bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443.scope: Consumed 1.058s CPU time.
Nov 22 09:22:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 3.1 MiB/s wr, 436 op/s
Nov 22 09:22:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053172801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.518 253665 DEBUG oslo_concurrency.processutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.526 253665 DEBUG nova.compute.provider_tree [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.538 253665 DEBUG nova.scheduler.client.report [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-163e83691124f8793f25549b7896c04e785fdee5e512a71574dc4242c4ef2457-merged.mount: Deactivated successfully.
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.587 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.591 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:29 compute-0 podman[330228]: 2025-11-22 09:22:29.638841235 +0000 UTC m=+1.605041315 container remove bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:22:29 compute-0 systemd[1]: libpod-conmon-bcde76978134c35d1ea6e827a0ae54fbee4f84e387ab2048df3d8230b1e73443.scope: Deactivated successfully.
Nov 22 09:22:29 compute-0 sudo[330128]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:29 compute-0 podman[330307]: 2025-11-22 09:22:29.687541312 +0000 UTC m=+0.107775156 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.698 253665 DEBUG oslo_concurrency.processutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.745 253665 INFO nova.scheduler.client.report [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Deleted allocations for instance 688e9583-4ea7-4a94-b8d0-2758f83279af
Nov 22 09:22:29 compute-0 sudo[330325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:29 compute-0 sudo[330325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:29 compute-0 sudo[330325]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:29 compute-0 sudo[330363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:22:29 compute-0 sudo[330363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:29 compute-0 sudo[330363]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:29 compute-0 nova_compute[253661]: 2025-11-22 09:22:29.840 253665 DEBUG oslo_concurrency.lockutils [None req-d55bd0f0-660c-4751-8640-14d80215315f c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "688e9583-4ea7-4a94-b8d0-2758f83279af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:29 compute-0 podman[330333]: 2025-11-22 09:22:29.842650622 +0000 UTC m=+0.111016084 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:29 compute-0 sudo[330395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:29 compute-0 sudo[330395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:29 compute-0 sudo[330395]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:29 compute-0 sudo[330439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:22:29 compute-0 sudo[330439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1053172801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683120742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.241 253665 DEBUG oslo_concurrency.processutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.256 253665 DEBUG nova.compute.provider_tree [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.272 253665 DEBUG nova.scheduler.client.report [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.308 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.312 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.313 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.313 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.313 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.410 253665 INFO nova.scheduler.client.report [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Deleted allocations for instance 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.334134804 +0000 UTC m=+0.042553579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.435214228 +0000 UTC m=+0.143632983 container create ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.490 253665 DEBUG oslo_concurrency.lockutils [None req-0dea23f5-8328-4f01-9c08-46a523ee313c c57e583444d64b2a80c940052ff754eb fb7ce66b616d43838eb71b9c62cb2354 - - default default] Lock "023bf3c2-b4eb-4f0f-b471-a20fb9907e1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:30 compute-0 systemd[1]: Started libpod-conmon-ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca.scope.
Nov 22 09:22:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.619152626 +0000 UTC m=+0.327571411 container init ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.627282331 +0000 UTC m=+0.335701076 container start ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:22:30 compute-0 quirky_goldberg[330537]: 167 167
Nov 22 09:22:30 compute-0 systemd[1]: libpod-ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca.scope: Deactivated successfully.
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.660529795 +0000 UTC m=+0.368948580 container attach ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.661022607 +0000 UTC m=+0.369441362 container died ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-95103c736b11eac6d02ab9f6a4c5ec474df28d431bde63c5c45ee64d8490c6f8-merged.mount: Deactivated successfully.
Nov 22 09:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619140970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.858 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:30 compute-0 podman[330503]: 2025-11-22 09:22:30.91305919 +0000 UTC m=+0.621477945 container remove ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.954 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.955 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.960 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.960 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.962 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:30 compute-0 nova_compute[253661]: 2025-11-22 09:22:30.963 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:31 compute-0 systemd[1]: libpod-conmon-ec1c48d47986bc7a203923fd25d70d889297b2ba28fbdea666d75983fc5855ca.scope: Deactivated successfully.
Nov 22 09:22:31 compute-0 podman[330565]: 2025-11-22 09:22:31.186401539 +0000 UTC m=+0.083175962 container create a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.223 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:31 compute-0 podman[330565]: 2025-11-22 09:22:31.130359744 +0000 UTC m=+0.027134187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.227 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3532MB free_disk=59.92558670043945GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.227 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.228 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:31 compute-0 ceph-mon[75021]: pgmap v1749: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 3.1 MiB/s wr, 436 op/s
Nov 22 09:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1683120742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2619140970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:31 compute-0 systemd[1]: Started libpod-conmon-a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057.scope.
Nov 22 09:22:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.303 253665 DEBUG nova.compute.manager [req-bdc0618a-c781-4bb6-8387-67524141f916 req-7e213482-8fd8-48a1-aa25-572d495db0cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Received event network-vif-deleted-14926b0c-50b5-475d-9ee6-f20c27dab8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57871e5ac0e354d8bb6459944363a085b3d87abc1ac99d0210cca02c1d8b552/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57871e5ac0e354d8bb6459944363a085b3d87abc1ac99d0210cca02c1d8b552/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57871e5ac0e354d8bb6459944363a085b3d87abc1ac99d0210cca02c1d8b552/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57871e5ac0e354d8bb6459944363a085b3d87abc1ac99d0210cca02c1d8b552/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.308 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance b5f49b05-1b70-4479-98d9-83b995037a41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.308 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 41d22150-10c5-4aac-84a2-bebea895286e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.308 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d4350763-7fea-40fa-ade2-5aada492b3c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.309 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.309 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.397 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:31 compute-0 podman[330565]: 2025-11-22 09:22:31.412894294 +0000 UTC m=+0.309668757 container init a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:22:31 compute-0 podman[330565]: 2025-11-22 09:22:31.428917082 +0000 UTC m=+0.325691505 container start a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:22:31 compute-0 podman[330565]: 2025-11-22 09:22:31.475522659 +0000 UTC m=+0.372297082 container attach a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:22:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 771 KiB/s wr, 387 op/s
Nov 22 09:22:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032256261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.889 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.899 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:31 compute-0 nova_compute[253661]: 2025-11-22 09:22:31.918 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:32 compute-0 nova_compute[253661]: 2025-11-22 09:22:32.064 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:22:32 compute-0 nova_compute[253661]: 2025-11-22 09:22:32.065 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3032256261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:32 compute-0 sweet_hellman[330581]: {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     "0": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "devices": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "/dev/loop3"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             ],
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_name": "ceph_lv0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_size": "21470642176",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "name": "ceph_lv0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "tags": {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_name": "ceph",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.crush_device_class": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.encrypted": "0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_id": "0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.vdo": "0"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             },
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "vg_name": "ceph_vg0"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         }
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     ],
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     "1": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "devices": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "/dev/loop4"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             ],
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_name": "ceph_lv1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_size": "21470642176",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "name": "ceph_lv1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "tags": {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_name": "ceph",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.crush_device_class": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.encrypted": "0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_id": "1",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.vdo": "0"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             },
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "vg_name": "ceph_vg1"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         }
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     ],
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     "2": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "devices": [
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "/dev/loop5"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             ],
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_name": "ceph_lv2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_size": "21470642176",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "name": "ceph_lv2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "tags": {
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.cluster_name": "ceph",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.crush_device_class": "",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.encrypted": "0",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osd_id": "2",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:                 "ceph.vdo": "0"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             },
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "type": "block",
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:             "vg_name": "ceph_vg2"
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:         }
Nov 22 09:22:32 compute-0 sweet_hellman[330581]:     ]
Nov 22 09:22:32 compute-0 sweet_hellman[330581]: }
Nov 22 09:22:32 compute-0 systemd[1]: libpod-a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057.scope: Deactivated successfully.
Nov 22 09:22:32 compute-0 podman[330565]: 2025-11-22 09:22:32.385774575 +0000 UTC m=+1.282548998 container died a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e57871e5ac0e354d8bb6459944363a085b3d87abc1ac99d0210cca02c1d8b552-merged.mount: Deactivated successfully.
Nov 22 09:22:32 compute-0 podman[330565]: 2025-11-22 09:22:32.779718929 +0000 UTC m=+1.676493352 container remove a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hellman, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:22:32 compute-0 systemd[1]: libpod-conmon-a97b1b0fa9ace64ab51e1437ac15b1a7ad341258efbc935fa0df8af454810057.scope: Deactivated successfully.
Nov 22 09:22:32 compute-0 sudo[330439]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:32 compute-0 sudo[330621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:32 compute-0 sudo[330621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:32 compute-0 sudo[330621]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:32 compute-0 sudo[330646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:22:32 compute-0 sudo[330646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:32 compute-0 sudo[330646]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:33 compute-0 sudo[330671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:33 compute-0 sudo[330671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:33 compute-0 sudo[330671]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:33 compute-0 nova_compute[253661]: 2025-11-22 09:22:33.065 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:33 compute-0 nova_compute[253661]: 2025-11-22 09:22:33.066 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:33 compute-0 nova_compute[253661]: 2025-11-22 09:22:33.066 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:22:33 compute-0 nova_compute[253661]: 2025-11-22 09:22:33.066 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:22:33 compute-0 sudo[330696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:22:33 compute-0 sudo[330696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:33 compute-0 ceph-mon[75021]: pgmap v1750: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 771 KiB/s wr, 387 op/s
Nov 22 09:22:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 771 KiB/s wr, 415 op/s
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.410454358 +0000 UTC m=+0.025285343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.562020682 +0000 UTC m=+0.176851657 container create 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:33 compute-0 systemd[1]: Started libpod-conmon-1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca.scope.
Nov 22 09:22:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.895834963 +0000 UTC m=+0.510665948 container init 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.903014246 +0000 UTC m=+0.517845211 container start 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:22:33 compute-0 magical_ramanujan[330777]: 167 167
Nov 22 09:22:33 compute-0 systemd[1]: libpod-1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca.scope: Deactivated successfully.
Nov 22 09:22:33 compute-0 conmon[330777]: conmon 1e6b4b007a9d08b776c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca.scope/container/memory.events
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.993713019 +0000 UTC m=+0.608543984 container attach 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:22:33 compute-0 podman[330761]: 2025-11-22 09:22:33.994110278 +0000 UTC m=+0.608941243 container died 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:22:34 compute-0 nova_compute[253661]: 2025-11-22 09:22:34.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9166b9e9b7a9e5e77ceddf100be7cdd3345b419485ec95fbad23758062334469-merged.mount: Deactivated successfully.
Nov 22 09:22:34 compute-0 podman[330761]: 2025-11-22 09:22:34.392584612 +0000 UTC m=+1.007415597 container remove 1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ramanujan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 09:22:34 compute-0 systemd[1]: libpod-conmon-1e6b4b007a9d08b776c0927e4a4e85cb4eeca00c09f08d3a4c2726bcb394f6ca.scope: Deactivated successfully.
Nov 22 09:22:34 compute-0 podman[330796]: 2025-11-22 09:22:34.551303549 +0000 UTC m=+0.095748215 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 09:22:34 compute-0 podman[330822]: 2025-11-22 09:22:34.584682236 +0000 UTC m=+0.029310739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:22:35 compute-0 podman[330822]: 2025-11-22 09:22:35.096407247 +0000 UTC m=+0.541035720 container create 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:22:35 compute-0 ceph-mon[75021]: pgmap v1751: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 771 KiB/s wr, 415 op/s
Nov 22 09:22:35 compute-0 nova_compute[253661]: 2025-11-22 09:22:35.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:35 compute-0 systemd[1]: Started libpod-conmon-0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced.scope.
Nov 22 09:22:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47dc2e26dcdbbb60bb2375c2c54192a956d67551342c716f4a38eaf0439a9d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47dc2e26dcdbbb60bb2375c2c54192a956d67551342c716f4a38eaf0439a9d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47dc2e26dcdbbb60bb2375c2c54192a956d67551342c716f4a38eaf0439a9d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47dc2e26dcdbbb60bb2375c2c54192a956d67551342c716f4a38eaf0439a9d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 28 KiB/s wr, 337 op/s
Nov 22 09:22:35 compute-0 podman[330822]: 2025-11-22 09:22:35.854632259 +0000 UTC m=+1.299260762 container init 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:22:35 compute-0 podman[330822]: 2025-11-22 09:22:35.867132931 +0000 UTC m=+1.311761404 container start 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 09:22:36 compute-0 podman[330822]: 2025-11-22 09:22:36.133007559 +0000 UTC m=+1.577636072 container attach 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:36 compute-0 ceph-mon[75021]: pgmap v1752: 305 pgs: 305 active+clean; 181 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 28 KiB/s wr, 337 op/s
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.715 253665 DEBUG oslo_concurrency.lockutils [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.718 253665 DEBUG oslo_concurrency.lockutils [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.719 253665 DEBUG nova.compute.manager [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:36 compute-0 ovn_controller[152872]: 2025-11-22T09:22:36Z|00746|binding|INFO|Releasing lport 22c0239c-bdb2-4af4-a6a4-7ca3983ded8c from this chassis (sb_readonly=0)
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.726 253665 DEBUG nova.compute.manager [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.728 253665 DEBUG nova.objects.instance [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'flavor' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.773 253665 DEBUG nova.virt.libvirt.driver [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:22:36 compute-0 nova_compute[253661]: 2025-11-22 09:22:36.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]: {
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_id": 1,
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "type": "bluestore"
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     },
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_id": 0,
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "type": "bluestore"
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     },
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_id": 2,
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:         "type": "bluestore"
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]:     }
Nov 22 09:22:36 compute-0 optimistic_shtern[330841]: }
Nov 22 09:22:37 compute-0 systemd[1]: libpod-0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced.scope: Deactivated successfully.
Nov 22 09:22:37 compute-0 podman[330822]: 2025-11-22 09:22:37.008203458 +0000 UTC m=+2.452831931 container died 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 09:22:37 compute-0 systemd[1]: libpod-0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced.scope: Consumed 1.099s CPU time.
Nov 22 09:22:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47dc2e26dcdbbb60bb2375c2c54192a956d67551342c716f4a38eaf0439a9d0-merged.mount: Deactivated successfully.
Nov 22 09:22:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 184 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 462 KiB/s wr, 293 op/s
Nov 22 09:22:37 compute-0 podman[330822]: 2025-11-22 09:22:37.861593249 +0000 UTC m=+3.306221722 container remove 0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:22:37 compute-0 systemd[1]: libpod-conmon-0833c8af1642acc1d8f85eb95984a2315e568ce4cabb598a3dd913fec0db1ced.scope: Deactivated successfully.
Nov 22 09:22:37 compute-0 sudo[330696]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:22:38 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:22:38 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:38 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev acb4f14f-983c-4b44-b13c-28a5c2c6c19e does not exist
Nov 22 09:22:38 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5e93afad-7fd3-47b0-8183-f0686c8d9221 does not exist
Nov 22 09:22:38 compute-0 sudo[330885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:22:38 compute-0 sudo[330885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:38 compute-0 sudo[330885]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:38 compute-0 sudo[330910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:22:38 compute-0 ceph-mon[75021]: pgmap v1753: 305 pgs: 305 active+clean; 184 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 462 KiB/s wr, 293 op/s
Nov 22 09:22:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:22:38 compute-0 sudo[330910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:22:38 compute-0 sudo[330910]: pam_unix(sudo:session): session closed for user root
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.077 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803344.005186, 688e9583-4ea7-4a94-b8d0-2758f83279af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.078 253665 INFO nova.compute.manager [-] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] VM Stopped (Lifecycle Event)
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.100 253665 DEBUG nova.compute.manager [None req-f167a175-871b-41ee-9c84-aefb7960f43f - - - - - -] [instance: 688e9583-4ea7-4a94-b8d0-2758f83279af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.352 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803344.137492, 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.353 253665 INFO nova.compute.manager [-] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] VM Stopped (Lifecycle Event)
Nov 22 09:22:39 compute-0 nova_compute[253661]: 2025-11-22 09:22:39.373 253665 DEBUG nova.compute.manager [None req-434e0366-46f0-4211-9ae3-a80b981ec961 - - - - - -] [instance: 023bf3c2-b4eb-4f0f-b471-a20fb9907e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 195 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.5 MiB/s wr, 206 op/s
Nov 22 09:22:40 compute-0 nova_compute[253661]: 2025-11-22 09:22:40.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:41 compute-0 ceph-mon[75021]: pgmap v1754: 305 pgs: 305 active+clean; 195 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.5 MiB/s wr, 206 op/s
Nov 22 09:22:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 195 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Nov 22 09:22:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.334 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.335 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.404 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:22:42 compute-0 ceph-mon[75021]: pgmap v1755: 305 pgs: 305 active+clean; 195 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.525 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.526 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.535 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.537 253665 INFO nova.compute.claims [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:22:42 compute-0 nova_compute[253661]: 2025-11-22 09:22:42.790 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:42 compute-0 ovn_controller[152872]: 2025-11-22T09:22:42Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:ac:3e 10.100.0.7
Nov 22 09:22:42 compute-0 ovn_controller[152872]: 2025-11-22T09:22:42Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:ac:3e 10.100.0.7
Nov 22 09:22:43 compute-0 ovn_controller[152872]: 2025-11-22T09:22:43Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:70:49 10.100.0.8
Nov 22 09:22:43 compute-0 ovn_controller[152872]: 2025-11-22T09:22:43Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:70:49 10.100.0.8
Nov 22 09:22:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:22:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/354359956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.288 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.297 253665 DEBUG nova.compute.provider_tree [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.312 253665 DEBUG nova.scheduler.client.report [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.444 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.446 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:22:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 241 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.2 MiB/s wr, 108 op/s
Nov 22 09:22:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/354359956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.584 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.584 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.612 253665 INFO nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.640 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:22:43 compute-0 ovn_controller[152872]: 2025-11-22T09:22:43Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:22:43 compute-0 ovn_controller[152872]: 2025-11-22T09:22:43Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:22:43 compute-0 nova_compute[253661]: 2025-11-22 09:22:43.959 253665 DEBUG nova.policy [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e5709393702478dbf0bd566dc94d7fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b06c711e582499ab500917d85e27e3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.007 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.009 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.009 253665 INFO nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Creating image(s)
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.034 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.063 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.091 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.095 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.165 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.167 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.169 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.169 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.204 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.210 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9096405c-eb66-4d27-abbb-e709b767afea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:44 compute-0 ceph-mon[75021]: pgmap v1756: 305 pgs: 305 active+clean; 241 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.2 MiB/s wr, 108 op/s
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.869 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9096405c-eb66-4d27-abbb-e709b767afea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:44 compute-0 nova_compute[253661]: 2025-11-22 09:22:44.950 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] resizing rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.059 253665 DEBUG nova.objects.instance [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'migration_context' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.073 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.074 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Ensure instance console log exists: /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.074 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.075 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.075 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.185 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Successfully created port: ecdb3a4e-ac28-4357-9db5-41ebf06a4adc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:22:45 compute-0 nova_compute[253661]: 2025-11-22 09:22:45.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 248 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 6.1 MiB/s wr, 109 op/s
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.547 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Successfully updated port: ecdb3a4e-ac28-4357-9db5-41ebf06a4adc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.582 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.582 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquired lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.582 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:46 compute-0 ceph-mon[75021]: pgmap v1757: 305 pgs: 305 active+clean; 248 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 6.1 MiB/s wr, 109 op/s
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.698 253665 DEBUG nova.compute.manager [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-changed-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.698 253665 DEBUG nova.compute.manager [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Refreshing instance network info cache due to event network-changed-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.698 253665 DEBUG oslo_concurrency.lockutils [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.726 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:22:46 compute-0 nova_compute[253661]: 2025-11-22 09:22:46.861 253665 DEBUG nova.virt.libvirt.driver [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:22:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 280 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 810 KiB/s rd, 7.0 MiB/s wr, 165 op/s
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.778 253665 DEBUG nova.network.neutron [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.826 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.827 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance network_info: |[{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.827 253665 DEBUG oslo_concurrency.lockutils [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.828 253665 DEBUG nova.network.neutron [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Refreshing network info cache for port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.830 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start _get_guest_xml network_info=[{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.834 253665 WARNING nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.838 253665 DEBUG nova.virt.libvirt.host [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.839 253665 DEBUG nova.virt.libvirt.host [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.844 253665 DEBUG nova.virt.libvirt.host [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.845 253665 DEBUG nova.virt.libvirt.host [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.845 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.845 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.845 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.846 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.847 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.847 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.847 253665 DEBUG nova.virt.hardware [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:47 compute-0 nova_compute[253661]: 2025-11-22 09:22:47.849 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191415368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.286 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.307 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.312 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:48 compute-0 ceph-mon[75021]: pgmap v1758: 305 pgs: 305 active+clean; 280 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 810 KiB/s rd, 7.0 MiB/s wr, 165 op/s
Nov 22 09:22:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4191415368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255501502' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.830 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.832 253665 DEBUG nova.virt.libvirt.vif [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-903328611',display_name='tempest-ServerActionsTestOtherA-server-903328611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-903328611',id=75,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFom9y+7W1OzHUVkvflqnu/6xnxZe0N+aQAyRSLRBCSgO6CoYZ20Adms5sFPGUitwuO09dh9qM8uob9/gGVzUyIJo9HanjWjMYRoIceLs8pZBGhLtn51xjZTJ05EGeq1rA==',key_name='tempest-keypair-2136084686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-o7pshihd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9096405c-eb66-4d27-abbb-e709b767afea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.833 253665 DEBUG nova.network.os_vif_util [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.834 253665 DEBUG nova.network.os_vif_util [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.835 253665 DEBUG nova.objects.instance [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.845 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <uuid>9096405c-eb66-4d27-abbb-e709b767afea</uuid>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <name>instance-0000004b</name>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherA-server-903328611</nova:name>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:47</nova:creationTime>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:user uuid="7e5709393702478dbf0bd566dc94d7fe">tempest-ServerActionsTestOtherA-1527475006-project-member</nova:user>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:project uuid="9b06c711e582499ab500917d85e27e3c">tempest-ServerActionsTestOtherA-1527475006</nova:project>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <nova:port uuid="ecdb3a4e-ac28-4357-9db5-41ebf06a4adc">
Nov 22 09:22:48 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="serial">9096405c-eb66-4d27-abbb-e709b767afea</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="uuid">9096405c-eb66-4d27-abbb-e709b767afea</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9096405c-eb66-4d27-abbb-e709b767afea_disk">
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9096405c-eb66-4d27-abbb-e709b767afea_disk.config">
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:48 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:e0:3e:fb"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <target dev="tapecdb3a4e-ac"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/console.log" append="off"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:48 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:48 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:48 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:48 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:48 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.847 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Preparing to wait for external event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.847 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.847 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.847 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.848 253665 DEBUG nova.virt.libvirt.vif [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:22:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-903328611',display_name='tempest-ServerActionsTestOtherA-server-903328611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-903328611',id=75,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFom9y+7W1OzHUVkvflqnu/6xnxZe0N+aQAyRSLRBCSgO6CoYZ20Adms5sFPGUitwuO09dh9qM8uob9/gGVzUyIJo9HanjWjMYRoIceLs8pZBGhLtn51xjZTJ05EGeq1rA==',key_name='tempest-keypair-2136084686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-o7pshihd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:22:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9096405c-eb66-4d27-abbb-e709b767afea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.849 253665 DEBUG nova.network.os_vif_util [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.849 253665 DEBUG nova.network.os_vif_util [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.850 253665 DEBUG os_vif [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.855 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecdb3a4e-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapecdb3a4e-ac, col_values=(('external_ids', {'iface-id': 'ecdb3a4e-ac28-4357-9db5-41ebf06a4adc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3e:fb', 'vm-uuid': '9096405c-eb66-4d27-abbb-e709b767afea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:48 compute-0 NetworkManager[48920]: <info>  [1763803368.8589] manager: (tapecdb3a4e-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.866 253665 INFO os_vif [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac')
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.915 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.915 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.916 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No VIF found with MAC fa:16:3e:e0:3e:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.916 253665 INFO nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Using config drive
Nov 22 09:22:48 compute-0 nova_compute[253661]: 2025-11-22 09:22:48.936 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:49 compute-0 kernel: tapdb173c1b-18 (unregistering): left promiscuous mode
Nov 22 09:22:49 compute-0 NetworkManager[48920]: <info>  [1763803369.1596] device (tapdb173c1b-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:22:49 compute-0 ovn_controller[152872]: 2025-11-22T09:22:49Z|00747|binding|INFO|Releasing lport db173c1b-18d4-496e-96f0-d12c680c680e from this chassis (sb_readonly=0)
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:49 compute-0 ovn_controller[152872]: 2025-11-22T09:22:49Z|00748|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e down in Southbound
Nov 22 09:22:49 compute-0 ovn_controller[152872]: 2025-11-22T09:22:49Z|00749|binding|INFO|Removing iface tapdb173c1b-18 ovn-installed in OVS
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.172 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:49 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 22 09:22:49 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Consumed 15.736s CPU time.
Nov 22 09:22:49 compute-0 systemd-machined[215941]: Machine qemu-87-instance-00000048 terminated.
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.251 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:27:be 10.100.0.10'], port_security=['fa:16:3e:25:27:be 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b5f49b05-1b70-4479-98d9-83b995037a41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=db173c1b-18d4-496e-96f0-d12c680c680e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.252 162862 INFO neutron.agent.ovn.metadata.agent [-] Port db173c1b-18d4-496e-96f0-d12c680c680e in datapath 6f77386c-0230-4d59-9773-818360efc15e unbound from our chassis
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.253 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[516ad806-9aa8-40ac-9ff0-877e81cc5b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.298 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[02c57356-ae36-41b5-82d4-2ff7f113de01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.301 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78a95cc3-4b9f-4995-88fb-23f251f3e3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.329 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14ae7222-39c4-4754-9a21-430da2f9ee89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b731c29-2ab0-48e4-ad63-445c89d23e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331218, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.358 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fccf30bf-d2e1-436a-b378-14a4601c82fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331219, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331219, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.360 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.369 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.370 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.370 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:49.370 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 325 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 7.7 MiB/s wr, 209 op/s
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.627 253665 INFO nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Creating config drive at /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config
Nov 22 09:22:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2255501502' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.634 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vdn11yp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.675 253665 DEBUG nova.network.neutron [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updated VIF entry in instance network info cache for port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.676 253665 DEBUG nova.network.neutron [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.690 253665 DEBUG oslo_concurrency.lockutils [req-18c13f6b-4431-4940-ae83-7cae949b1b65 req-a049a810-c287-464e-b740-133050e561bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.782 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vdn11yp" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.810 253665 DEBUG nova.storage.rbd_utils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9096405c-eb66-4d27-abbb-e709b767afea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.814 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config 9096405c-eb66-4d27-abbb-e709b767afea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.898 253665 INFO nova.virt.libvirt.driver [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance shutdown successfully after 13 seconds.
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.905 253665 INFO nova.virt.libvirt.driver [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance destroyed successfully.
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.906 253665 DEBUG nova.objects.instance [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'numa_topology' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.916 253665 DEBUG nova.compute.manager [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.952 253665 DEBUG nova.compute.manager [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.953 253665 DEBUG oslo_concurrency.lockutils [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.953 253665 DEBUG oslo_concurrency.lockutils [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.954 253665 DEBUG oslo_concurrency.lockutils [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.954 253665 DEBUG nova.compute.manager [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:49 compute-0 nova_compute[253661]: 2025-11-22 09:22:49.954 253665 WARNING nova.compute.manager [req-f48ce2d2-08bb-4d85-9ae0-b6d2d67b0dab req-25f67da7-fdca-4fad-8a3f-18aba1c7b83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state active and task_state powering-off.
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.004 253665 DEBUG oslo_concurrency.processutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config 9096405c-eb66-4d27-abbb-e709b767afea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.004 253665 INFO nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deleting local config drive /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea/disk.config because it was imported into RBD.
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.030 253665 DEBUG oslo_concurrency.lockutils [None req-85f6ccdd-f2b4-4cd6-b394-14ace280784b a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:50 compute-0 kernel: tapecdb3a4e-ac: entered promiscuous mode
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.0697] manager: (tapecdb3a4e-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Nov 22 09:22:50 compute-0 systemd-udevd[331210]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:50 compute-0 ovn_controller[152872]: 2025-11-22T09:22:50Z|00750|binding|INFO|Claiming lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for this chassis.
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 ovn_controller[152872]: 2025-11-22T09:22:50Z|00751|binding|INFO|ecdb3a4e-ac28-4357-9db5-41ebf06a4adc: Claiming fa:16:3e:e0:3e:fb 10.100.0.5
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.0844] device (tapecdb3a4e-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.0857] device (tapecdb3a4e-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:50 compute-0 systemd-machined[215941]: New machine qemu-90-instance-0000004b.
Nov 22 09:22:50 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004b.
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 ovn_controller[152872]: 2025-11-22T09:22:50Z|00752|binding|INFO|Setting lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc ovn-installed in OVS
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 ovn_controller[152872]: 2025-11-22T09:22:50Z|00753|binding|INFO|Setting lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc up in Southbound
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.427 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:fb 10.100.0.5'], port_security=['fa:16:3e:e0:3e:fb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9096405c-eb66-4d27-abbb-e709b767afea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6456c660-bee8-4527-8966-f035b8f73def', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.429 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 bound to our chassis
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.430 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.442 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7cc557-9bf6-4d00-bb0b-378dfcd5faee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.448 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0936cc0d-31 in ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.450 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0936cc0d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.450 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18b10a33-939c-49de-a8e2-2cc16913854e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.451 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66b7d712-7585-49e8-9cf9-db1d4fa465bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.463 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9fb24c-ddf1-4c4b-8221-bc3739141f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.478 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5680a5-2a38-40b6-8d81-de7dcfeab91d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.511 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b34d511b-23bc-4932-b6e3-4deeba6c8080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.519 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c42795f6-62a3-489d-b067-7e8d8d91d6f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.5200] manager: (tap0936cc0d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.555 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d63f84-2ef9-4e2f-9032-4263fc10c313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.558 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[370c55fe-b32c-4a89-aac7-a28613224c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.5794] device (tap0936cc0d-30): carrier: link connected
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.584 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[154e6770-4e18-4f93-b8d9-f3de851d3616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.602 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b630b9-f01f-473d-80f9-cd8e6cc50668]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 30845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331369, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03dc1fcd-2f7e-4696-b2c4-5fdb12d70812]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:f05e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622131, 'tstamp': 622131}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331370, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.628 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803370.6282182, 9096405c-eb66-4d27-abbb-e709b767afea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.629 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] VM Started (Lifecycle Event)
Nov 22 09:22:50 compute-0 ceph-mon[75021]: pgmap v1759: 305 pgs: 305 active+clean; 325 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 7.7 MiB/s wr, 209 op/s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.642 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27f3b1ae-1416-4cbd-ab9f-c4fe639d128f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 30845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331371, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.645 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.650 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803370.628468, 9096405c-eb66-4d27-abbb-e709b767afea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.650 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] VM Paused (Lifecycle Event)
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.669 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.672 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.676 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a3249-e98b-4857-bbce-d25ef4bbe070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.688 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.740 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74018686-fc99-48dd-a3bd-defa91e7c98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.741 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 NetworkManager[48920]: <info>  [1763803370.7450] manager: (tap0936cc0d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 22 09:22:50 compute-0 kernel: tap0936cc0d-30: entered promiscuous mode
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.750 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:50 compute-0 ovn_controller[152872]: 2025-11-22T09:22:50Z|00754|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.771 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 nova_compute[253661]: 2025-11-22 09:22:50.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.777 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0936cc0d-3697-4210-9c23-8f3e8e452e86.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0936cc0d-3697-4210-9c23-8f3e8e452e86.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.777 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7034bc29-b6a1-4c57-b33e-805bb8fa6f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.778 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/0936cc0d-3697-4210-9c23-8f3e8e452e86.pid.haproxy
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:22:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:50.779 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'env', 'PROCESS_TAG=haproxy-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0936cc0d-3697-4210-9c23-8f3e8e452e86.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:22:51 compute-0 podman[331406]: 2025-11-22 09:22:51.142501029 +0000 UTC m=+0.053322641 container create 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:22:51 compute-0 systemd[1]: Started libpod-conmon-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3.scope.
Nov 22 09:22:51 compute-0 podman[331406]: 2025-11-22 09:22:51.112880602 +0000 UTC m=+0.023702244 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:22:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1019a479c4f9b61416483ab6dc0c9e11f221309433f7114f4bdcf465ffa0866d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:22:51 compute-0 podman[331406]: 2025-11-22 09:22:51.234676677 +0000 UTC m=+0.145498289 container init 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:22:51 compute-0 podman[331406]: 2025-11-22 09:22:51.23974689 +0000 UTC m=+0.150568502 container start 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:22:51 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : New worker (331427) forked
Nov 22 09:22:51 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : Loading success.
Nov 22 09:22:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 325 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 904 KiB/s rd, 6.6 MiB/s wr, 189 op/s
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:22:52
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:22:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.378 253665 DEBUG nova.compute.manager [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.378 253665 DEBUG oslo_concurrency.lockutils [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.378 253665 DEBUG oslo_concurrency.lockutils [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.379 253665 DEBUG oslo_concurrency.lockutils [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.379 253665 DEBUG nova.compute.manager [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.379 253665 WARNING nova.compute.manager [req-99f8bb91-232c-4489-b681-baf38ae22ff3 req-e5414753-1bba-4a60-a922-c9ccc5b8957e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state stopped and task_state None.
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:22:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.807 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'flavor' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.828 253665 DEBUG oslo_concurrency.lockutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.829 253665 DEBUG oslo_concurrency.lockutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquired lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.829 253665 DEBUG nova.network.neutron [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:22:52 compute-0 nova_compute[253661]: 2025-11-22 09:22:52.829 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'info_cache' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:53 compute-0 ceph-mon[75021]: pgmap v1760: 305 pgs: 305 active+clean; 325 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 904 KiB/s rd, 6.6 MiB/s wr, 189 op/s
Nov 22 09:22:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 6.7 MiB/s wr, 198 op/s
Nov 22 09:22:53 compute-0 nova_compute[253661]: 2025-11-22 09:22:53.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.453 253665 DEBUG nova.compute.manager [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.453 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.454 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.454 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.454 253665 DEBUG nova.compute.manager [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Processing event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.454 253665 DEBUG nova.compute.manager [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.455 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.455 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.455 253665 DEBUG oslo_concurrency.lockutils [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.455 253665 DEBUG nova.compute.manager [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] No waiting events found dispatching network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.455 253665 WARNING nova.compute.manager [req-ea59be73-22ed-41ac-88a7-306cd2ed5d1f req-d82ab73c-2b5f-41fa-b2c1-22bb24d48cf2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received unexpected event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for instance with vm_state building and task_state spawning.
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.456 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.462 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803374.4617736, 9096405c-eb66-4d27-abbb-e709b767afea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.462 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] VM Resumed (Lifecycle Event)
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.465 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.469 253665 INFO nova.virt.libvirt.driver [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance spawned successfully.
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.470 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.491 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.495 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.496 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.497 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.498 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.498 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.499 253665 DEBUG nova.virt.libvirt.driver [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.508 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.555 253665 DEBUG nova.network.neutron [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Updating instance_info_cache with network_info: [{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.567 253665 DEBUG oslo_concurrency.lockutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Releasing lock "refresh_cache-b5f49b05-1b70-4479-98d9-83b995037a41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:22:54 compute-0 ceph-mon[75021]: pgmap v1761: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 6.7 MiB/s wr, 198 op/s
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.592 253665 INFO nova.virt.libvirt.driver [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance destroyed successfully.
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.593 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'numa_topology' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:54 compute-0 nova_compute[253661]: 2025-11-22 09:22:54.616 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'resources' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.077 253665 DEBUG nova.virt.libvirt.vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:50Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.077 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.078 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.079 253665 DEBUG os_vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.082 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb173c1b-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.087 253665 INFO os_vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18')
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.094 253665 DEBUG nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start _get_guest_xml network_info=[{"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.096 253665 INFO nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 11.09 seconds to spawn the instance on the hypervisor.
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.096 253665 DEBUG nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.103 253665 WARNING nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.108 253665 DEBUG nova.virt.libvirt.host [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.109 253665 DEBUG nova.virt.libvirt.host [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.111 253665 DEBUG nova.virt.libvirt.host [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.111 253665 DEBUG nova.virt.libvirt.host [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.112 253665 DEBUG nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.112 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.113 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.113 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.113 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.113 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.114 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.114 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.114 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.114 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.115 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.115 253665 DEBUG nova.virt.hardware [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.115 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.129 253665 DEBUG oslo_concurrency.processutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.337 253665 INFO nova.compute.manager [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 12.85 seconds to build instance.
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.428 253665 DEBUG oslo_concurrency.lockutils [None req-51f722e3-1e28-42ad-bceb-a9e2e45f4960 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 3.0 MiB/s wr, 146 op/s
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:22:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:22:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169009233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.648 253665 DEBUG oslo_concurrency.processutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:55 compute-0 nova_compute[253661]: 2025-11-22 09:22:55.688 253665 DEBUG oslo_concurrency.processutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:22:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:22:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86109872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.181 253665 DEBUG oslo_concurrency.processutils [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.184 253665 DEBUG nova.virt.libvirt.vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:50Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.185 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.186 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.188 253665 DEBUG nova.objects.instance [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.203 253665 DEBUG nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <uuid>b5f49b05-1b70-4479-98d9-83b995037a41</uuid>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <name>instance-00000048</name>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1090017716</nova:name>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:22:55</nova:creationTime>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:user uuid="a611caa9b8c247f083ce8b67780fbc01">tempest-ListServerFiltersTestJSON-1747318421-project-member</nova:user>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:project uuid="e5a1bb02c0c047be92aba24831aef1a5">tempest-ListServerFiltersTestJSON-1747318421</nova:project>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <nova:port uuid="db173c1b-18d4-496e-96f0-d12c680c680e">
Nov 22 09:22:56 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="serial">b5f49b05-1b70-4479-98d9-83b995037a41</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="uuid">b5f49b05-1b70-4479-98d9-83b995037a41</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b5f49b05-1b70-4479-98d9-83b995037a41_disk">
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b5f49b05-1b70-4479-98d9-83b995037a41_disk.config">
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:22:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:25:27:be"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <target dev="tapdb173c1b-18"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41/console.log" append="off"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:22:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:22:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:22:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:22:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:22:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.205 253665 DEBUG nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.206 253665 DEBUG nova.virt.libvirt.driver [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.206 253665 DEBUG nova.virt.libvirt.vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:50Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.207 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.207 253665 DEBUG nova.network.os_vif_util [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.208 253665 DEBUG os_vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.210 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.210 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.213 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb173c1b-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.214 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb173c1b-18, col_values=(('external_ids', {'iface-id': 'db173c1b-18d4-496e-96f0-d12c680c680e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:27:be', 'vm-uuid': 'b5f49b05-1b70-4479-98d9-83b995037a41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 NetworkManager[48920]: <info>  [1763803376.2168] manager: (tapdb173c1b-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.218 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.221 253665 INFO os_vif [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18')
Nov 22 09:22:56 compute-0 NetworkManager[48920]: <info>  [1763803376.3090] manager: (tapdb173c1b-18): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Nov 22 09:22:56 compute-0 kernel: tapdb173c1b-18: entered promiscuous mode
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 ovn_controller[152872]: 2025-11-22T09:22:56Z|00755|binding|INFO|Claiming lport db173c1b-18d4-496e-96f0-d12c680c680e for this chassis.
Nov 22 09:22:56 compute-0 ovn_controller[152872]: 2025-11-22T09:22:56Z|00756|binding|INFO|db173c1b-18d4-496e-96f0-d12c680c680e: Claiming fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:22:56 compute-0 ovn_controller[152872]: 2025-11-22T09:22:56Z|00757|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e ovn-installed in OVS
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 systemd-machined[215941]: New machine qemu-91-instance-00000048.
Nov 22 09:22:56 compute-0 systemd-udevd[331513]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:22:56 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000048.
Nov 22 09:22:56 compute-0 NetworkManager[48920]: <info>  [1763803376.3760] device (tapdb173c1b-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:22:56 compute-0 NetworkManager[48920]: <info>  [1763803376.3791] device (tapdb173c1b-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:22:56 compute-0 ovn_controller[152872]: 2025-11-22T09:22:56Z|00758|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e up in Southbound
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.459 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:27:be 10.100.0.10'], port_security=['fa:16:3e:25:27:be 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b5f49b05-1b70-4479-98d9-83b995037a41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=db173c1b-18d4-496e-96f0-d12c680c680e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.461 162862 INFO neutron.agent.ovn.metadata.agent [-] Port db173c1b-18d4-496e-96f0-d12c680c680e in datapath 6f77386c-0230-4d59-9773-818360efc15e bound to our chassis
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.463 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.480 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62f8dd02-cbc2-4bf9-a6d8-5e2772841282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.519 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b4190ac0-a9c1-43ce-a652-184ea3b81044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.523 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8c94fd58-a553-4e05-bbc2-4471e331192c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.557 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0185d21-8058-4bbf-8ea0-8b7c2c4af4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.578 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aef93ce5-60ae-4a99-908e-8a80056405dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331531, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ceph-mon[75021]: pgmap v1762: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 3.0 MiB/s wr, 146 op/s
Nov 22 09:22:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1169009233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/86109872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.602 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d27394-0e9f-421b-9435-c9f0ae921204]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331543, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331543, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:22:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:22:56.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.753 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for b5f49b05-1b70-4479-98d9-83b995037a41 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.754 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803376.7529366, b5f49b05-1b70-4479-98d9-83b995037a41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.754 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Resumed (Lifecycle Event)
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.756 253665 DEBUG nova.compute.manager [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.760 253665 INFO nova.virt.libvirt.driver [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance rebooted successfully.
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.761 253665 DEBUG nova.compute.manager [None req-41f973c4-1dd4-4314-a812-8634d565c2ef a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.788 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.792 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.820 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.820 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803376.754213, b5f49b05-1b70-4479-98d9-83b995037a41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.821 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Started (Lifecycle Event)
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.842 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.847 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:22:56 compute-0 nova_compute[253661]: 2025-11-22 09:22:56.877 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.209 253665 DEBUG nova.compute.manager [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.210 253665 DEBUG oslo_concurrency.lockutils [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.210 253665 DEBUG oslo_concurrency.lockutils [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.210 253665 DEBUG oslo_concurrency.lockutils [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.210 253665 DEBUG nova.compute.manager [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.210 253665 WARNING nova.compute.manager [req-1921f625-9224-4d53-a63b-b7b9c3bde117 req-0d5cd147-bffa-42ab-9716-d2c25d6ec5d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state active and task_state None.
Nov 22 09:22:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:22:57 compute-0 ovn_controller[152872]: 2025-11-22T09:22:57Z|00759|binding|INFO|Releasing lport 22c0239c-bdb2-4af4-a6a4-7ca3983ded8c from this chassis (sb_readonly=0)
Nov 22 09:22:57 compute-0 NetworkManager[48920]: <info>  [1763803377.4695] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Nov 22 09:22:57 compute-0 ovn_controller[152872]: 2025-11-22T09:22:57Z|00760|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:57 compute-0 NetworkManager[48920]: <info>  [1763803377.4711] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Nov 22 09:22:57 compute-0 ovn_controller[152872]: 2025-11-22T09:22:57Z|00761|binding|INFO|Releasing lport 22c0239c-bdb2-4af4-a6a4-7ca3983ded8c from this chassis (sb_readonly=0)
Nov 22 09:22:57 compute-0 ovn_controller[152872]: 2025-11-22T09:22:57Z|00762|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:22:57 compute-0 nova_compute[253661]: 2025-11-22 09:22:57.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:22:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 22 09:22:58 compute-0 ceph-mon[75021]: pgmap v1763: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.411 253665 DEBUG nova.compute.manager [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.412 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.413 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.413 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.414 253665 DEBUG nova.compute.manager [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.415 253665 WARNING nova.compute.manager [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state active and task_state None.
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.415 253665 DEBUG nova.compute.manager [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-changed-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.415 253665 DEBUG nova.compute.manager [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Refreshing instance network info cache due to event network-changed-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.416 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.416 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.416 253665 DEBUG nova.network.neutron [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Refreshing network info cache for port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:22:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 160 op/s
Nov 22 09:22:59 compute-0 nova_compute[253661]: 2025-11-22 09:22:59.742 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:00 compute-0 podman[331572]: 2025-11-22 09:23:00.414912079 +0000 UTC m=+0.079785440 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:23:00 compute-0 nova_compute[253661]: 2025-11-22 09:23:00.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:00 compute-0 podman[331573]: 2025-11-22 09:23:00.422517903 +0000 UTC m=+0.087728212 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Nov 22 09:23:00 compute-0 nova_compute[253661]: 2025-11-22 09:23:00.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:00.433 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:00.434 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:23:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:00.435 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:01 compute-0 ceph-mon[75021]: pgmap v1764: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 160 op/s
Nov 22 09:23:01 compute-0 nova_compute[253661]: 2025-11-22 09:23:01.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 41 KiB/s wr, 109 op/s
Nov 22 09:23:01 compute-0 nova_compute[253661]: 2025-11-22 09:23:01.524 253665 DEBUG nova.network.neutron [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updated VIF entry in instance network info cache for port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:23:01 compute-0 nova_compute[253661]: 2025-11-22 09:23:01.526 253665 DEBUG nova.network.neutron [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:01 compute-0 nova_compute[253661]: 2025-11-22 09:23:01.614 253665 DEBUG oslo_concurrency.lockutils [req-d275d56d-8d11-4a2f-b3f0-a9a719cd57f3 req-96d19b8f-5c9b-47d2-83c6-1b8fed5278ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026235618325519306 of space, bias 1.0, pg target 0.7870685497655792 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:23:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:23:03 compute-0 ceph-mon[75021]: pgmap v1765: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 41 KiB/s wr, 109 op/s
Nov 22 09:23:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 146 op/s
Nov 22 09:23:05 compute-0 ceph-mon[75021]: pgmap v1766: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 146 op/s
Nov 22 09:23:05 compute-0 nova_compute[253661]: 2025-11-22 09:23:05.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:05 compute-0 podman[331609]: 2025-11-22 09:23:05.455171633 +0000 UTC m=+0.139819151 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 22 09:23:05 compute-0 nova_compute[253661]: 2025-11-22 09:23:05.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 KiB/s wr, 137 op/s
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.040 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.042 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.042 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.042 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.042 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.044 253665 INFO nova.compute.manager [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Terminating instance
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.045 253665 DEBUG nova.compute.manager [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 kernel: tapb4377651-ae (unregistering): left promiscuous mode
Nov 22 09:23:06 compute-0 NetworkManager[48920]: <info>  [1763803386.2942] device (tapb4377651-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.307 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 ovn_controller[152872]: 2025-11-22T09:23:06Z|00763|binding|INFO|Releasing lport b4377651-ae31-406d-9f8b-b7b241d511df from this chassis (sb_readonly=0)
Nov 22 09:23:06 compute-0 ovn_controller[152872]: 2025-11-22T09:23:06Z|00764|binding|INFO|Setting lport b4377651-ae31-406d-9f8b-b7b241d511df down in Southbound
Nov 22 09:23:06 compute-0 ovn_controller[152872]: 2025-11-22T09:23:06Z|00765|binding|INFO|Removing iface tapb4377651-ae ovn-installed in OVS
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.314 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ac:3e 10.100.0.7'], port_security=['fa:16:3e:6d:ac:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4350763-7fea-40fa-ade2-5aada492b3c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b4377651-ae31-406d-9f8b-b7b241d511df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.315 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b4377651-ae31-406d-9f8b-b7b241d511df in datapath 6f77386c-0230-4d59-9773-818360efc15e unbound from our chassis
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.317 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.341 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5803192f-7105-4265-948f-20a91f5593f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 22 09:23:06 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Consumed 14.814s CPU time.
Nov 22 09:23:06 compute-0 systemd-machined[215941]: Machine qemu-89-instance-0000004a terminated.
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.381 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[de2aec46-0d8d-450d-8dd0-d97bb921a9a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.387 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[500884e5-c83d-4ad2-8d7f-bd9659207bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.438 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79290edd-518e-44cd-8ef5-46535ca5a503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0091382c-5835-4f9b-ba7d-f09f9f7d583a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331645, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.488 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[37d453ed-2a41-40cd-8134-c1dc01e40e8d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331648, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331648, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.491 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.502 253665 INFO nova.virt.libvirt.driver [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Instance destroyed successfully.
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.503 253665 DEBUG nova.objects.instance [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'resources' on Instance uuid d4350763-7fea-40fa-ade2-5aada492b3c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.503 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.503 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.503 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:06.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.515 253665 DEBUG nova.virt.libvirt.vif [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-578190102',display_name='tempest-ListServerFiltersTestJSON-instance-578190102',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-578190102',id=74,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-czi9aaxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:26Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=d4350763-7fea-40fa-ade2-5aada492b3c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.516 253665 DEBUG nova.network.os_vif_util [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "b4377651-ae31-406d-9f8b-b7b241d511df", "address": "fa:16:3e:6d:ac:3e", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4377651-ae", "ovs_interfaceid": "b4377651-ae31-406d-9f8b-b7b241d511df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.517 253665 DEBUG nova.network.os_vif_util [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.517 253665 DEBUG os_vif [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.521 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4377651-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:06 compute-0 nova_compute[253661]: 2025-11-22 09:23:06.530 253665 INFO os_vif [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ac:3e,bridge_name='br-int',has_traffic_filtering=True,id=b4377651-ae31-406d-9f8b-b7b241d511df,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4377651-ae')
Nov 22 09:23:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:07 compute-0 ceph-mon[75021]: pgmap v1767: 305 pgs: 305 active+clean; 326 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 KiB/s wr, 137 op/s
Nov 22 09:23:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 328 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 132 KiB/s wr, 140 op/s
Nov 22 09:23:09 compute-0 ceph-mon[75021]: pgmap v1768: 305 pgs: 305 active+clean; 328 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 132 KiB/s wr, 140 op/s
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.288 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.290 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.369 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:23:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 316 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 130 op/s
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.542 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.543 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.552 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.552 253665 INFO nova.compute.claims [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:23:09 compute-0 nova_compute[253661]: 2025-11-22 09:23:09.716 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712562446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.225 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.232 253665 DEBUG nova.compute.provider_tree [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.246 253665 DEBUG nova.scheduler.client.report [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.269 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.270 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.302 253665 DEBUG nova.compute.manager [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-unplugged-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.302 253665 DEBUG oslo_concurrency.lockutils [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.303 253665 DEBUG oslo_concurrency.lockutils [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.303 253665 DEBUG oslo_concurrency.lockutils [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.303 253665 DEBUG nova.compute.manager [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] No waiting events found dispatching network-vif-unplugged-b4377651-ae31-406d-9f8b-b7b241d511df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.303 253665 DEBUG nova.compute.manager [req-030c9a2c-f782-4fb8-99c7-c04fc88065eb req-8a099262-1498-4bcb-b625-83fcd477ac08 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-unplugged-b4377651-ae31-406d-9f8b-b7b241d511df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:23:10 compute-0 ovn_controller[152872]: 2025-11-22T09:23:10Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:3e:fb 10.100.0.5
Nov 22 09:23:10 compute-0 ovn_controller[152872]: 2025-11-22T09:23:10Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:3e:fb 10.100.0.5
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.324 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.325 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.342 253665 INFO nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.357 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.451 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.453 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.454 253665 INFO nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Creating image(s)
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.556 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.584 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.611 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.617 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3712562446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.696 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.697 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.698 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.698 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.800 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.809 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 93493d5d-7044-4597-9113-3231f49f8263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:10 compute-0 nova_compute[253661]: 2025-11-22 09:23:10.875 253665 DEBUG nova.policy [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bc178aa031d46cf982ad58e57d0b59c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ec2330389984b7b9d32e9c941ed8f06', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:23:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 316 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 71 op/s
Nov 22 09:23:11 compute-0 nova_compute[253661]: 2025-11-22 09:23:11.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:11 compute-0 nova_compute[253661]: 2025-11-22 09:23:11.529 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Successfully created port: 7f696d3e-337d-48cb-bb42-5caf1246db05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:23:11 compute-0 ceph-mon[75021]: pgmap v1769: 305 pgs: 305 active+clean; 316 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 130 op/s
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.087 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 93493d5d-7044-4597-9113-3231f49f8263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.163 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Successfully updated port: 7f696d3e-337d-48cb-bb42-5caf1246db05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.171 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] resizing rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:23:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:23:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030217966' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:23:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:23:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030217966' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:23:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.430 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.431 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquired lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.431 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.436830) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392436890, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 251, "total_data_size": 1053737, "memory_usage": 1082104, "flush_reason": "Manual Compaction"}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.497 253665 DEBUG nova.compute.manager [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.497 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.497 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.497 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.498 253665 DEBUG nova.compute.manager [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] No waiting events found dispatching network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.498 253665 WARNING nova.compute.manager [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received unexpected event network-vif-plugged-b4377651-ae31-406d-9f8b-b7b241d511df for instance with vm_state active and task_state deleting.
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.498 253665 DEBUG nova.compute.manager [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-changed-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.498 253665 DEBUG nova.compute.manager [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Refreshing instance network info cache due to event network-changed-7f696d3e-337d-48cb-bb42-5caf1246db05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.498 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392521435, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 674206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35427, "largest_seqno": 36283, "table_properties": {"data_size": 670644, "index_size": 1278, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9814, "raw_average_key_size": 20, "raw_value_size": 662969, "raw_average_value_size": 1407, "num_data_blocks": 57, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803324, "oldest_key_time": 1763803324, "file_creation_time": 1763803392, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 84694 microseconds, and 4775 cpu microseconds.
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.521502) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 674206 bytes OK
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.521554) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.537175) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.537234) EVENT_LOG_v1 {"time_micros": 1763803392537222, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.537264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1049479, prev total WAL file size 1049479, number of live WAL files 2.
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.538167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(658KB)], [77(10100KB)]
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392538204, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11016991, "oldest_snapshot_seqno": -1}
Nov 22 09:23:12 compute-0 nova_compute[253661]: 2025-11-22 09:23:12.549 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6028 keys, 8063667 bytes, temperature: kUnknown
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392663740, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8063667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8024258, "index_size": 23235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152580, "raw_average_key_size": 25, "raw_value_size": 7916956, "raw_average_value_size": 1313, "num_data_blocks": 940, "num_entries": 6028, "num_filter_entries": 6028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803392, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.664733) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8063667 bytes
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.762389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.2 rd, 63.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(28.3) write-amplify(12.0) OK, records in: 6514, records dropped: 486 output_compression: NoCompression
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.762440) EVENT_LOG_v1 {"time_micros": 1763803392762423, "job": 44, "event": "compaction_finished", "compaction_time_micros": 126324, "compaction_time_cpu_micros": 25168, "output_level": 6, "num_output_files": 1, "total_output_size": 8063667, "num_input_records": 6514, "num_output_records": 6028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392762985, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803392764603, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.538037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.764739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.764748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.764750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.764752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:23:12.764754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:23:12 compute-0 ovn_controller[152872]: 2025-11-22T09:23:12Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:23:12 compute-0 ovn_controller[152872]: 2025-11-22T09:23:12Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:27:be 10.100.0.10
Nov 22 09:23:13 compute-0 ceph-mon[75021]: pgmap v1770: 305 pgs: 305 active+clean; 316 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 71 op/s
Nov 22 09:23:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1030217966' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:23:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1030217966' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.345 253665 DEBUG nova.objects.instance [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lazy-loading 'migration_context' on Instance uuid 93493d5d-7044-4597-9113-3231f49f8263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.358 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.358 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Ensure instance console log exists: /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.359 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.359 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.359 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 297 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 150 op/s
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.510 253665 INFO nova.virt.libvirt.driver [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Deleting instance files /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0_del
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.512 253665 INFO nova.virt.libvirt.driver [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Deletion of /var/lib/nova/instances/d4350763-7fea-40fa-ade2-5aada492b3c0_del complete
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.565 253665 INFO nova.compute.manager [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Took 7.52 seconds to destroy the instance on the hypervisor.
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.565 253665 DEBUG oslo.service.loopingcall [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.566 253665 DEBUG nova.compute.manager [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:23:13 compute-0 nova_compute[253661]: 2025-11-22 09:23:13.566 253665 DEBUG nova.network.neutron [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.289 253665 DEBUG nova.network.neutron [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Updating instance_info_cache with network_info: [{"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.311 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Releasing lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.312 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Instance network_info: |[{"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.313 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.313 253665 DEBUG nova.network.neutron [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Refreshing network info cache for port 7f696d3e-337d-48cb-bb42-5caf1246db05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.316 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Start _get_guest_xml network_info=[{"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.321 253665 WARNING nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.329 253665 DEBUG nova.virt.libvirt.host [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.330 253665 DEBUG nova.virt.libvirt.host [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.335 253665 DEBUG nova.virt.libvirt.host [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.336 253665 DEBUG nova.virt.libvirt.host [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.337 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.337 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.338 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.338 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.339 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.339 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.339 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.340 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.340 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.340 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.341 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.341 253665 DEBUG nova.virt.hardware [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.344 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.602 253665 DEBUG nova.network.neutron [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.617 253665 INFO nova.compute.manager [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Took 1.05 seconds to deallocate network for instance.
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.667 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.668 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.791 253665 DEBUG oslo_concurrency.processutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166746481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.837 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.861 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:14 compute-0 nova_compute[253661]: 2025-11-22 09:23:14.865 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:15 compute-0 ceph-mon[75021]: pgmap v1771: 305 pgs: 305 active+clean; 297 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 150 op/s
Nov 22 09:23:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3166746481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506530565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.271 253665 DEBUG oslo_concurrency.processutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.279 253665 DEBUG nova.compute.provider_tree [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.294 253665 DEBUG nova.scheduler.client.report [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448959482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.357 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.359 253665 DEBUG nova.virt.libvirt.vif [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1900692880',display_name='tempest-ServerMetadataNegativeTestJSON-server-1900692880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1900692880',id=76,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ec2330389984b7b9d32e9c941ed8f06',ramdisk_id='',reservation_id='r-wnd0n42y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-2076041723',owner_user_name='tempest-ServerMetadataNegativeTestJSON-2076041723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:10Z,user_data=None,user_id='6bc178aa031d46cf982ad58e57d0b59c',uuid=93493d5d-7044-4597-9113-3231f49f8263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.359 253665 DEBUG nova.network.os_vif_util [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converting VIF {"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.360 253665 DEBUG nova.network.os_vif_util [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.361 253665 DEBUG nova.objects.instance [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93493d5d-7044-4597-9113-3231f49f8263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.374 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.379 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <uuid>93493d5d-7044-4597-9113-3231f49f8263</uuid>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <name>instance-0000004c</name>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1900692880</nova:name>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:23:14</nova:creationTime>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:user uuid="6bc178aa031d46cf982ad58e57d0b59c">tempest-ServerMetadataNegativeTestJSON-2076041723-project-member</nova:user>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:project uuid="2ec2330389984b7b9d32e9c941ed8f06">tempest-ServerMetadataNegativeTestJSON-2076041723</nova:project>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <nova:port uuid="7f696d3e-337d-48cb-bb42-5caf1246db05">
Nov 22 09:23:15 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <system>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="serial">93493d5d-7044-4597-9113-3231f49f8263</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="uuid">93493d5d-7044-4597-9113-3231f49f8263</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </system>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <os>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </os>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <features>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </features>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/93493d5d-7044-4597-9113-3231f49f8263_disk">
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/93493d5d-7044-4597-9113-3231f49f8263_disk.config">
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c7:e2:2a"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <target dev="tap7f696d3e-33"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/console.log" append="off"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <video>
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </video>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:23:15 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:23:15 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:23:15 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:23:15 compute-0 nova_compute[253661]: </domain>
Nov 22 09:23:15 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.380 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Preparing to wait for external event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.381 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.381 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.381 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.382 253665 DEBUG nova.virt.libvirt.vif [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1900692880',display_name='tempest-ServerMetadataNegativeTestJSON-server-1900692880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1900692880',id=76,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ec2330389984b7b9d32e9c941ed8f06',ramdisk_id='',reservation_id='r-wnd0n42y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-2076041723',owner_user_name='tempest-ServerMetadataNegativeTestJSON-2076041723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:10Z,user_data=None,user_id='6bc178aa031d46cf982ad58e57d0b59c',uuid=93493d5d-7044-4597-9113-3231f49f8263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.382 253665 DEBUG nova.network.os_vif_util [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converting VIF {"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.383 253665 DEBUG nova.network.os_vif_util [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.383 253665 DEBUG os_vif [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.385 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f696d3e-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f696d3e-33, col_values=(('external_ids', {'iface-id': '7f696d3e-337d-48cb-bb42-5caf1246db05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:e2:2a', 'vm-uuid': '93493d5d-7044-4597-9113-3231f49f8263'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:15 compute-0 NetworkManager[48920]: <info>  [1763803395.3927] manager: (tap7f696d3e-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.401 253665 INFO os_vif [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33')
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.436 253665 DEBUG nova.compute.manager [req-d92f2953-1ec3-4a3d-bb1b-3702cfeb3793 req-458987db-875a-4c0f-88ac-d46dfac5ad9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Received event network-vif-deleted-b4377651-ae31-406d-9f8b-b7b241d511df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 315 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 799 KiB/s rd, 3.7 MiB/s wr, 142 op/s
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.553 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.554 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.554 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] No VIF found with MAC fa:16:3e:c7:e2:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.555 253665 INFO nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Using config drive
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.581 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.643 253665 INFO nova.scheduler.client.report [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Deleted allocations for instance d4350763-7fea-40fa-ade2-5aada492b3c0
Nov 22 09:23:15 compute-0 nova_compute[253661]: 2025-11-22 09:23:15.701 253665 DEBUG oslo_concurrency.lockutils [None req-55630d39-d56d-4b92-91a9-3ab11552d6d5 a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "d4350763-7fea-40fa-ade2-5aada492b3c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.068 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.068 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.069 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.069 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.069 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.070 253665 INFO nova.compute.manager [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Terminating instance
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.072 253665 DEBUG nova.compute.manager [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:23:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1506530565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2448959482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.276 253665 INFO nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Creating config drive at /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.284 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjifej3jw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:16 compute-0 kernel: tap3862e425-07 (unregistering): left promiscuous mode
Nov 22 09:23:16 compute-0 NetworkManager[48920]: <info>  [1763803396.3326] device (tap3862e425-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:23:16 compute-0 ovn_controller[152872]: 2025-11-22T09:23:16Z|00766|binding|INFO|Releasing lport 3862e425-07d8-49da-b68e-63d3381670f7 from this chassis (sb_readonly=0)
Nov 22 09:23:16 compute-0 ovn_controller[152872]: 2025-11-22T09:23:16Z|00767|binding|INFO|Setting lport 3862e425-07d8-49da-b68e-63d3381670f7 down in Southbound
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 ovn_controller[152872]: 2025-11-22T09:23:16Z|00768|binding|INFO|Removing iface tap3862e425-07 ovn-installed in OVS
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.355 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:70:49 10.100.0.8'], port_security=['fa:16:3e:e4:70:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '41d22150-10c5-4aac-84a2-bebea895286e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3862e425-07d8-49da-b68e-63d3381670f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.357 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3862e425-07d8-49da-b68e-63d3381670f7 in datapath 6f77386c-0230-4d59-9773-818360efc15e unbound from our chassis
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.360 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f77386c-0230-4d59-9773-818360efc15e
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.384 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[768a0e46-711c-4608-9de4-257bb4516020]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 22 09:23:16 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Consumed 15.647s CPU time.
Nov 22 09:23:16 compute-0 systemd-machined[215941]: Machine qemu-88-instance-00000049 terminated.
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.420 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78d1e4a6-47a8-4c6c-a115-86f05fb08351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.424 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb13de-c8df-4266-a520-dc332482fc85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.454 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjifej3jw" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.455 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f30f3ad4-666d-4983-98c5-30494b0c02ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.481 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bc92b7-7136-4ae0-9683-e6e35386823f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f77386c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:7d:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619261, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331985, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.487 253665 DEBUG nova.storage.rbd_utils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] rbd image 93493d5d-7044-4597-9113-3231f49f8263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.492 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config 93493d5d-7044-4597-9113-3231f49f8263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a70c68c-7f51-499a-95bd-f8767a087ac6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619278, 'tstamp': 619278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332005, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6f77386c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619281, 'tstamp': 619281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332005, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.520 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f77386c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.521 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.521 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f77386c-00, col_values=(('external_ids', {'iface-id': '22c0239c-bdb2-4af4-a6a4-7ca3983ded8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:16.522 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.536 253665 DEBUG nova.compute.manager [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-unplugged-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.537 253665 DEBUG oslo_concurrency.lockutils [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.537 253665 DEBUG oslo_concurrency.lockutils [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.538 253665 DEBUG oslo_concurrency.lockutils [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.538 253665 DEBUG nova.compute.manager [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] No waiting events found dispatching network-vif-unplugged-3862e425-07d8-49da-b68e-63d3381670f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.538 253665 DEBUG nova.compute.manager [req-dfd4e3df-ca0c-4798-9b15-f42624ed079b req-1098d0ef-6066-4ca8-b8aa-71ad0e628474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-unplugged-3862e425-07d8-49da-b68e-63d3381670f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.543 253665 INFO nova.virt.libvirt.driver [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Instance destroyed successfully.
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.543 253665 DEBUG nova.objects.instance [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'resources' on Instance uuid 41d22150-10c5-4aac-84a2-bebea895286e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.558 253665 DEBUG nova.virt.libvirt.vif [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-238841664',display_name='tempest-ListServerFiltersTestJSON-instance-238841664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-238841664',id=73,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-h09ea14g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:24Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=41d22150-10c5-4aac-84a2-bebea895286e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.559 253665 DEBUG nova.network.os_vif_util [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "3862e425-07d8-49da-b68e-63d3381670f7", "address": "fa:16:3e:e4:70:49", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3862e425-07", "ovs_interfaceid": "3862e425-07d8-49da-b68e-63d3381670f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.560 253665 DEBUG nova.network.os_vif_util [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.560 253665 DEBUG os_vif [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.563 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3862e425-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.574 253665 INFO os_vif [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:70:49,bridge_name='br-int',has_traffic_filtering=True,id=3862e425-07d8-49da-b68e-63d3381670f7,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3862e425-07')
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.597 253665 DEBUG nova.network.neutron [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Updated VIF entry in instance network info cache for port 7f696d3e-337d-48cb-bb42-5caf1246db05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.598 253665 DEBUG nova.network.neutron [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Updating instance_info_cache with network_info: [{"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:16 compute-0 nova_compute[253661]: 2025-11-22 09:23:16.609 253665 DEBUG oslo_concurrency.lockutils [req-aabb0931-f716-4052-acfe-685e2de46b06 req-fe123370-c526-4e2c-b7e1-f8d6a75ae317 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-93493d5d-7044-4597-9113-3231f49f8263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:17 compute-0 ceph-mon[75021]: pgmap v1772: 305 pgs: 305 active+clean; 315 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 799 KiB/s rd, 3.7 MiB/s wr, 142 op/s
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.401 253665 DEBUG oslo_concurrency.processutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config 93493d5d-7044-4597-9113-3231f49f8263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.401 253665 INFO nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Deleting local config drive /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263/disk.config because it was imported into RBD.
Nov 22 09:23:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:17 compute-0 kernel: tap7f696d3e-33: entered promiscuous mode
Nov 22 09:23:17 compute-0 systemd-udevd[331977]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.4883] manager: (tap7f696d3e-33): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Nov 22 09:23:17 compute-0 ovn_controller[152872]: 2025-11-22T09:23:17Z|00769|binding|INFO|Claiming lport 7f696d3e-337d-48cb-bb42-5caf1246db05 for this chassis.
Nov 22 09:23:17 compute-0 ovn_controller[152872]: 2025-11-22T09:23:17Z|00770|binding|INFO|7f696d3e-337d-48cb-bb42-5caf1246db05: Claiming fa:16:3e:c7:e2:2a 10.100.0.5
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.497 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:e2:2a 10.100.0.5'], port_security=['fa:16:3e:c7:e2:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '93493d5d-7044-4597-9113-3231f49f8263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ec2330389984b7b9d32e9c941ed8f06', 'neutron:revision_number': '2', 'neutron:security_group_ids': '014d5232-1e8f-44d0-a9aa-a18f90f4f90c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bed22a6-a448-40f3-ad18-c18d9a1d752a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7f696d3e-337d-48cb-bb42-5caf1246db05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.498 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7f696d3e-337d-48cb-bb42-5caf1246db05 in datapath b5450041-8eb3-45e2-bc64-a16e319f0f87 bound to our chassis
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.499 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5450041-8eb3-45e2-bc64-a16e319f0f87
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.5019] device (tap7f696d3e-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.5040] device (tap7f696d3e-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:23:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 326 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 880 KiB/s rd, 3.9 MiB/s wr, 159 op/s
Nov 22 09:23:17 compute-0 ovn_controller[152872]: 2025-11-22T09:23:17Z|00771|binding|INFO|Setting lport 7f696d3e-337d-48cb-bb42-5caf1246db05 ovn-installed in OVS
Nov 22 09:23:17 compute-0 ovn_controller[152872]: 2025-11-22T09:23:17Z|00772|binding|INFO|Setting lport 7f696d3e-337d-48cb-bb42-5caf1246db05 up in Southbound
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.517 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f49e9e1-fe68-451a-bed7-56776ff85a6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.518 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5450041-81 in ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.521 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5450041-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa0f46c-1218-484e-a41c-9125f0210ea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.523 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f97b994a-beeb-4d5e-ae5a-eafd5c94c379]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.536 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4c52f770-682e-4daf-8a1e-7b0b7bdee1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 systemd-machined[215941]: New machine qemu-92-instance-0000004c.
Nov 22 09:23:17 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004c.
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d53751-b665-4df3-b5fd-5dde81d99cdf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.591 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d9953432-8080-407b-8104-f6d64292fbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.6004] manager: (tapb5450041-80): new Veth device (/org/freedesktop/NetworkManager/Devices/336)
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58ebaa05-1c1f-4a19-9ca0-37ee86d5c015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.636 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3b0610-80c8-4b08-b635-81e837bc371c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[89f779da-845e-47a0-aad6-1042cbcdbe43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.6717] device (tapb5450041-80): carrier: link connected
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.679 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18edfffd-02a7-4256-b8e2-2461c55d0a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.702 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0a3e33-4b69-42f3-89a5-1dc28ca3625c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5450041-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:3c:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624840, 'reachable_time': 32135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332102, 'error': None, 'target': 'ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c4a5c9-f1d9-43de-8e83-8e6bd293e297]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:3c79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624840, 'tstamp': 624840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332103, 'error': None, 'target': 'ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.749 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31d9513c-d2c4-4c46-9158-5a60a8970be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5450041-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:3c:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624840, 'reachable_time': 32135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332104, 'error': None, 'target': 'ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.782 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ace17818-0727-4681-8351-4c1af7ebf5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.852 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5c503b-f18c-40b1-8cd8-36192691010d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.854 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5450041-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.854 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.854 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5450041-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 NetworkManager[48920]: <info>  [1763803397.8570] manager: (tapb5450041-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Nov 22 09:23:17 compute-0 kernel: tapb5450041-80: entered promiscuous mode
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.859 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5450041-80, col_values=(('external_ids', {'iface-id': '5e832592-fa77-491b-b987-7873f66ebbbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 ovn_controller[152872]: 2025-11-22T09:23:17Z|00773|binding|INFO|Releasing lport 5e832592-fa77-491b-b987-7873f66ebbbe from this chassis (sb_readonly=0)
Nov 22 09:23:17 compute-0 nova_compute[253661]: 2025-11-22 09:23:17.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.877 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5450041-8eb3-45e2-bc64-a16e319f0f87.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5450041-8eb3-45e2-bc64-a16e319f0f87.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.878 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7bae12d8-dfdb-473f-9e66-70076cd92376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.879 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b5450041-8eb3-45e2-bc64-a16e319f0f87
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b5450041-8eb3-45e2-bc64-a16e319f0f87.pid.haproxy
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b5450041-8eb3-45e2-bc64-a16e319f0f87
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:23:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:17.880 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'env', 'PROCESS_TAG=haproxy-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5450041-8eb3-45e2-bc64-a16e319f0f87.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.300 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803398.3004448, 93493d5d-7044-4597-9113-3231f49f8263 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.301 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] VM Started (Lifecycle Event)
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.317 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.321 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803398.301469, 93493d5d-7044-4597-9113-3231f49f8263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.322 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] VM Paused (Lifecycle Event)
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.345 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.348 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.368 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:18 compute-0 podman[332176]: 2025-11-22 09:23:18.276938372 +0000 UTC m=+0.026642425 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:23:18 compute-0 podman[332176]: 2025-11-22 09:23:18.458602144 +0000 UTC m=+0.208306197 container create 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:23:18 compute-0 systemd[1]: Started libpod-conmon-914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64.scope.
Nov 22 09:23:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c81b1100a2b6460d1d2296909d461cbfecd9336ecab92d48c42115cc9dbea1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:18 compute-0 podman[332176]: 2025-11-22 09:23:18.600261919 +0000 UTC m=+0.349966072 container init 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:23:18 compute-0 podman[332176]: 2025-11-22 09:23:18.605988947 +0000 UTC m=+0.355693030 container start 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:23:18 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [NOTICE]   (332197) : New worker (332199) forked
Nov 22 09:23:18 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [NOTICE]   (332197) : Loading success.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.698 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.699 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "41d22150-10c5-4aac-84a2-bebea895286e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.699 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.699 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.699 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] No waiting events found dispatching network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.699 253665 WARNING nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received unexpected event network-vif-plugged-3862e425-07d8-49da-b68e-63d3381670f7 for instance with vm_state active and task_state deleting.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Processing event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.700 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.701 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.701 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.701 253665 DEBUG oslo_concurrency.lockutils [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.701 253665 DEBUG nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] No waiting events found dispatching network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.701 253665 WARNING nova.compute.manager [req-5ce13a45-b9e2-480f-a84b-53dcf5298c0f req-e6f7b835-48d3-437a-b8cf-1748b3c16cd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received unexpected event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 for instance with vm_state building and task_state spawning.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.702 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.705 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803398.7057307, 93493d5d-7044-4597-9113-3231f49f8263 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.706 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] VM Resumed (Lifecycle Event)
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.708 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.711 253665 INFO nova.virt.libvirt.driver [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Instance spawned successfully.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.711 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.728 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.735 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.739 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.740 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.740 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.741 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.741 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.741 253665 DEBUG nova.virt.libvirt.driver [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.773 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.821 253665 INFO nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Took 8.37 seconds to spawn the instance on the hypervisor.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.822 253665 DEBUG nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.881 253665 INFO nova.compute.manager [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Took 9.36 seconds to build instance.
Nov 22 09:23:18 compute-0 nova_compute[253661]: 2025-11-22 09:23:18.897 253665 DEBUG oslo_concurrency.lockutils [None req-427c1b73-631e-45a3-ad1f-94c9f6e2746f 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:19 compute-0 ceph-mon[75021]: pgmap v1773: 305 pgs: 305 active+clean; 326 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 880 KiB/s rd, 3.9 MiB/s wr, 159 op/s
Nov 22 09:23:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 308 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 882 KiB/s rd, 3.8 MiB/s wr, 178 op/s
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.149 253665 INFO nova.virt.libvirt.driver [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Deleting instance files /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e_del
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.150 253665 INFO nova.virt.libvirt.driver [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Deletion of /var/lib/nova/instances/41d22150-10c5-4aac-84a2-bebea895286e_del complete
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.203 253665 INFO nova.compute.manager [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Took 4.13 seconds to destroy the instance on the hypervisor.
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.204 253665 DEBUG oslo.service.loopingcall [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.204 253665 DEBUG nova.compute.manager [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.204 253665 DEBUG nova.network.neutron [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:20 compute-0 nova_compute[253661]: 2025-11-22 09:23:20.424 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:21 compute-0 ceph-mon[75021]: pgmap v1774: 305 pgs: 305 active+clean; 308 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 882 KiB/s rd, 3.8 MiB/s wr, 178 op/s
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.496 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803386.4956014, d4350763-7fea-40fa-ade2-5aada492b3c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.497 253665 INFO nova.compute.manager [-] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] VM Stopped (Lifecycle Event)
Nov 22 09:23:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 308 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.515 253665 DEBUG nova.compute.manager [None req-42d9ed0a-4e01-47bc-b91f-48fce24dc23d - - - - - -] [instance: d4350763-7fea-40fa-ade2-5aada492b3c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.670 253665 DEBUG nova.network.neutron [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.688 253665 INFO nova.compute.manager [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Took 1.48 seconds to deallocate network for instance.
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.732 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.732 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.742 253665 DEBUG nova.compute.manager [req-64686c34-0665-4767-9a89-cd400d0fd7e3 req-4b6b686d-04da-4bd0-af00-9a8d22e98283 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Received event network-vif-deleted-3862e425-07d8-49da-b68e-63d3381670f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:21 compute-0 nova_compute[253661]: 2025-11-22 09:23:21.834 253665 DEBUG oslo_concurrency.processutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/830924457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.366 253665 DEBUG oslo_concurrency.processutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.374 253665 DEBUG nova.compute.provider_tree [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.390 253665 DEBUG nova.scheduler.client.report [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.414 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/830924457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.444 253665 INFO nova.scheduler.client.report [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Deleted allocations for instance 41d22150-10c5-4aac-84a2-bebea895286e
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.504 253665 DEBUG oslo_concurrency.lockutils [None req-34d66f16-8cd2-455b-8090-6692fb375afb a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "41d22150-10c5-4aac-84a2-bebea895286e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.850 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.851 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.851 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.852 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.852 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.853 253665 INFO nova.compute.manager [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Terminating instance
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.854 253665 DEBUG nova.compute.manager [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:23:22 compute-0 kernel: tapdb173c1b-18 (unregistering): left promiscuous mode
Nov 22 09:23:22 compute-0 NetworkManager[48920]: <info>  [1763803402.9077] device (tapdb173c1b-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.922 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:22 compute-0 ovn_controller[152872]: 2025-11-22T09:23:22Z|00774|binding|INFO|Releasing lport db173c1b-18d4-496e-96f0-d12c680c680e from this chassis (sb_readonly=0)
Nov 22 09:23:22 compute-0 ovn_controller[152872]: 2025-11-22T09:23:22Z|00775|binding|INFO|Setting lport db173c1b-18d4-496e-96f0-d12c680c680e down in Southbound
Nov 22 09:23:22 compute-0 ovn_controller[152872]: 2025-11-22T09:23:22Z|00776|binding|INFO|Removing iface tapdb173c1b-18 ovn-installed in OVS
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:22.935 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:27:be 10.100.0.10'], port_security=['fa:16:3e:25:27:be 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b5f49b05-1b70-4479-98d9-83b995037a41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f77386c-0230-4d59-9773-818360efc15e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5a1bb02c0c047be92aba24831aef1a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0031069a-4f3d-4896-9271-51ffba2d6dff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80bb3efb-942b-4e13-a368-b443d149a62b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=db173c1b-18d4-496e-96f0-d12c680c680e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:22.936 162862 INFO neutron.agent.ovn.metadata.agent [-] Port db173c1b-18d4-496e-96f0-d12c680c680e in datapath 6f77386c-0230-4d59-9773-818360efc15e unbound from our chassis
Nov 22 09:23:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:22.937 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f77386c-0230-4d59-9773-818360efc15e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:23:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:22.938 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a6da9f-2f6f-4d19-a60a-6eb3eb067f40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:22.939 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f77386c-0230-4d59-9773-818360efc15e namespace which is not needed anymore
Nov 22 09:23:22 compute-0 nova_compute[253661]: 2025-11-22 09:23:22.951 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:22 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 22 09:23:22 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000048.scope: Consumed 14.887s CPU time.
Nov 22 09:23:22 compute-0 systemd-machined[215941]: Machine qemu-91-instance-00000048 terminated.
Nov 22 09:23:23 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [NOTICE]   (329513) : haproxy version is 2.8.14-c23fe91
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.096 253665 INFO nova.virt.libvirt.driver [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Instance destroyed successfully.
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.097 253665 DEBUG nova.objects.instance [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lazy-loading 'resources' on Instance uuid b5f49b05-1b70-4479-98d9-83b995037a41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:23 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [NOTICE]   (329513) : path to executable is /usr/sbin/haproxy
Nov 22 09:23:23 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [WARNING]  (329513) : Exiting Master process...
Nov 22 09:23:23 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [ALERT]    (329513) : Current worker (329515) exited with code 143 (Terminated)
Nov 22 09:23:23 compute-0 neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e[329509]: [WARNING]  (329513) : All workers exited. Exiting... (0)
Nov 22 09:23:23 compute-0 systemd[1]: libpod-44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e.scope: Deactivated successfully.
Nov 22 09:23:23 compute-0 podman[332252]: 2025-11-22 09:23:23.108834668 +0000 UTC m=+0.056543707 container died 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.110 253665 DEBUG nova.virt.libvirt.vif [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1090017716',display_name='tempest-ListServerFiltersTestJSON-instance-1090017716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1090017716',id=72,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5a1bb02c0c047be92aba24831aef1a5',ramdisk_id='',reservation_id='r-n7lha9f5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1747318421',owner_user_name='tempest-ListServerFiltersTestJSON-1747318421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:56Z,user_data=None,user_id='a611caa9b8c247f083ce8b67780fbc01',uuid=b5f49b05-1b70-4479-98d9-83b995037a41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.111 253665 DEBUG nova.network.os_vif_util [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converting VIF {"id": "db173c1b-18d4-496e-96f0-d12c680c680e", "address": "fa:16:3e:25:27:be", "network": {"id": "6f77386c-0230-4d59-9773-818360efc15e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-457035479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5a1bb02c0c047be92aba24831aef1a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb173c1b-18", "ovs_interfaceid": "db173c1b-18d4-496e-96f0-d12c680c680e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.112 253665 DEBUG nova.network.os_vif_util [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.112 253665 DEBUG os_vif [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.117 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb173c1b-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.172 253665 INFO os_vif [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:27:be,bridge_name='br-int',has_traffic_filtering=True,id=db173c1b-18d4-496e-96f0-d12c680c680e,network=Network(6f77386c-0230-4d59-9773-818360efc15e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb173c1b-18')
Nov 22 09:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-33337ae79b275de281ee4767f0a431fd141e98668de09fe92020957d853d264e-merged.mount: Deactivated successfully.
Nov 22 09:23:23 compute-0 podman[332252]: 2025-11-22 09:23:23.199621673 +0000 UTC m=+0.147330712 container cleanup 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:23:23 compute-0 systemd[1]: libpod-conmon-44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e.scope: Deactivated successfully.
Nov 22 09:23:23 compute-0 podman[332303]: 2025-11-22 09:23:23.287477848 +0000 UTC m=+0.054777266 container remove 44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.300 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c537b32-99e0-421c-b391-a83592eae3bd]: (4, ('Sat Nov 22 09:23:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e (44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e)\n44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e\nSat Nov 22 09:23:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f77386c-0230-4d59-9773-818360efc15e (44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e)\n44f61f59eb200175e4fe26fe9cf0dd796d2fd6b665a34f1b2b9c6a39e61aad0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.304 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad0f923-641e-48c6-a0ce-7c617710a28c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.305 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f77386c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:23 compute-0 kernel: tap6f77386c-00: left promiscuous mode
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0ea418-b515-4ef2-b9bd-2f4357601740]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.335 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[61b52f38-b578-4a2d-934b-828e903c56c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d650d545-defc-4524-9dc4-2e7f851f3908]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[140085dd-b09a-4bf8-906a-c1dddbf6d6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619250, 'reachable_time': 16490, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332321, 'error': None, 'target': 'ovnmeta-6f77386c-0230-4d59-9773-818360efc15e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d6f77386c\x2d0230\x2d4d59\x2d9773\x2d818360efc15e.mount: Deactivated successfully.
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.372 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f77386c-0230-4d59-9773-818360efc15e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:23:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:23.373 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f217d858-7c81-4966-853b-31be85969810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:23 compute-0 ceph-mon[75021]: pgmap v1775: 305 pgs: 305 active+clean; 308 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Nov 22 09:23:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 208 op/s
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.726 253665 INFO nova.virt.libvirt.driver [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Deleting instance files /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41_del
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.727 253665 INFO nova.virt.libvirt.driver [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Deletion of /var/lib/nova/instances/b5f49b05-1b70-4479-98d9-83b995037a41_del complete
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.844 253665 INFO nova.compute.manager [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.845 253665 DEBUG oslo.service.loopingcall [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.846 253665 DEBUG nova.compute.manager [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.846 253665 DEBUG nova.network.neutron [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.955 253665 DEBUG nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.956 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.956 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.956 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.957 253665 DEBUG nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.957 253665 DEBUG nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-unplugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.958 253665 DEBUG nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.958 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.958 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.959 253665 DEBUG oslo_concurrency.lockutils [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.959 253665 DEBUG nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] No waiting events found dispatching network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:23 compute-0 nova_compute[253661]: 2025-11-22 09:23:23.959 253665 WARNING nova.compute.manager [req-12e28b16-f492-4753-b08e-e9ee141d3fe3 req-202b28f1-2913-4086-8464-211f355c8e44 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received unexpected event network-vif-plugged-db173c1b-18d4-496e-96f0-d12c680c680e for instance with vm_state active and task_state deleting.
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.320 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.321 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.321 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.321 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.321 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.322 253665 INFO nova.compute.manager [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Terminating instance
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.323 253665 DEBUG nova.compute.manager [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:23:24 compute-0 kernel: tap7f696d3e-33 (unregistering): left promiscuous mode
Nov 22 09:23:24 compute-0 NetworkManager[48920]: <info>  [1763803404.3782] device (tap7f696d3e-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:23:24 compute-0 ovn_controller[152872]: 2025-11-22T09:23:24Z|00777|binding|INFO|Releasing lport 7f696d3e-337d-48cb-bb42-5caf1246db05 from this chassis (sb_readonly=0)
Nov 22 09:23:24 compute-0 ovn_controller[152872]: 2025-11-22T09:23:24Z|00778|binding|INFO|Setting lport 7f696d3e-337d-48cb-bb42-5caf1246db05 down in Southbound
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 ovn_controller[152872]: 2025-11-22T09:23:24Z|00779|binding|INFO|Removing iface tap7f696d3e-33 ovn-installed in OVS
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.413 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:e2:2a 10.100.0.5'], port_security=['fa:16:3e:c7:e2:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '93493d5d-7044-4597-9113-3231f49f8263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ec2330389984b7b9d32e9c941ed8f06', 'neutron:revision_number': '4', 'neutron:security_group_ids': '014d5232-1e8f-44d0-a9aa-a18f90f4f90c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bed22a6-a448-40f3-ad18-c18d9a1d752a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7f696d3e-337d-48cb-bb42-5caf1246db05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.414 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7f696d3e-337d-48cb-bb42-5caf1246db05 in datapath b5450041-8eb3-45e2-bc64-a16e319f0f87 unbound from our chassis
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.416 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5450041-8eb3-45e2-bc64-a16e319f0f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.417 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2c025c-5e44-4048-9b70-518b5fe7ffcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.418 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87 namespace which is not needed anymore
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.426 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 22 09:23:24 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Consumed 6.182s CPU time.
Nov 22 09:23:24 compute-0 systemd-machined[215941]: Machine qemu-92-instance-0000004c terminated.
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [NOTICE]   (332197) : haproxy version is 2.8.14-c23fe91
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [NOTICE]   (332197) : path to executable is /usr/sbin/haproxy
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [WARNING]  (332197) : Exiting Master process...
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [WARNING]  (332197) : Exiting Master process...
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [ALERT]    (332197) : Current worker (332199) exited with code 143 (Terminated)
Nov 22 09:23:24 compute-0 neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87[332193]: [WARNING]  (332197) : All workers exited. Exiting... (0)
Nov 22 09:23:24 compute-0 systemd[1]: libpod-914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64.scope: Deactivated successfully.
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.558 253665 INFO nova.virt.libvirt.driver [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Instance destroyed successfully.
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.558 253665 DEBUG nova.objects.instance [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lazy-loading 'resources' on Instance uuid 93493d5d-7044-4597-9113-3231f49f8263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:24 compute-0 podman[332345]: 2025-11-22 09:23:24.56237403 +0000 UTC m=+0.051933738 container died 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.571 253665 DEBUG nova.virt.libvirt.vif [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:23:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1900692880',display_name='tempest-ServerMetadataNegativeTestJSON-server-1900692880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1900692880',id=76,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:23:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ec2330389984b7b9d32e9c941ed8f06',ramdisk_id='',reservation_id='r-wnd0n42y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-2076041723',owner_user_name='tempest-ServerMetadataNegativeTestJSON-2076041723-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:23:18Z,user_data=None,user_id='6bc178aa031d46cf982ad58e57d0b59c',uuid=93493d5d-7044-4597-9113-3231f49f8263,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.571 253665 DEBUG nova.network.os_vif_util [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converting VIF {"id": "7f696d3e-337d-48cb-bb42-5caf1246db05", "address": "fa:16:3e:c7:e2:2a", "network": {"id": "b5450041-8eb3-45e2-bc64-a16e319f0f87", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-905391586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ec2330389984b7b9d32e9c941ed8f06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f696d3e-33", "ovs_interfaceid": "7f696d3e-337d-48cb-bb42-5caf1246db05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.572 253665 DEBUG nova.network.os_vif_util [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.572 253665 DEBUG os_vif [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.575 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f696d3e-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.581 253665 INFO os_vif [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:e2:2a,bridge_name='br-int',has_traffic_filtering=True,id=7f696d3e-337d-48cb-bb42-5caf1246db05,network=Network(b5450041-8eb3-45e2-bc64-a16e319f0f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f696d3e-33')
Nov 22 09:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64-userdata-shm.mount: Deactivated successfully.
Nov 22 09:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c81b1100a2b6460d1d2296909d461cbfecd9336ecab92d48c42115cc9dbea1-merged.mount: Deactivated successfully.
Nov 22 09:23:24 compute-0 podman[332345]: 2025-11-22 09:23:24.610631576 +0000 UTC m=+0.100191274 container cleanup 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:23:24 compute-0 systemd[1]: libpod-conmon-914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64.scope: Deactivated successfully.
Nov 22 09:23:24 compute-0 podman[332402]: 2025-11-22 09:23:24.685191478 +0000 UTC m=+0.050414019 container remove 914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.692 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acc57b5d-938b-43ed-89e5-9db794d80fc0]: (4, ('Sat Nov 22 09:23:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87 (914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64)\n914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64\nSat Nov 22 09:23:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87 (914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64)\n914779a616f6ebf39560be563a6478d58627698cc9d70b4dccf687254355bd64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.694 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48e0d152-0910-46e6-8ac0-e1744bbbb259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5450041-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 kernel: tapb5450041-80: left promiscuous mode
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.718 253665 DEBUG nova.compute.manager [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-unplugged-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.719 253665 DEBUG oslo_concurrency.lockutils [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.719 253665 DEBUG oslo_concurrency.lockutils [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.719 253665 DEBUG oslo_concurrency.lockutils [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.719 253665 DEBUG nova.compute.manager [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] No waiting events found dispatching network-vif-unplugged-7f696d3e-337d-48cb-bb42-5caf1246db05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.720 253665 DEBUG nova.compute.manager [req-d068fc34-4d06-4c88-9284-cd7c28c2e9ae req-f2f2d23b-c586-4b80-9ccb-475df373d6b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-unplugged-7f696d3e-337d-48cb-bb42-5caf1246db05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.720 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0a8012-5926-48b5-ae4a-d2afad2b314b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.736 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[caf735dc-7cfa-4eb7-b7c4-75c2709df13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.739 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54e4a5ea-32ba-4413-abb0-a2146233564e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07fb1c7d-471e-4462-bd38-1cd0a8f5ea3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624832, 'reachable_time': 34640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332420, 'error': None, 'target': 'ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.763 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5450041-8eb3-45e2-bc64-a16e319f0f87 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:23:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:24.763 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8029fa55-ac82-4aa9-bab7-8b551295c5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:24 compute-0 systemd[1]: run-netns-ovnmeta\x2db5450041\x2d8eb3\x2d45e2\x2dbc64\x2da16e319f0f87.mount: Deactivated successfully.
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.971 253665 DEBUG nova.network.neutron [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:24 compute-0 nova_compute[253661]: 2025-11-22 09:23:24.988 253665 INFO nova.compute.manager [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Took 1.14 seconds to deallocate network for instance.
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.027 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.028 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.097 253665 INFO nova.virt.libvirt.driver [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Deleting instance files /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263_del
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.098 253665 INFO nova.virt.libvirt.driver [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Deletion of /var/lib/nova/instances/93493d5d-7044-4597-9113-3231f49f8263_del complete
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.146 253665 DEBUG oslo_concurrency.processutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.199 253665 INFO nova.compute.manager [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.201 253665 DEBUG oslo.service.loopingcall [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.201 253665 DEBUG nova.compute.manager [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.202 253665 DEBUG nova.network.neutron [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 225 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 885 KiB/s wr, 154 op/s
Nov 22 09:23:25 compute-0 ceph-mon[75021]: pgmap v1776: 305 pgs: 305 active+clean; 248 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 208 op/s
Nov 22 09:23:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310730192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.688 253665 DEBUG oslo_concurrency.processutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.695 253665 DEBUG nova.compute.provider_tree [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.710 253665 DEBUG nova.scheduler.client.report [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.731 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.755 253665 DEBUG nova.network.neutron [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.758 253665 INFO nova.scheduler.client.report [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Deleted allocations for instance b5f49b05-1b70-4479-98d9-83b995037a41
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.773 253665 INFO nova.compute.manager [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Took 0.57 seconds to deallocate network for instance.
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.832 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.833 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.836 253665 DEBUG oslo_concurrency.lockutils [None req-6cc3bde6-ad56-4749-952c-758491b3d73e a611caa9b8c247f083ce8b67780fbc01 e5a1bb02c0c047be92aba24831aef1a5 - - default default] Lock "b5f49b05-1b70-4479-98d9-83b995037a41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:25 compute-0 nova_compute[253661]: 2025-11-22 09:23:25.891 253665 DEBUG oslo_concurrency.processutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.180 253665 DEBUG nova.compute.manager [req-65e98a6f-f3aa-4387-8181-0057bc5c4da3 req-dd7964ac-dc12-49c4-993b-1e4a56530495 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Received event network-vif-deleted-db173c1b-18d4-496e-96f0-d12c680c680e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.180 253665 DEBUG nova.compute.manager [req-65e98a6f-f3aa-4387-8181-0057bc5c4da3 req-dd7964ac-dc12-49c4-993b-1e4a56530495 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-deleted-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954166709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.359 253665 DEBUG oslo_concurrency.processutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.366 253665 DEBUG nova.compute.provider_tree [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.383 253665 DEBUG nova.scheduler.client.report [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.403 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.430 253665 INFO nova.scheduler.client.report [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Deleted allocations for instance 93493d5d-7044-4597-9113-3231f49f8263
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.502 253665 DEBUG oslo_concurrency.lockutils [None req-eacba686-9b85-4147-8202-fa64874e3b54 6bc178aa031d46cf982ad58e57d0b59c 2ec2330389984b7b9d32e9c941ed8f06 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:26 compute-0 ceph-mon[75021]: pgmap v1777: 305 pgs: 305 active+clean; 225 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 885 KiB/s wr, 154 op/s
Nov 22 09:23:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1310730192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/954166709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.818 253665 DEBUG nova.compute.manager [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.818 253665 DEBUG oslo_concurrency.lockutils [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "93493d5d-7044-4597-9113-3231f49f8263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.818 253665 DEBUG oslo_concurrency.lockutils [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.819 253665 DEBUG oslo_concurrency.lockutils [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "93493d5d-7044-4597-9113-3231f49f8263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.820 253665 DEBUG nova.compute.manager [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] No waiting events found dispatching network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:26 compute-0 nova_compute[253661]: 2025-11-22 09:23:26.820 253665 WARNING nova.compute.manager [req-c806e921-f6fd-48e6-9fed-3596ddf280de req-a0a5e21f-3553-4970-8ff8-d839ec0b2f9d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Received unexpected event network-vif-plugged-7f696d3e-337d-48cb-bb42-5caf1246db05 for instance with vm_state deleted and task_state None.
Nov 22 09:23:27 compute-0 ovn_controller[152872]: 2025-11-22T09:23:27Z|00780|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:23:27 compute-0 nova_compute[253661]: 2025-11-22 09:23:27.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:27 compute-0 nova_compute[253661]: 2025-11-22 09:23:27.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 178 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 292 KiB/s wr, 145 op/s
Nov 22 09:23:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:27.966 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:27.966 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:27.967 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.423 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.424 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.424 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:23:28 compute-0 nova_compute[253661]: 2025-11-22 09:23:28.424 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:28 compute-0 ceph-mon[75021]: pgmap v1778: 305 pgs: 305 active+clean; 178 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 292 KiB/s wr, 145 op/s
Nov 22 09:23:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 159 op/s
Nov 22 09:23:29 compute-0 nova_compute[253661]: 2025-11-22 09:23:29.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:30 compute-0 nova_compute[253661]: 2025-11-22 09:23:30.267 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:30 compute-0 nova_compute[253661]: 2025-11-22 09:23:30.280 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:30 compute-0 nova_compute[253661]: 2025-11-22 09:23:30.281 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:23:30 compute-0 nova_compute[253661]: 2025-11-22 09:23:30.281 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:30 compute-0 nova_compute[253661]: 2025-11-22 09:23:30.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:30 compute-0 ceph-mon[75021]: pgmap v1779: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 159 op/s
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.260 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.260 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:31 compute-0 podman[332468]: 2025-11-22 09:23:31.36542251 +0000 UTC m=+0.057161422 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:23:31 compute-0 podman[332467]: 2025-11-22 09:23:31.390473576 +0000 UTC m=+0.081374038 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:23:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 135 op/s
Nov 22 09:23:31 compute-0 ovn_controller[152872]: 2025-11-22T09:23:31Z|00781|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.533 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803396.5188098, 41d22150-10c5-4aac-84a2-bebea895286e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.534 253665 INFO nova.compute.manager [-] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] VM Stopped (Lifecycle Event)
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.555 253665 DEBUG nova.compute.manager [None req-abe95734-8036-4e04-bfc1-bc6e7164bea4 - - - - - -] [instance: 41d22150-10c5-4aac-84a2-bebea895286e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486200017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.765 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/486200017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.832 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:23:31 compute-0 nova_compute[253661]: 2025-11-22 09:23:31.833 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.045 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3766MB free_disk=59.94272232055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.047 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.047 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.241 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9096405c-eb66-4d27-abbb-e709b767afea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.242 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.243 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.408 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435130370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.878 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.885 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.899 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 22 09:23:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.922 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:23:32 compute-0 nova_compute[253661]: 2025-11-22 09:23:32.923 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:32 compute-0 ceph-mon[75021]: pgmap v1780: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 135 op/s
Nov 22 09:23:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2435130370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 5.4 KiB/s wr, 92 op/s
Nov 22 09:23:33 compute-0 nova_compute[253661]: 2025-11-22 09:23:33.924 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:33 compute-0 nova_compute[253661]: 2025-11-22 09:23:33.924 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:33 compute-0 nova_compute[253661]: 2025-11-22 09:23:33.925 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:33 compute-0 nova_compute[253661]: 2025-11-22 09:23:33.925 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:23:34 compute-0 ceph-mon[75021]: osdmap e241: 3 total, 3 up, 3 in
Nov 22 09:23:34 compute-0 nova_compute[253661]: 2025-11-22 09:23:34.582 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:34 compute-0 nova_compute[253661]: 2025-11-22 09:23:34.771 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:35 compute-0 ceph-mon[75021]: pgmap v1782: 305 pgs: 305 active+clean; 121 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 5.4 KiB/s wr, 92 op/s
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.330 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.394 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.394 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.400 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.400 253665 INFO nova.compute.claims [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.512 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 121 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 4.8 KiB/s wr, 66 op/s
Nov 22 09:23:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527727421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:35 compute-0 nova_compute[253661]: 2025-11-22 09:23:35.997 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.004 253665 DEBUG nova.compute.provider_tree [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.017 253665 DEBUG nova.scheduler.client.report [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.047 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.048 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.101 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.102 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.119 253665 INFO nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.135 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:23:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2527727421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.346 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.347 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.348 253665 INFO nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating image(s)
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.375 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.403 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:36 compute-0 podman[332568]: 2025-11-22 09:23:36.412921488 +0000 UTC m=+0.108096444 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.426 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.429 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.502 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.503 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.504 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.504 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.525 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.529 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:36 compute-0 nova_compute[253661]: 2025-11-22 09:23:36.703 253665 DEBUG nova.policy [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e5709393702478dbf0bd566dc94d7fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b06c711e582499ab500917d85e27e3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:23:37 compute-0 ceph-mon[75021]: pgmap v1783: 305 pgs: 305 active+clean; 121 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 4.8 KiB/s wr, 66 op/s
Nov 22 09:23:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:37 compute-0 nova_compute[253661]: 2025-11-22 09:23:37.478 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Successfully created port: edd81944-578b-4533-9db7-f17a3fb84211 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:23:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 121 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 7.0 KiB/s wr, 60 op/s
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.092 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803403.0901318, b5f49b05-1b70-4479-98d9-83b995037a41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.092 253665 INFO nova.compute.manager [-] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] VM Stopped (Lifecycle Event)
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.113 253665 DEBUG nova.compute.manager [None req-89b7536c-e170-4007-8fe3-e372aec1cbbe - - - - - -] [instance: b5f49b05-1b70-4479-98d9-83b995037a41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.214 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.268 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] resizing rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.824 253665 DEBUG nova.objects.instance [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'migration_context' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.847 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.848 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Ensure instance console log exists: /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.849 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.850 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:38 compute-0 nova_compute[253661]: 2025-11-22 09:23:38.850 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:38 compute-0 sudo[332760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:38 compute-0 sudo[332760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:38 compute-0 sudo[332760]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:38 compute-0 sudo[332785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:23:38 compute-0 sudo[332785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:38 compute-0 sudo[332785]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 sudo[332810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:39 compute-0 sudo[332810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:39 compute-0 sudo[332810]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.043 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Successfully updated port: edd81944-578b-4533-9db7-f17a3fb84211 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.062 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.062 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquired lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.062 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:23:39 compute-0 sudo[332835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:23:39 compute-0 sudo[332835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.137 253665 DEBUG nova.compute.manager [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-changed-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.138 253665 DEBUG nova.compute.manager [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Refreshing instance network info cache due to event network-changed-edd81944-578b-4533-9db7-f17a3fb84211. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.138 253665 DEBUG oslo_concurrency.lockutils [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.250 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:39 compute-0 ceph-mon[75021]: pgmap v1784: 305 pgs: 305 active+clean; 121 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 7.0 KiB/s wr, 60 op/s
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.493 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 157 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 48 op/s
Nov 22 09:23:39 compute-0 sudo[332835]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.557 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803404.5562494, 93493d5d-7044-4597-9113-3231f49f8263 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.557 253665 INFO nova.compute.manager [-] [instance: 93493d5d-7044-4597-9113-3231f49f8263] VM Stopped (Lifecycle Event)
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.578 253665 DEBUG nova.compute.manager [None req-8a65c371-1633-4e06-8a6e-f57bea55267d - - - - - -] [instance: 93493d5d-7044-4597-9113-3231f49f8263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:39 compute-0 nova_compute[253661]: 2025-11-22 09:23:39.585 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bda1c198-4233-4e3b-aef6-89df81217db6 does not exist
Nov 22 09:23:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e54a8eae-3491-4b06-9be3-c68a4f1ce3cb does not exist
Nov 22 09:23:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8a4fa9dc-f75c-438a-bcbb-5cd4be40d317 does not exist
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:23:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:23:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:23:39 compute-0 sudo[332890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:39 compute-0 sudo[332890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:39 compute-0 sudo[332890]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 sudo[332916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:23:39 compute-0 sudo[332916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:39 compute-0 sudo[332916]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 sudo[332941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:39 compute-0 sudo[332941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:39 compute-0 sudo[332941]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:39 compute-0 sudo[332966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:23:39 compute-0 sudo[332966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.355886394 +0000 UTC m=+0.114390486 container create aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.263983382 +0000 UTC m=+0.022487494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:40 compute-0 systemd[1]: Started libpod-conmon-aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd.scope.
Nov 22 09:23:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.537233398 +0000 UTC m=+0.295737520 container init aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.544169226 +0000 UTC m=+0.302673318 container start aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:23:40 compute-0 condescending_pascal[333048]: 167 167
Nov 22 09:23:40 compute-0 systemd[1]: libpod-aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd.scope: Deactivated successfully.
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:23:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.641843328 +0000 UTC m=+0.400347450 container attach aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.644116922 +0000 UTC m=+0.402621044 container died aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-560b947f62446694838312bef5d47545a04f7b42e286d1f258ddae8d78996782-merged.mount: Deactivated successfully.
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.741 253665 DEBUG nova.network.neutron [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Updating instance_info_cache with network_info: [{"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.868 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Releasing lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.869 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance network_info: |[{"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.869 253665 DEBUG oslo_concurrency.lockutils [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.869 253665 DEBUG nova.network.neutron [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Refreshing network info cache for port edd81944-578b-4533-9db7-f17a3fb84211 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.873 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start _get_guest_xml network_info=[{"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:23:40 compute-0 podman[333032]: 2025-11-22 09:23:40.877987836 +0000 UTC m=+0.636491928 container remove aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.882 253665 WARNING nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.890 253665 DEBUG nova.virt.libvirt.host [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.892 253665 DEBUG nova.virt.libvirt.host [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.898 253665 DEBUG nova.virt.libvirt.host [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.899 253665 DEBUG nova.virt.libvirt.host [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.900 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.900 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.900 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.901 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.901 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.901 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.901 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.902 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.902 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.902 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.902 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.902 253665 DEBUG nova.virt.hardware [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:23:40 compute-0 nova_compute[253661]: 2025-11-22 09:23:40.906 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:40 compute-0 systemd[1]: libpod-conmon-aafb1572a98812f104ce92ed821a57659b4fcd26fecd65d30f91925693cb38cd.scope: Deactivated successfully.
Nov 22 09:23:41 compute-0 podman[333075]: 2025-11-22 09:23:41.090225017 +0000 UTC m=+0.073151909 container create c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:23:41 compute-0 podman[333075]: 2025-11-22 09:23:41.050142498 +0000 UTC m=+0.033069420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:41 compute-0 systemd[1]: Started libpod-conmon-c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079.scope.
Nov 22 09:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498102937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.378 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.404 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:41 compute-0 podman[333075]: 2025-11-22 09:23:41.40799804 +0000 UTC m=+0.390924942 container init c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.410 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:41 compute-0 podman[333075]: 2025-11-22 09:23:41.42084851 +0000 UTC m=+0.403775392 container start c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:23:41 compute-0 podman[333075]: 2025-11-22 09:23:41.480255317 +0000 UTC m=+0.463182209 container attach c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:23:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 157 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 48 op/s
Nov 22 09:23:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 22 09:23:41 compute-0 ceph-mon[75021]: pgmap v1785: 305 pgs: 305 active+clean; 157 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 48 op/s
Nov 22 09:23:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1498102937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565009483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.969 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.971 253665 DEBUG nova.virt.libvirt.vif [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:36Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.972 253665 DEBUG nova.network.os_vif_util [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.972 253665 DEBUG nova.network.os_vif_util [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.974 253665 DEBUG nova.objects.instance [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.987 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <uuid>9cef4b12-b28c-47df-9af2-a0bf9934e4d7</uuid>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <name>instance-0000004d</name>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-566833626</nova:name>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:23:40</nova:creationTime>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:user uuid="7e5709393702478dbf0bd566dc94d7fe">tempest-ServerActionsTestOtherA-1527475006-project-member</nova:user>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:project uuid="9b06c711e582499ab500917d85e27e3c">tempest-ServerActionsTestOtherA-1527475006</nova:project>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <nova:port uuid="edd81944-578b-4533-9db7-f17a3fb84211">
Nov 22 09:23:41 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <system>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="serial">9cef4b12-b28c-47df-9af2-a0bf9934e4d7</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="uuid">9cef4b12-b28c-47df-9af2-a0bf9934e4d7</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </system>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <os>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </os>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <features>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </features>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk">
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config">
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:41 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:b6:3f"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <target dev="tapedd81944-57"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/console.log" append="off"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <video>
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </video>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:23:41 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:23:41 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:23:41 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:23:41 compute-0 nova_compute[253661]: </domain>
Nov 22 09:23:41 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.988 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Preparing to wait for external event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.988 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.988 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.988 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.989 253665 DEBUG nova.virt.libvirt.vif [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:36Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.989 253665 DEBUG nova.network.os_vif_util [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.990 253665 DEBUG nova.network.os_vif_util [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.990 253665 DEBUG os_vif [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.991 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.992 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.995 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedd81944-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.996 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedd81944-57, col_values=(('external_ids', {'iface-id': 'edd81944-578b-4533-9db7-f17a3fb84211', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:b6:3f', 'vm-uuid': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:41 compute-0 nova_compute[253661]: 2025-11-22 09:23:41.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:41 compute-0 NetworkManager[48920]: <info>  [1763803421.9987] manager: (tapedd81944-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.012 253665 INFO os_vif [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57')
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.095 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.096 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.096 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No VIF found with MAC fa:16:3e:70:b6:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.096 253665 INFO nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Using config drive
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.117 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.472 253665 INFO nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating config drive at /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.480 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0cv0jm2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:42 compute-0 objective_hertz[333111]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:23:42 compute-0 objective_hertz[333111]: --> relative data size: 1.0
Nov 22 09:23:42 compute-0 objective_hertz[333111]: --> All data devices are unavailable
Nov 22 09:23:42 compute-0 systemd[1]: libpod-c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079.scope: Deactivated successfully.
Nov 22 09:23:42 compute-0 podman[333075]: 2025-11-22 09:23:42.541035122 +0000 UTC m=+1.523962024 container died c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.639 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0cv0jm2" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.671 253665 DEBUG nova.storage.rbd_utils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b6a651123dba555082d36059c8c5030122e0c41e93ba945d42c9c60d007e0c3-merged.mount: Deactivated successfully.
Nov 22 09:23:42 compute-0 nova_compute[253661]: 2025-11-22 09:23:42.681 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:42 compute-0 ceph-mon[75021]: pgmap v1786: 305 pgs: 305 active+clean; 157 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 48 op/s
Nov 22 09:23:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3565009483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:42 compute-0 ceph-mon[75021]: osdmap e242: 3 total, 3 up, 3 in
Nov 22 09:23:42 compute-0 podman[333075]: 2025-11-22 09:23:42.941064243 +0000 UTC m=+1.923991115 container remove c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:23:42 compute-0 systemd[1]: libpod-conmon-c6715feaeb489d256dbc09caf0bb4592723ffe7fb9585080e16ce32310214079.scope: Deactivated successfully.
Nov 22 09:23:42 compute-0 sudo[332966]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:43 compute-0 sudo[333251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:43 compute-0 sudo[333251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:43 compute-0 sudo[333251]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:43 compute-0 sudo[333279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:23:43 compute-0 sudo[333279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:43 compute-0 sudo[333279]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:43 compute-0 sudo[333304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:43 compute-0 sudo[333304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:43 compute-0 sudo[333304]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:43 compute-0 sudo[333329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:23:43 compute-0 sudo[333329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.376 253665 DEBUG oslo_concurrency.processutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.377 253665 INFO nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deleting local config drive /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config because it was imported into RBD.
Nov 22 09:23:43 compute-0 kernel: tapedd81944-57: entered promiscuous mode
Nov 22 09:23:43 compute-0 NetworkManager[48920]: <info>  [1763803423.4439] manager: (tapedd81944-57): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:43 compute-0 ovn_controller[152872]: 2025-11-22T09:23:43Z|00782|binding|INFO|Claiming lport edd81944-578b-4533-9db7-f17a3fb84211 for this chassis.
Nov 22 09:23:43 compute-0 ovn_controller[152872]: 2025-11-22T09:23:43Z|00783|binding|INFO|edd81944-578b-4533-9db7-f17a3fb84211: Claiming fa:16:3e:70:b6:3f 10.100.0.7
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.459 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b6:3f 10.100.0.7'], port_security=['fa:16:3e:70:b6:3f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=edd81944-578b-4533-9db7-f17a3fb84211) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.461 162862 INFO neutron.agent.ovn.metadata.agent [-] Port edd81944-578b-4533-9db7-f17a3fb84211 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 bound to our chassis
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.462 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:23:43 compute-0 ovn_controller[152872]: 2025-11-22T09:23:43Z|00784|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 ovn-installed in OVS
Nov 22 09:23:43 compute-0 ovn_controller[152872]: 2025-11-22T09:23:43Z|00785|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 up in Southbound
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:43 compute-0 systemd-udevd[333386]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[462d9e96-da9b-496d-ba09-6c0d087c1ec1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 systemd-machined[215941]: New machine qemu-93-instance-0000004d.
Nov 22 09:23:43 compute-0 NetworkManager[48920]: <info>  [1763803423.4999] device (tapedd81944-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:23:43 compute-0 NetworkManager[48920]: <info>  [1763803423.5006] device (tapedd81944-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:23:43 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004d.
Nov 22 09:23:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 263 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 12 MiB/s wr, 72 op/s
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.517 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3f86c3-ee2d-4630-a43d-90986cb7c067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.521 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0bea7f5a-9e5f-4835-ba88-3bec3af6b247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.560 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5e37a28d-74bb-446d-8c06-5b01510e1bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.582 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8caa6efe-b758-4fe0-a4eb-7a077fce59bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 30845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333412, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.604 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[335e80ae-782d-4f91-aaa0-a45564f7468b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333414, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333414, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.606 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.619 253665 DEBUG nova.network.neutron [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Updated VIF entry in instance network info cache for port edd81944-578b-4533-9db7-f17a3fb84211. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.620 253665 DEBUG nova.network.neutron [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Updating instance_info_cache with network_info: [{"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.635 253665 DEBUG oslo_concurrency.lockutils [req-8413216a-4ab3-4655-af0e-39b82d4177fe req-028147f7-cbf5-4359-b1a1-0fda42793b27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9cef4b12-b28c-47df-9af2-a0bf9934e4d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:43 compute-0 nova_compute[253661]: 2025-11-22 09:23:43.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.653 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.653 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.653 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:43.654 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.676677907 +0000 UTC m=+0.072587076 container create 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.637305826 +0000 UTC m=+0.033215015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:43 compute-0 systemd[1]: Started libpod-conmon-82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523.scope.
Nov 22 09:23:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 22 09:23:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 22 09:23:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.903139022 +0000 UTC m=+0.299048211 container init 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.91216283 +0000 UTC m=+0.308071999 container start 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:23:43 compute-0 systemd[1]: libpod-82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523.scope: Deactivated successfully.
Nov 22 09:23:43 compute-0 magical_brahmagupta[333432]: 167 167
Nov 22 09:23:43 compute-0 conmon[333432]: conmon 82bd44b1b9cf4839e161 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523.scope/container/memory.events
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.980103263 +0000 UTC m=+0.376012452 container attach 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:23:43 compute-0 podman[333415]: 2025-11-22 09:23:43.980703447 +0000 UTC m=+0.376612616 container died 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-572db896ea8104853c07c83f45974b0f6602862250a659c5d609e6ac8b67e827-merged.mount: Deactivated successfully.
Nov 22 09:23:44 compute-0 podman[333415]: 2025-11-22 09:23:44.462222859 +0000 UTC m=+0.858132028 container remove 82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:23:44 compute-0 systemd[1]: libpod-conmon-82bd44b1b9cf4839e161e66c1a846b75a22587d4ca46de8bb176176cd49f0523.scope: Deactivated successfully.
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.666 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803424.665562, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.668 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Started (Lifecycle Event)
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.685 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.694 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803424.6671395, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.694 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Paused (Lifecycle Event)
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.713 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.717 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.740 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:44 compute-0 podman[333499]: 2025-11-22 09:23:44.649465945 +0000 UTC m=+0.027049225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:44 compute-0 podman[333499]: 2025-11-22 09:23:44.779270674 +0000 UTC m=+0.156853933 container create 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.837 253665 DEBUG nova.compute.manager [req-43bfb213-be3a-4be1-b25a-e6746f1a680f req-673d9146-105a-43b0-95dd-cc3ac5ddc15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.837 253665 DEBUG oslo_concurrency.lockutils [req-43bfb213-be3a-4be1-b25a-e6746f1a680f req-673d9146-105a-43b0-95dd-cc3ac5ddc15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.838 253665 DEBUG oslo_concurrency.lockutils [req-43bfb213-be3a-4be1-b25a-e6746f1a680f req-673d9146-105a-43b0-95dd-cc3ac5ddc15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.838 253665 DEBUG oslo_concurrency.lockutils [req-43bfb213-be3a-4be1-b25a-e6746f1a680f req-673d9146-105a-43b0-95dd-cc3ac5ddc15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.838 253665 DEBUG nova.compute.manager [req-43bfb213-be3a-4be1-b25a-e6746f1a680f req-673d9146-105a-43b0-95dd-cc3ac5ddc15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Processing event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.839 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.842 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803424.8423362, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.842 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Resumed (Lifecycle Event)
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.845 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.850 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance spawned successfully.
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.851 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:23:44 compute-0 systemd[1]: Started libpod-conmon-035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0.scope.
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.870 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.876 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.881 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.882 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.883 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.883 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.884 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.884 253665 DEBUG nova.virt.libvirt.driver [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0c4fbd1853967af6ff94d0b01db73e5fc724d8864dd0014412fcca3e5332be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.906 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0c4fbd1853967af6ff94d0b01db73e5fc724d8864dd0014412fcca3e5332be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0c4fbd1853967af6ff94d0b01db73e5fc724d8864dd0014412fcca3e5332be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0c4fbd1853967af6ff94d0b01db73e5fc724d8864dd0014412fcca3e5332be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:44 compute-0 ceph-mon[75021]: pgmap v1788: 305 pgs: 305 active+clean; 263 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 12 MiB/s wr, 72 op/s
Nov 22 09:23:44 compute-0 ceph-mon[75021]: osdmap e243: 3 total, 3 up, 3 in
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.956 253665 INFO nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 8.61 seconds to spawn the instance on the hypervisor.
Nov 22 09:23:44 compute-0 nova_compute[253661]: 2025-11-22 09:23:44.956 253665 DEBUG nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:45 compute-0 nova_compute[253661]: 2025-11-22 09:23:45.022 253665 INFO nova.compute.manager [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 9.65 seconds to build instance.
Nov 22 09:23:45 compute-0 ovn_controller[152872]: 2025-11-22T09:23:45Z|00786|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:23:45 compute-0 nova_compute[253661]: 2025-11-22 09:23:45.039 253665 DEBUG oslo_concurrency.lockutils [None req-31ca1e33-53f0-4242-b41f-3105ad107762 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:45 compute-0 podman[333499]: 2025-11-22 09:23:45.049905297 +0000 UTC m=+0.427488546 container init 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:23:45 compute-0 podman[333499]: 2025-11-22 09:23:45.06079189 +0000 UTC m=+0.438375109 container start 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:23:45 compute-0 nova_compute[253661]: 2025-11-22 09:23:45.103 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:45 compute-0 podman[333499]: 2025-11-22 09:23:45.231007755 +0000 UTC m=+0.608591004 container attach 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:23:45 compute-0 nova_compute[253661]: 2025-11-22 09:23:45.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 279 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 17 MiB/s wr, 74 op/s
Nov 22 09:23:46 compute-0 awesome_bohr[333517]: {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     "0": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "devices": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "/dev/loop3"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             ],
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_name": "ceph_lv0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_size": "21470642176",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "name": "ceph_lv0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "tags": {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_name": "ceph",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.crush_device_class": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.encrypted": "0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_id": "0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.vdo": "0"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             },
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "vg_name": "ceph_vg0"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         }
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     ],
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     "1": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "devices": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "/dev/loop4"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             ],
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_name": "ceph_lv1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_size": "21470642176",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "name": "ceph_lv1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "tags": {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_name": "ceph",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.crush_device_class": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.encrypted": "0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_id": "1",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.vdo": "0"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             },
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "vg_name": "ceph_vg1"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         }
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     ],
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     "2": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "devices": [
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "/dev/loop5"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             ],
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_name": "ceph_lv2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_size": "21470642176",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "name": "ceph_lv2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "tags": {
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.cluster_name": "ceph",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.crush_device_class": "",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.encrypted": "0",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osd_id": "2",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:                 "ceph.vdo": "0"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             },
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "type": "block",
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:             "vg_name": "ceph_vg2"
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:         }
Nov 22 09:23:46 compute-0 awesome_bohr[333517]:     ]
Nov 22 09:23:46 compute-0 awesome_bohr[333517]: }
Nov 22 09:23:46 compute-0 systemd[1]: libpod-035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0.scope: Deactivated successfully.
Nov 22 09:23:46 compute-0 podman[333526]: 2025-11-22 09:23:46.188443772 +0000 UTC m=+0.034186387 container died 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0c4fbd1853967af6ff94d0b01db73e5fc724d8864dd0014412fcca3e5332be-merged.mount: Deactivated successfully.
Nov 22 09:23:46 compute-0 podman[333526]: 2025-11-22 09:23:46.372518943 +0000 UTC m=+0.218261538 container remove 035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:23:46 compute-0 systemd[1]: libpod-conmon-035db97e6fd15914ebb364d278733a48136d733f6b59dd4da3d785132ce126a0.scope: Deactivated successfully.
Nov 22 09:23:46 compute-0 sudo[333329]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:46 compute-0 sudo[333541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:46 compute-0 sudo[333541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:46 compute-0 sudo[333541]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.522 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.523 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.537 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:23:46 compute-0 sudo[333566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:23:46 compute-0 sudo[333566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:46 compute-0 sudo[333566]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.615 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.615 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.622 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.623 253665 INFO nova.compute.claims [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:23:46 compute-0 sudo[333591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:46 compute-0 sudo[333591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:46 compute-0 sudo[333591]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:46 compute-0 sudo[333616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:23:46 compute-0 sudo[333616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.739 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.983 253665 DEBUG nova.compute.manager [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.984 253665 DEBUG oslo_concurrency.lockutils [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.984 253665 DEBUG oslo_concurrency.lockutils [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.984 253665 DEBUG oslo_concurrency.lockutils [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.984 253665 DEBUG nova.compute.manager [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.984 253665 WARNING nova.compute.manager [req-3c22fdd4-e5ad-48c1-a5a0-c271043dd7f2 req-b244e0c1-31da-4230-9a0c-e81cf1d4d4fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state active and task_state None.
Nov 22 09:23:46 compute-0 nova_compute[253661]: 2025-11-22 09:23:46.999 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.085372897 +0000 UTC m=+0.029034084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.477 253665 DEBUG oslo_concurrency.lockutils [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.478 253665 DEBUG oslo_concurrency.lockutils [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.478 253665 DEBUG nova.compute.manager [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.482 253665 DEBUG nova.compute.manager [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.483 253665 DEBUG nova.objects.instance [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'flavor' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:23:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361045376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.500291287 +0000 UTC m=+0.443952474 container create 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.511 253665 DEBUG nova.virt.libvirt.driver [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:23:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 255 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 MiB/s wr, 131 op/s
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.526 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.532 253665 DEBUG nova.compute.provider_tree [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.546 253665 DEBUG nova.scheduler.client.report [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.567 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.568 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:23:47 compute-0 ceph-mon[75021]: pgmap v1790: 305 pgs: 305 active+clean; 279 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 17 MiB/s wr, 74 op/s
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.617 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.618 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.634 253665 INFO nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.650 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:23:47 compute-0 systemd[1]: Started libpod-conmon-7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a.scope.
Nov 22 09:23:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.721 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.722 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.723 253665 INFO nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating image(s)
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.749 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.781 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.795870853 +0000 UTC m=+0.739532130 container init 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.805985957 +0000 UTC m=+0.749647174 container start 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:23:47 compute-0 magical_boyd[333716]: 167 167
Nov 22 09:23:47 compute-0 systemd[1]: libpod-7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a.scope: Deactivated successfully.
Nov 22 09:23:47 compute-0 conmon[333716]: conmon 7737f1f62a76d5d2e5ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a.scope/container/memory.events
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.820 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.832 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.898892444 +0000 UTC m=+0.842553631 container attach 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:23:47 compute-0 podman[333700]: 2025-11-22 09:23:47.900231377 +0000 UTC m=+0.843892564 container died 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.919 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.922 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.924 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.925 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.976 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:47 compute-0 nova_compute[253661]: 2025-11-22 09:23:47.982 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63404e8e59edbc55489dbcee488c19d86e28ad2f58c1c5f781412e9adc9ef42-merged.mount: Deactivated successfully.
Nov 22 09:23:48 compute-0 nova_compute[253661]: 2025-11-22 09:23:48.314 253665 DEBUG nova.policy [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f7dbcc13af740b491f0498f4ddec69d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e78196ec949a45cf803d3e585b603558', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:23:48 compute-0 podman[333700]: 2025-11-22 09:23:48.364997303 +0000 UTC m=+1.308658520 container remove 7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:23:48 compute-0 systemd[1]: libpod-conmon-7737f1f62a76d5d2e5eaa96f81072f03b1cc763fe6864a07c4570806f90c106a.scope: Deactivated successfully.
Nov 22 09:23:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2361045376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:23:48 compute-0 ceph-mon[75021]: pgmap v1791: 305 pgs: 305 active+clean; 255 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 MiB/s wr, 131 op/s
Nov 22 09:23:48 compute-0 podman[333834]: 2025-11-22 09:23:48.608083119 +0000 UTC m=+0.029869223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:23:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:23:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 23K writes, 97K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 23K writes, 7614 syncs, 3.14 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8839 writes, 35K keys, 8839 commit groups, 1.0 writes per commit group, ingest: 35.22 MB, 0.06 MB/s
                                           Interval WAL: 8839 writes, 3333 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:23:48 compute-0 podman[333834]: 2025-11-22 09:23:48.718480288 +0000 UTC m=+0.140266382 container create 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:23:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 22 09:23:48 compute-0 systemd[1]: Started libpod-conmon-433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769.scope.
Nov 22 09:23:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a85c9cb2062b7d156ea1ac1581297f81f8d335685f3a7d55cf0fc29fb3ba4f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a85c9cb2062b7d156ea1ac1581297f81f8d335685f3a7d55cf0fc29fb3ba4f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a85c9cb2062b7d156ea1ac1581297f81f8d335685f3a7d55cf0fc29fb3ba4f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a85c9cb2062b7d156ea1ac1581297f81f8d335685f3a7d55cf0fc29fb3ba4f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:23:48 compute-0 nova_compute[253661]: 2025-11-22 09:23:48.948 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Successfully created port: 15bf0e02-e093-4f45-995f-abb925d1cf71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:23:49 compute-0 podman[333834]: 2025-11-22 09:23:49.096488037 +0000 UTC m=+0.518274231 container init 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:23:49 compute-0 podman[333834]: 2025-11-22 09:23:49.104569083 +0000 UTC m=+0.526355217 container start 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:23:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 22 09:23:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 22 09:23:49 compute-0 podman[333834]: 2025-11-22 09:23:49.290920868 +0000 UTC m=+0.712706962 container attach 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:23:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.4 MiB/s wr, 172 op/s
Nov 22 09:23:49 compute-0 nova_compute[253661]: 2025-11-22 09:23:49.916 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Successfully updated port: 15bf0e02-e093-4f45-995f-abb925d1cf71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:23:49 compute-0 nova_compute[253661]: 2025-11-22 09:23:49.930 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:49 compute-0 nova_compute[253661]: 2025-11-22 09:23:49.932 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:49 compute-0 nova_compute[253661]: 2025-11-22 09:23:49.933 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.003 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.074 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] resizing rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]: {
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_id": 1,
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "type": "bluestore"
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     },
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_id": 0,
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "type": "bluestore"
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     },
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_id": 2,
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:         "type": "bluestore"
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]:     }
Nov 22 09:23:50 compute-0 wizardly_pascal[333851]: }
Nov 22 09:23:50 compute-0 systemd[1]: libpod-433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769.scope: Deactivated successfully.
Nov 22 09:23:50 compute-0 systemd[1]: libpod-433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769.scope: Consumed 1.130s CPU time.
Nov 22 09:23:50 compute-0 conmon[333851]: conmon 433ff7b6866c5375c826 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769.scope/container/memory.events
Nov 22 09:23:50 compute-0 podman[333834]: 2025-11-22 09:23:50.281838395 +0000 UTC m=+1.703624489 container died 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:23:50 compute-0 ceph-mon[75021]: osdmap e244: 3 total, 3 up, 3 in
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.651 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.655 253665 DEBUG nova.compute.manager [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.655 253665 DEBUG nova.compute.manager [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:23:50 compute-0 nova_compute[253661]: 2025-11-22 09:23:50.655 253665 DEBUG oslo_concurrency.lockutils [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a85c9cb2062b7d156ea1ac1581297f81f8d335685f3a7d55cf0fc29fb3ba4f0-merged.mount: Deactivated successfully.
Nov 22 09:23:50 compute-0 podman[333834]: 2025-11-22 09:23:50.913995337 +0000 UTC m=+2.335781431 container remove 433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:23:50 compute-0 sudo[333616]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:23:50 compute-0 systemd[1]: libpod-conmon-433ff7b6866c5375c826ae862994a8069833b98b3fb655ba862d8e578e8e1769.scope: Deactivated successfully.
Nov 22 09:23:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:23:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f89b2328-55bc-4989-83eb-22750d19c993 does not exist
Nov 22 09:23:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e8cce93b-9aea-4a81-ba10-cd16ec742711 does not exist
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.031 253665 DEBUG nova.objects.instance [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'migration_context' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.042 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.042 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Ensure instance console log exists: /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.042 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.043 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.043 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:51 compute-0 sudo[333965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:23:51 compute-0 sudo[333965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:51 compute-0 sudo[333965]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:51 compute-0 sudo[333993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:23:51 compute-0 sudo[333993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:23:51 compute-0 sudo[333993]: pam_unix(sudo:session): session closed for user root
Nov 22 09:23:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 158 op/s
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.585 253665 DEBUG nova.network.neutron [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.609 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.610 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance network_info: |[{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.610 253665 DEBUG oslo_concurrency.lockutils [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.610 253665 DEBUG nova.network.neutron [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.613 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start _get_guest_xml network_info=[{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.619 253665 WARNING nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.629 253665 DEBUG nova.virt.libvirt.host [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.630 253665 DEBUG nova.virt.libvirt.host [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.634 253665 DEBUG nova.virt.libvirt.host [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.634 253665 DEBUG nova.virt.libvirt.host [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.635 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.635 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.636 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.636 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.636 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.636 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.637 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.637 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.637 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.637 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.637 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.638 253665 DEBUG nova.virt.hardware [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:23:51 compute-0 nova_compute[253661]: 2025-11-22 09:23:51.641 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:51 compute-0 ceph-mon[75021]: pgmap v1793: 305 pgs: 305 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.4 MiB/s wr, 172 op/s
Nov 22 09:23:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1023084802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.161 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.184 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.188 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:23:52
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', '.mgr', 'backups']
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.261 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:23:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 22 09:23:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 22 09:23:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:52 compute-0 ceph-mon[75021]: pgmap v1794: 305 pgs: 305 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 158 op/s
Nov 22 09:23:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1023084802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:52 compute-0 ceph-mon[75021]: osdmap e245: 3 total, 3 up, 3 in
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:23:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:23:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533400380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.808 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.809 253665 DEBUG nova.virt.libvirt.vif [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-533420499',display_name='tempest-ServerRescueTestJSONUnderV235-server-533420499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-533420499',id=78,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e78196ec949a45cf803d3e585b603558',ramdisk_id='',reservation_id='r-v4cum4j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1716369832',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1716369832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:47Z,user_data=None,user_id='3f7dbcc13af740b491f0498f4ddec69d',uuid=a207d8c4-4fce-4fe6-9ba5-548a92e757ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.810 253665 DEBUG nova.network.os_vif_util [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converting VIF {"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.811 253665 DEBUG nova.network.os_vif_util [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.813 253665 DEBUG nova.objects.instance [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'pci_devices' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.829 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <uuid>a207d8c4-4fce-4fe6-9ba5-548a92e757ac</uuid>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <name>instance-0000004e</name>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-533420499</nova:name>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:23:51</nova:creationTime>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:user uuid="3f7dbcc13af740b491f0498f4ddec69d">tempest-ServerRescueTestJSONUnderV235-1716369832-project-member</nova:user>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:project uuid="e78196ec949a45cf803d3e585b603558">tempest-ServerRescueTestJSONUnderV235-1716369832</nova:project>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <nova:port uuid="15bf0e02-e093-4f45-995f-abb925d1cf71">
Nov 22 09:23:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="serial">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="uuid">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk">
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config">
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:23:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:86:73:cb"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <target dev="tap15bf0e02-e0"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/console.log" append="off"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:23:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:23:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:23:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:23:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:23:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.836 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Preparing to wait for external event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.836 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.836 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.837 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.837 253665 DEBUG nova.virt.libvirt.vif [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-533420499',display_name='tempest-ServerRescueTestJSONUnderV235-server-533420499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-533420499',id=78,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e78196ec949a45cf803d3e585b603558',ramdisk_id='',reservation_id='r-v4cum4j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1716369832',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1716369832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:47Z,user_data=None,user_id='3f7dbcc13af740b491f0498f4ddec69d',uuid=a207d8c4-4fce-4fe6-9ba5-548a92e757ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.838 253665 DEBUG nova.network.os_vif_util [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converting VIF {"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.839 253665 DEBUG nova.network.os_vif_util [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.839 253665 DEBUG os_vif [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.853 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15bf0e02-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.853 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15bf0e02-e0, col_values=(('external_ids', {'iface-id': '15bf0e02-e093-4f45-995f-abb925d1cf71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:73:cb', 'vm-uuid': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 NetworkManager[48920]: <info>  [1763803432.8563] manager: (tap15bf0e02-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:52 compute-0 nova_compute[253661]: 2025-11-22 09:23:52.867 253665 INFO os_vif [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0')
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.082 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.083 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.084 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No VIF found with MAC fa:16:3e:86:73:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.086 253665 INFO nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Using config drive
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.127 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.233 253665 DEBUG nova.network.neutron [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.235 253665 DEBUG nova.network.neutron [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.249 253665 DEBUG oslo_concurrency.lockutils [req-422b10c2-fab5-4453-88cf-1b8b47d40867 req-d1b78a70-ba05-4ef9-880d-22e5bd9656fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.460 253665 INFO nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating config drive at /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.464 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaec_bi_t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 194 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 192 op/s
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.605 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaec_bi_t" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.638 253665 DEBUG nova.storage.rbd_utils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:23:53 compute-0 nova_compute[253661]: 2025-11-22 09:23:53.642 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:23:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/533400380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.212 253665 DEBUG oslo_concurrency.processutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.213 253665 INFO nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deleting local config drive /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config because it was imported into RBD.
Nov 22 09:23:54 compute-0 NetworkManager[48920]: <info>  [1763803434.2682] manager: (tap15bf0e02-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Nov 22 09:23:54 compute-0 kernel: tap15bf0e02-e0: entered promiscuous mode
Nov 22 09:23:54 compute-0 ovn_controller[152872]: 2025-11-22T09:23:54Z|00787|binding|INFO|Claiming lport 15bf0e02-e093-4f45-995f-abb925d1cf71 for this chassis.
Nov 22 09:23:54 compute-0 ovn_controller[152872]: 2025-11-22T09:23:54Z|00788|binding|INFO|15bf0e02-e093-4f45-995f-abb925d1cf71: Claiming fa:16:3e:86:73:cb 10.100.0.14
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:54.288 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:23:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:54.289 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 bound to our chassis
Nov 22 09:23:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:54.290 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:23:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:23:54.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06aebc9e-ba84-4427-ae01-28eefdd3d271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:54 compute-0 ovn_controller[152872]: 2025-11-22T09:23:54Z|00789|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 ovn-installed in OVS
Nov 22 09:23:54 compute-0 ovn_controller[152872]: 2025-11-22T09:23:54Z|00790|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 up in Southbound
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:54 compute-0 systemd-udevd[334153]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:23:54 compute-0 systemd-machined[215941]: New machine qemu-94-instance-0000004e.
Nov 22 09:23:54 compute-0 NetworkManager[48920]: <info>  [1763803434.3346] device (tap15bf0e02-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:23:54 compute-0 NetworkManager[48920]: <info>  [1763803434.3360] device (tap15bf0e02-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:23:54 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004e.
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.625 253665 DEBUG nova.compute.manager [req-332a8465-1108-489b-9eb3-43a8baa70850 req-f4e4bb9c-b71b-471b-9581-a3b531afbf10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.627 253665 DEBUG oslo_concurrency.lockutils [req-332a8465-1108-489b-9eb3-43a8baa70850 req-f4e4bb9c-b71b-471b-9581-a3b531afbf10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.628 253665 DEBUG oslo_concurrency.lockutils [req-332a8465-1108-489b-9eb3-43a8baa70850 req-f4e4bb9c-b71b-471b-9581-a3b531afbf10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.628 253665 DEBUG oslo_concurrency.lockutils [req-332a8465-1108-489b-9eb3-43a8baa70850 req-f4e4bb9c-b71b-471b-9581-a3b531afbf10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:54 compute-0 nova_compute[253661]: 2025-11-22 09:23:54.629 253665 DEBUG nova.compute.manager [req-332a8465-1108-489b-9eb3-43a8baa70850 req-f4e4bb9c-b71b-471b-9581-a3b531afbf10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Processing event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:23:55 compute-0 ceph-mon[75021]: pgmap v1796: 305 pgs: 305 active+clean; 194 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 192 op/s
Nov 22 09:23:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:23:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.2 total, 600.0 interval
                                           Cumulative writes: 25K writes, 100K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8409 syncs, 3.04 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9798 writes, 38K keys, 9798 commit groups, 1.0 writes per commit group, ingest: 38.25 MB, 0.06 MB/s
                                           Interval WAL: 9798 writes, 3730 syncs, 2.63 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.445 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803435.4416108, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.445 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Started (Lifecycle Event)
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.447 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.460 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.463 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance spawned successfully.
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.464 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.467 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.470 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.500 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803435.4425926, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.500 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Paused (Lifecycle Event)
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.506 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.507 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.507 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.508 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.508 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.509 253665 DEBUG nova.virt.libvirt.driver [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 213 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 125 op/s
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.537 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803435.4502313, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.537 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Resumed (Lifecycle Event)
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.573 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.577 253665 INFO nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 7.86 seconds to spawn the instance on the hypervisor.
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.578 253665 DEBUG nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:23:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.589 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.632 253665 INFO nova.compute.manager [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 9.05 seconds to build instance.
Nov 22 09:23:55 compute-0 nova_compute[253661]: 2025-11-22 09:23:55.646 253665 DEBUG oslo_concurrency.lockutils [None req-02828a52-6d0b-4895-b574-27353ecf3be4 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.723 253665 DEBUG nova.compute.manager [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.724 253665 DEBUG oslo_concurrency.lockutils [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.725 253665 DEBUG oslo_concurrency.lockutils [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.725 253665 DEBUG oslo_concurrency.lockutils [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.726 253665 DEBUG nova.compute.manager [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:23:56 compute-0 nova_compute[253661]: 2025-11-22 09:23:56.726 253665 WARNING nova.compute.manager [req-89e8d43f-a3ae-4d6a-982f-9ff76ad3ac95 req-667d4871-b2d5-40fe-afe6-ad76fc785ba2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state active and task_state None.
Nov 22 09:23:57 compute-0 ceph-mon[75021]: pgmap v1797: 305 pgs: 305 active+clean; 213 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 125 op/s
Nov 22 09:23:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 2.6 MiB/s wr, 97 op/s
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:23:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 22 09:23:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 22 09:23:57 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.732 253665 DEBUG nova.virt.libvirt.driver [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.842 253665 INFO nova.compute.manager [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Rescuing
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.843 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.843 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.843 253665 DEBUG nova.network.neutron [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:23:57 compute-0 nova_compute[253661]: 2025-11-22 09:23:57.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:23:58 compute-0 ceph-mon[75021]: pgmap v1798: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 2.6 MiB/s wr, 97 op/s
Nov 22 09:23:58 compute-0 ceph-mon[75021]: osdmap e246: 3 total, 3 up, 3 in
Nov 22 09:23:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Nov 22 09:23:59 compute-0 nova_compute[253661]: 2025-11-22 09:23:59.576 253665 DEBUG nova.network.neutron [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:23:59 compute-0 nova_compute[253661]: 2025-11-22 09:23:59.596 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:23:59 compute-0 nova_compute[253661]: 2025-11-22 09:23:59.875 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:24:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:24:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 80K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 5991 syncs, 3.23 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6315 writes, 25K keys, 6315 commit groups, 1.0 writes per commit group, ingest: 26.00 MB, 0.04 MB/s
                                           Interval WAL: 6314 writes, 2369 syncs, 2.67 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:24:00 compute-0 nova_compute[253661]: 2025-11-22 09:24:00.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:00 compute-0 ceph-mon[75021]: pgmap v1800: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.014 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.015 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.030 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:24:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 123 op/s
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.671 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.673 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.682 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.683 253665 INFO nova.compute.claims [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:24:01 compute-0 nova_compute[253661]: 2025-11-22 09:24:01.845 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:01 compute-0 anacron[8155]: Job `cron.monthly' started
Nov 22 09:24:01 compute-0 anacron[8155]: Job `cron.monthly' terminated
Nov 22 09:24:01 compute-0 anacron[8155]: Normal exit (3 jobs run)
Nov 22 09:24:01 compute-0 ovn_controller[152872]: 2025-11-22T09:24:01Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:b6:3f 10.100.0.7
Nov 22 09:24:01 compute-0 ovn_controller[152872]: 2025-11-22T09:24:01Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:b6:3f 10.100.0.7
Nov 22 09:24:01 compute-0 podman[334208]: 2025-11-22 09:24:01.967668112 +0000 UTC m=+0.056390024 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:24:01 compute-0 podman[334209]: 2025-11-22 09:24:01.996806906 +0000 UTC m=+0.085508438 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 22 09:24:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035272548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.330 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.338 253665 DEBUG nova.compute.provider_tree [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.355 253665 DEBUG nova.scheduler.client.report [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.379 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.381 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.453 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.453 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.472 253665 INFO nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.489 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:24:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.576 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.577 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.578 253665 INFO nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Creating image(s)
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.597 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.617 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.637 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.643 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.683 253665 DEBUG nova.policy [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014664017223416032 of space, bias 1.0, pg target 0.43992051670248095 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:24:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.727 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.728 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.729 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.729 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.752 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.756 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:02 compute-0 nova_compute[253661]: 2025-11-22 09:24:02.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:02 compute-0 ceph-mon[75021]: pgmap v1801: 305 pgs: 305 active+clean; 214 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 123 op/s
Nov 22 09:24:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3035272548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 234 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 170 op/s
Nov 22 09:24:03 compute-0 nova_compute[253661]: 2025-11-22 09:24:03.721 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Successfully created port: dedad4aa-19bb-4bc6-a08c-d75d3024d553 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:24:04 compute-0 nova_compute[253661]: 2025-11-22 09:24:04.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:04.425 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:04.427 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:24:04 compute-0 nova_compute[253661]: 2025-11-22 09:24:04.568 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:04 compute-0 nova_compute[253661]: 2025-11-22 09:24:04.666 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:24:05 compute-0 ceph-mon[75021]: pgmap v1802: 305 pgs: 305 active+clean; 234 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 170 op/s
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.517 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Successfully updated port: dedad4aa-19bb-4bc6-a08c-d75d3024d553 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:24:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 245 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.531 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.532 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.532 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.662 253665 DEBUG nova.compute.manager [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-changed-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.663 253665 DEBUG nova.compute.manager [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Refreshing instance network info cache due to event network-changed-dedad4aa-19bb-4bc6-a08c-d75d3024d553. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.663 253665 DEBUG oslo_concurrency.lockutils [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:05 compute-0 nova_compute[253661]: 2025-11-22 09:24:05.788 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:24:06 compute-0 nova_compute[253661]: 2025-11-22 09:24:06.538 253665 DEBUG nova.network.neutron [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:06 compute-0 nova_compute[253661]: 2025-11-22 09:24:06.562 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:06 compute-0 nova_compute[253661]: 2025-11-22 09:24:06.563 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance network_info: |[{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:24:06 compute-0 nova_compute[253661]: 2025-11-22 09:24:06.563 253665 DEBUG oslo_concurrency.lockutils [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:06 compute-0 nova_compute[253661]: 2025-11-22 09:24:06.563 253665 DEBUG nova.network.neutron [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Refreshing network info cache for port dedad4aa-19bb-4bc6-a08c-d75d3024d553 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:06 compute-0 ceph-mon[75021]: pgmap v1803: 305 pgs: 305 active+clean; 245 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.229 253665 DEBUG nova.objects.instance [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.241 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.241 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Ensure instance console log exists: /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.242 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.242 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.243 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.245 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start _get_guest_xml network_info=[{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.250 253665 WARNING nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.256 253665 DEBUG nova.virt.libvirt.host [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.257 253665 DEBUG nova.virt.libvirt.host [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.260 253665 DEBUG nova.virt.libvirt.host [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.261 253665 DEBUG nova.virt.libvirt.host [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.261 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.261 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.262 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.262 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.263 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.263 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.264 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.264 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.264 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.265 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.265 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.265 253665 DEBUG nova.virt.hardware [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.270 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:07 compute-0 podman[334434]: 2025-11-22 09:24:07.447113673 +0000 UTC m=+0.138400076 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.493 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 253 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.528 253665 WARNING nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.528 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 9096405c-eb66-4d27-abbb-e709b767afea _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.529 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.529 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.529 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.530 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.531 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "9096405c-eb66-4d27-abbb-e709b767afea" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.531 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.532 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.532 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.533 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.533 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.533 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.563 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "9096405c-eb66-4d27-abbb-e709b767afea" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187801297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.839 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.919 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.923 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:07 compute-0 nova_compute[253661]: 2025-11-22 09:24:07.967 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/187801297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.292 253665 DEBUG nova.network.neutron [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updated VIF entry in instance network info cache for port dedad4aa-19bb-4bc6-a08c-d75d3024d553. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.293 253665 DEBUG nova.network.neutron [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.306 253665 DEBUG oslo_concurrency.lockutils [req-72874396-6ead-40df-bb12-8172529b0e69 req-2107222d-f28b-4685-af22-111b3f383c88 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739102904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.404 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.406 253665 DEBUG nova.virt.libvirt.vif [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1428624522',display_name='tempest-₡-1428624522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1428624522',id=79,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-d8zy45mf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:02Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=e0b05f62-6966-4bf3-aee5-e4d2137a6cfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.406 253665 DEBUG nova.network.os_vif_util [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.407 253665 DEBUG nova.network.os_vif_util [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.409 253665 DEBUG nova.objects.instance [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.422 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <uuid>e0b05f62-6966-4bf3-aee5-e4d2137a6cfc</uuid>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <name>instance-0000004f</name>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:name>tempest-₡-1428624522</nova:name>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:24:07</nova:creationTime>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <nova:port uuid="dedad4aa-19bb-4bc6-a08c-d75d3024d553">
Nov 22 09:24:08 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <system>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="serial">e0b05f62-6966-4bf3-aee5-e4d2137a6cfc</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="uuid">e0b05f62-6966-4bf3-aee5-e4d2137a6cfc</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </system>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <os>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </os>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <features>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </features>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk">
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config">
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:08 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:2b:9d:63"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <target dev="tapdedad4aa-19"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/console.log" append="off"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <video>
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </video>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:24:08 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:24:08 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:24:08 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:24:08 compute-0 nova_compute[253661]: </domain>
Nov 22 09:24:08 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.423 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Preparing to wait for external event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.423 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.423 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.423 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.424 253665 DEBUG nova.virt.libvirt.vif [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1428624522',display_name='tempest-₡-1428624522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1428624522',id=79,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-d8zy45mf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:02Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=e0b05f62-6966-4bf3-aee5-e4d2137a6cfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.425 253665 DEBUG nova.network.os_vif_util [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.426 253665 DEBUG nova.network.os_vif_util [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.426 253665 DEBUG os_vif [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.427 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.428 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.433 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdedad4aa-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.433 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdedad4aa-19, col_values=(('external_ids', {'iface-id': 'dedad4aa-19bb-4bc6-a08c-d75d3024d553', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:9d:63', 'vm-uuid': 'e0b05f62-6966-4bf3-aee5-e4d2137a6cfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:08 compute-0 NetworkManager[48920]: <info>  [1763803448.4359] manager: (tapdedad4aa-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.443 253665 INFO os_vif [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19')
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.542 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.542 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.543 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:2b:9d:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.544 253665 INFO nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Using config drive
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.568 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.885 253665 INFO nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Creating config drive at /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.891 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchgyao78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:08 compute-0 nova_compute[253661]: 2025-11-22 09:24:08.979 253665 DEBUG nova.virt.libvirt.driver [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.048 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchgyao78" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.075 253665 DEBUG nova.storage.rbd_utils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.079 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:09 compute-0 ceph-mon[75021]: pgmap v1804: 305 pgs: 305 active+clean; 253 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 22 09:24:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1739102904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 297 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.6 MiB/s wr, 151 op/s
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.940 253665 DEBUG oslo_concurrency.processutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.942 253665 INFO nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deleting local config drive /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc/disk.config because it was imported into RBD.
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.984 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:24:09 compute-0 kernel: tapdedad4aa-19: entered promiscuous mode
Nov 22 09:24:09 compute-0 NetworkManager[48920]: <info>  [1763803449.9918] manager: (tapdedad4aa-19): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Nov 22 09:24:09 compute-0 nova_compute[253661]: 2025-11-22 09:24:09.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:09 compute-0 ovn_controller[152872]: 2025-11-22T09:24:09Z|00791|binding|INFO|Claiming lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 for this chassis.
Nov 22 09:24:09 compute-0 ovn_controller[152872]: 2025-11-22T09:24:09Z|00792|binding|INFO|dedad4aa-19bb-4bc6-a08c-d75d3024d553: Claiming fa:16:3e:2b:9d:63 10.100.0.12
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.002 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:9d:63 10.100.0.12'], port_security=['fa:16:3e:2b:9d:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e0b05f62-6966-4bf3-aee5-e4d2137a6cfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dedad4aa-19bb-4bc6-a08c-d75d3024d553) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.004 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dedad4aa-19bb-4bc6-a08c-d75d3024d553 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.006 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:24:10 compute-0 ovn_controller[152872]: 2025-11-22T09:24:10Z|00793|binding|INFO|Setting lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 ovn-installed in OVS
Nov 22 09:24:10 compute-0 ovn_controller[152872]: 2025-11-22T09:24:10Z|00794|binding|INFO|Setting lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 up in Southbound
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.013 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.021 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[faf3dc8c-ae8d-41d1-852d-57092b5bbec0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.021 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap502d021b-71 in ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.023 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap502d021b-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24dbbf33-bf54-4c5e-9c97-9a6229d6413a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.025 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf5a272-ebeb-40c6-af54-4f4f83b7c304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 systemd-udevd[334597]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:24:10 compute-0 systemd-machined[215941]: New machine qemu-95-instance-0000004f.
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.037 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b461d3a1-14d0-44ee-b6c1-06a7a6de02e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 NetworkManager[48920]: <info>  [1763803450.0491] device (tapdedad4aa-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:24:10 compute-0 NetworkManager[48920]: <info>  [1763803450.0500] device (tapdedad4aa-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:24:10 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004f.
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.064 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81354598-65e8-476b-8798-3e712f114873]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.098 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c45087-9f5e-4764-a670-1f95aac98ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 systemd-udevd[334601]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:24:10 compute-0 NetworkManager[48920]: <info>  [1763803450.1065] manager: (tap502d021b-70): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b41591dc-6222-47ef-804b-e8546c24f53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.144 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[51425147-0bca-4831-87a2-541b6524724e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.147 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec560d8b-5175-4e50-8b09-ad140101144d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 NetworkManager[48920]: <info>  [1763803450.1739] device (tap502d021b-70): carrier: link connected
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.178 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d403be60-6ae4-40da-8abf-bcdf6b6a52c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.200 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dee14bbd-0df6-44b3-9c27-35dfbaf6d3cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 20685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334629, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad4e9e3-842f-4540-8c54-0399d2c61167]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:7112'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630091, 'tstamp': 630091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334630, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94928140-47e2-46f2-be23-a20cfead2b3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 20685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334631, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.244 253665 DEBUG nova.compute.manager [req-9d4adcf1-acbf-4d5d-a5ea-b24ee64ee69e req-fdd36f6b-6579-493b-9485-ad79ac0e80c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.245 253665 DEBUG oslo_concurrency.lockutils [req-9d4adcf1-acbf-4d5d-a5ea-b24ee64ee69e req-fdd36f6b-6579-493b-9485-ad79ac0e80c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.245 253665 DEBUG oslo_concurrency.lockutils [req-9d4adcf1-acbf-4d5d-a5ea-b24ee64ee69e req-fdd36f6b-6579-493b-9485-ad79ac0e80c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.245 253665 DEBUG oslo_concurrency.lockutils [req-9d4adcf1-acbf-4d5d-a5ea-b24ee64ee69e req-fdd36f6b-6579-493b-9485-ad79ac0e80c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.245 253665 DEBUG nova.compute.manager [req-9d4adcf1-acbf-4d5d-a5ea-b24ee64ee69e req-fdd36f6b-6579-493b-9485-ad79ac0e80c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Processing event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.276 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[628d22ce-3061-464a-bd37-be3877da17df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce27e50-4c12-4706-be7c-1c0098cb9ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.332 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.332 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.333 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:10 compute-0 kernel: tap502d021b-70: entered promiscuous mode
Nov 22 09:24:10 compute-0 NetworkManager[48920]: <info>  [1763803450.3356] manager: (tap502d021b-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.340 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 ovn_controller[152872]: 2025-11-22T09:24:10Z|00795|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.346 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/502d021b-7c33-4c22-8cd9-32a451fdf556.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/502d021b-7c33-4c22-8cd9-32a451fdf556.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f4b87a-7895-4cf8-85ee-5218639b6715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.348 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/502d021b-7c33-4c22-8cd9-32a451fdf556.pid.haproxy
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:24:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:10.349 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'env', 'PROCESS_TAG=haproxy-502d021b-7c33-4c22-8cd9-32a451fdf556', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/502d021b-7c33-4c22-8cd9-32a451fdf556.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:10 compute-0 ceph-mon[75021]: pgmap v1805: 305 pgs: 305 active+clean; 297 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.6 MiB/s wr, 151 op/s
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.651 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.652 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803450.6509225, e0b05f62-6966-4bf3-aee5-e4d2137a6cfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.652 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] VM Started (Lifecycle Event)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.665 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.669 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.671 253665 INFO nova.virt.libvirt.driver [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance spawned successfully.
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.672 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.674 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.694 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.694 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803450.6510549, e0b05f62-6966-4bf3-aee5-e4d2137a6cfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.695 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] VM Paused (Lifecycle Event)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.700 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.700 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.701 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.701 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.701 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.702 253665 DEBUG nova.virt.libvirt.driver [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.710 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803450.6548018, e0b05f62-6966-4bf3-aee5-e4d2137a6cfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.714 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] VM Resumed (Lifecycle Event)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.730 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.734 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.772 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.788 253665 INFO nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 8.21 seconds to spawn the instance on the hypervisor.
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.788 253665 DEBUG nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:10 compute-0 podman[334703]: 2025-11-22 09:24:10.710720904 +0000 UTC m=+0.022450083 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.848 253665 INFO nova.compute.manager [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 9.20 seconds to build instance.
Nov 22 09:24:10 compute-0 podman[334703]: 2025-11-22 09:24:10.853484916 +0000 UTC m=+0.165214065 container create dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.867 253665 DEBUG oslo_concurrency.lockutils [None req-5c597338-1311-4e4c-8e21-5f7e850015ef 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.867 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.867 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:24:10 compute-0 nova_compute[253661]: 2025-11-22 09:24:10.867 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:10 compute-0 systemd[1]: Started libpod-conmon-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84.scope.
Nov 22 09:24:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a035ba10fc4c82f058aee7a6871d6f3764a5e61791891e4008e90692352fd688/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:10 compute-0 podman[334703]: 2025-11-22 09:24:10.981755508 +0000 UTC m=+0.293484667 container init dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:24:10 compute-0 podman[334703]: 2025-11-22 09:24:10.987502566 +0000 UTC m=+0.299231725 container start dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:24:11 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : New worker (334724) forked
Nov 22 09:24:11 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : Loading success.
Nov 22 09:24:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 297 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 4.5 MiB/s wr, 99 op/s
Nov 22 09:24:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:24:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535095547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:24:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:24:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535095547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:24:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.605 253665 DEBUG nova.compute.manager [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.606 253665 DEBUG oslo_concurrency.lockutils [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.606 253665 DEBUG oslo_concurrency.lockutils [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.606 253665 DEBUG oslo_concurrency.lockutils [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.607 253665 DEBUG nova.compute.manager [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] No waiting events found dispatching network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:12 compute-0 nova_compute[253661]: 2025-11-22 09:24:12.607 253665 WARNING nova.compute.manager [req-54e4fdc6-0ecb-40b6-8abc-8651c6e52bc2 req-ad837bf4-5ac3-4741-b7c5-a3cf97c87616 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received unexpected event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 for instance with vm_state active and task_state None.
Nov 22 09:24:12 compute-0 ceph-mon[75021]: pgmap v1806: 305 pgs: 305 active+clean; 297 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 4.5 MiB/s wr, 99 op/s
Nov 22 09:24:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2535095547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:24:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2535095547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:24:13 compute-0 kernel: tapedd81944-57 (unregistering): left promiscuous mode
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.002 253665 INFO nova.virt.libvirt.driver [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance shutdown successfully after 25 seconds.
Nov 22 09:24:13 compute-0 NetworkManager[48920]: <info>  [1763803453.0045] device (tapedd81944-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:24:13 compute-0 ovn_controller[152872]: 2025-11-22T09:24:13Z|00796|binding|INFO|Releasing lport edd81944-578b-4533-9db7-f17a3fb84211 from this chassis (sb_readonly=0)
Nov 22 09:24:13 compute-0 ovn_controller[152872]: 2025-11-22T09:24:13Z|00797|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 down in Southbound
Nov 22 09:24:13 compute-0 ovn_controller[152872]: 2025-11-22T09:24:13Z|00798|binding|INFO|Removing iface tapedd81944-57 ovn-installed in OVS
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.075 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b6:3f 10.100.0.7'], port_security=['fa:16:3e:70:b6:3f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=edd81944-578b-4533-9db7-f17a3fb84211) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.077 162862 INFO neutron.agent.ovn.metadata.agent [-] Port edd81944-578b-4533-9db7-f17a3fb84211 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.079 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.107 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25c6035f-7bb3-4409-9d55-4e5210650816]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 22 09:24:13 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004d.scope: Consumed 15.554s CPU time.
Nov 22 09:24:13 compute-0 systemd-machined[215941]: Machine qemu-93-instance-0000004d terminated.
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.146 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a65663-1a74-4f80-9d87-a2e79604379d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.151 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[090cf765-a568-4a73-8bf2-498475d9bbbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.184 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[db55f118-317d-49f4-bc50-81455dc471a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.267 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.268 253665 DEBUG nova.objects.instance [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'numa_topology' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.281 253665 DEBUG nova.compute.manager [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.332 253665 DEBUG oslo_concurrency.lockutils [None req-0329a8fe-a16c-4a33-b0e5-1d393e25f853 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.333 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.334 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] During sync_power_state the instance has a pending task (powering-off). Skip.
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.334 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d80169e4-912b-4e41-93ac-60bb9bc6f402]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 30845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334744, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.386 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[caacc3ea-5359-468f-8f25-ad4df1ae5f6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334756, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334756, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.389 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.399 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.400 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.400 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:13.401 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:13 compute-0 nova_compute[253661]: 2025-11-22 09:24:13.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 319 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 168 op/s
Nov 22 09:24:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:14.431 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:15 compute-0 ceph-mon[75021]: pgmap v1807: 305 pgs: 305 active+clean; 319 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 168 op/s
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.105 253665 DEBUG nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.106 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.107 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.107 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.107 253665 DEBUG nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.108 253665 WARNING nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.108 253665 DEBUG nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.108 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.109 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.109 253665 DEBUG oslo_concurrency.lockutils [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.110 253665 DEBUG nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.110 253665 WARNING nova.compute.manager [req-19cf0f3f-f563-40a9-98fb-a5c8e3e90ecb req-4dfc74f1-32d9-48e6-9fc0-2412cdce405e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 319 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 168 op/s
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.662 253665 INFO nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Rebuilding instance
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.928 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.945 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:15 compute-0 nova_compute[253661]: 2025-11-22 09:24:15.998 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_requests' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.009 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.019 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.028 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'migration_context' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.036 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.039 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance already shutdown.
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.046 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.050 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.051 253665 DEBUG nova.virt.libvirt.vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:23:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:15Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.052 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.053 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.053 253665 DEBUG os_vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.055 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.055 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd81944-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:16 compute-0 nova_compute[253661]: 2025-11-22 09:24:16.061 253665 INFO os_vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57')
Nov 22 09:24:17 compute-0 ceph-mon[75021]: pgmap v1808: 305 pgs: 305 active+clean; 319 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 168 op/s
Nov 22 09:24:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 323 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Nov 22 09:24:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:18 compute-0 nova_compute[253661]: 2025-11-22 09:24:18.978 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:18 compute-0 nova_compute[253661]: 2025-11-22 09:24:18.979 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.008 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.073 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.074 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.082 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.082 253665 INFO nova.compute.claims [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.296 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:19 compute-0 ceph-mon[75021]: pgmap v1809: 305 pgs: 305 active+clean; 323 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Nov 22 09:24:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 290 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 178 op/s
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.632 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deleting instance files /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.633 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deletion of /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del complete
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.799 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.800 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating image(s)
Nov 22 09:24:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864308709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.831 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.858 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.881 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.886 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.933 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.941 253665 DEBUG nova.compute.provider_tree [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.960 253665 DEBUG nova.scheduler.client.report [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.973 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.974 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.974 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:19 compute-0 nova_compute[253661]: 2025-11-22 09:24:19.975 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.000 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.005 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.056 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.057 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.100 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.101 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.121 253665 INFO nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.136 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.213 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.215 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.215 253665 INFO nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Creating image(s)
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.244 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.283 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.315 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.321 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.368 253665 DEBUG nova.policy [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.410 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.411 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.412 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.412 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.444 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.450 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 99df8fe4-a61a-40d5-b089-90de5d98050f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1864308709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:20 compute-0 nova_compute[253661]: 2025-11-22 09:24:20.876 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Successfully created port: 56dc3604-5308-40cd-a3a8-f768a68f4ef6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.070 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.121 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.194 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] resizing rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:24:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 290 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Nov 22 09:24:21 compute-0 ceph-mon[75021]: pgmap v1810: 305 pgs: 305 active+clean; 290 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 178 op/s
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.850 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Successfully updated port: 56dc3604-5308-40cd-a3a8-f768a68f4ef6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.864 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.865 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.865 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.955 253665 DEBUG nova.compute.manager [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-changed-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.956 253665 DEBUG nova.compute.manager [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Refreshing instance network info cache due to event network-changed-56dc3604-5308-40cd-a3a8-f768a68f4ef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:21 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.956 253665 DEBUG oslo_concurrency.lockutils [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:21.999 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.501 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 99df8fe4-a61a-40d5-b089-90de5d98050f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.583 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.755 253665 DEBUG nova.network.neutron [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Updating instance_info_cache with network_info: [{"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.764 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.765 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Ensure instance console log exists: /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.766 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.766 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.766 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.769 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start _get_guest_xml network_info=[{"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.772 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.773 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance network_info: |[{"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.773 253665 DEBUG oslo_concurrency.lockutils [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.773 253665 DEBUG nova.network.neutron [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Refreshing network info cache for port 56dc3604-5308-40cd-a3a8-f768a68f4ef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.776 253665 WARNING nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.780 253665 DEBUG nova.virt.libvirt.host [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.781 253665 DEBUG nova.virt.libvirt.host [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.789 253665 DEBUG nova.virt.libvirt.host [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.790 253665 DEBUG nova.virt.libvirt.host [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.791 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.791 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.792 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.792 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.793 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.794 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.794 253665 DEBUG nova.virt.hardware [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.794 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:22 compute-0 nova_compute[253661]: 2025-11-22 09:24:22.810 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:23 compute-0 ceph-mon[75021]: pgmap v1811: 305 pgs: 305 active+clean; 290 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Nov 22 09:24:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999593834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.290 253665 DEBUG nova.objects.instance [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 99df8fe4-a61a-40d5-b089-90de5d98050f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.311 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.311 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Ensure instance console log exists: /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.312 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.312 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.312 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.315 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start _get_guest_xml network_info=[{"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.318 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.345 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.351 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.398 253665 WARNING nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.406 253665 DEBUG nova.virt.libvirt.host [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.407 253665 DEBUG nova.virt.libvirt.host [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.412 253665 DEBUG nova.virt.libvirt.host [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.412 253665 DEBUG nova.virt.libvirt.host [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.413 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.413 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.413 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.414 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.414 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.414 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.414 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.415 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.415 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.415 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.416 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.416 253665 DEBUG nova.virt.hardware [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.420 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 313 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 173 op/s
Nov 22 09:24:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398774229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.833 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.835 253665 DEBUG nova.virt.libvirt.vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:23:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-Ser
verActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:19Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.836 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.837 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.840 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <uuid>9cef4b12-b28c-47df-9af2-a0bf9934e4d7</uuid>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <name>instance-0000004d</name>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:name>tempest-tempest.common.compute-instance-566833626</nova:name>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:24:22</nova:creationTime>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:user uuid="7e5709393702478dbf0bd566dc94d7fe">tempest-ServerActionsTestOtherA-1527475006-project-member</nova:user>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:project uuid="9b06c711e582499ab500917d85e27e3c">tempest-ServerActionsTestOtherA-1527475006</nova:project>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <nova:port uuid="edd81944-578b-4533-9db7-f17a3fb84211">
Nov 22 09:24:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="serial">9cef4b12-b28c-47df-9af2-a0bf9934e4d7</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="uuid">9cef4b12-b28c-47df-9af2-a0bf9934e4d7</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk">
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config">
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:70:b6:3f"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <target dev="tapedd81944-57"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/console.log" append="off"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:24:23 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:24:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:24:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:24:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:24:23 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.841 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Preparing to wait for external event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.842 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.842 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.842 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.843 253665 DEBUG nova.virt.libvirt.vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:23:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:19Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.843 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.844 253665 DEBUG nova.network.os_vif_util [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.844 253665 DEBUG os_vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.845 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.846 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.849 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedd81944-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.850 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedd81944-57, col_values=(('external_ids', {'iface-id': 'edd81944-578b-4533-9db7-f17a3fb84211', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:b6:3f', 'vm-uuid': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:23 compute-0 NetworkManager[48920]: <info>  [1763803463.8534] manager: (tapedd81944-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.859 253665 INFO os_vif [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57')
Nov 22 09:24:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710173180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.912 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.940 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:23 compute-0 nova_compute[253661]: 2025-11-22 09:24:23.944 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.039 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.040 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.040 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No VIF found with MAC fa:16:3e:70:b6:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.041 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Using config drive
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.068 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.100 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.128 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'keypairs' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.664 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Creating config drive at /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.670 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1o3k5ai6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.720 253665 DEBUG nova.network.neutron [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Updated VIF entry in instance network info cache for port 56dc3604-5308-40cd-a3a8-f768a68f4ef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.722 253665 DEBUG nova.network.neutron [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Updating instance_info_cache with network_info: [{"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.739 253665 DEBUG oslo_concurrency.lockutils [req-09f32962-f491-4c86-b679-3ac5039a2471 req-26ff4d50-dff4-453f-9d83-cb587c09ce7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-99df8fe4-a61a-40d5-b089-90de5d98050f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716010105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.817 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.873s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.818 253665 DEBUG nova.virt.libvirt.vif [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2075369',display_name='tempest-ServersTestJSON-server-2075369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2075369',id=80,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-gztn8gsw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:20Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=99df8fe4-a61a-40d5-b089-90de5d98050f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.819 253665 DEBUG nova.network.os_vif_util [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.820 253665 DEBUG nova.network.os_vif_util [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.821 253665 DEBUG nova.objects.instance [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99df8fe4-a61a-40d5-b089-90de5d98050f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.824 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1o3k5ai6" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/999593834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/398774229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1710173180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.917 253665 DEBUG nova.storage.rbd_utils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.922 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.981 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <uuid>99df8fe4-a61a-40d5-b089-90de5d98050f</uuid>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <name>instance-00000050</name>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-2075369</nova:name>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:24:23</nova:creationTime>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <nova:port uuid="56dc3604-5308-40cd-a3a8-f768a68f4ef6">
Nov 22 09:24:24 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <system>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="serial">99df8fe4-a61a-40d5-b089-90de5d98050f</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="uuid">99df8fe4-a61a-40d5-b089-90de5d98050f</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </system>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <os>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </os>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <features>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </features>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/99df8fe4-a61a-40d5-b089-90de5d98050f_disk">
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config">
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:24 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:26:03:d2"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <target dev="tap56dc3604-53"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/console.log" append="off"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <video>
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </video>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:24:24 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:24:24 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:24:24 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:24:24 compute-0 nova_compute[253661]: </domain>
Nov 22 09:24:24 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.983 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Preparing to wait for external event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.983 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.983 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.983 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.985 253665 DEBUG nova.virt.libvirt.vif [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2075369',display_name='tempest-ServersTestJSON-server-2075369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2075369',id=80,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-gztn8gsw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:20Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=99df8fe4-a61a-40d5-b089-90de5d98050f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.985 253665 DEBUG nova.network.os_vif_util [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.986 253665 DEBUG nova.network.os_vif_util [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.986 253665 DEBUG os_vif [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.988 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.989 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.992 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56dc3604-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56dc3604-53, col_values=(('external_ids', {'iface-id': '56dc3604-5308-40cd-a3a8-f768a68f4ef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:03:d2', 'vm-uuid': '99df8fe4-a61a-40d5-b089-90de5d98050f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:24 compute-0 NetworkManager[48920]: <info>  [1763803464.9960] manager: (tap56dc3604-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Nov 22 09:24:24 compute-0 nova_compute[253661]: 2025-11-22 09:24:24.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.007 253665 INFO os_vif [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53')
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.325 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.325 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.326 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:26:03:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.326 253665 INFO nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Using config drive
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.352 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 339 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 150 op/s
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.696 253665 INFO nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Creating config drive at /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.701 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgho9j_dg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.849 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgho9j_dg" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:25 compute-0 kernel: tap15bf0e02-e0 (unregistering): left promiscuous mode
Nov 22 09:24:25 compute-0 ceph-mon[75021]: pgmap v1812: 305 pgs: 305 active+clean; 313 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 173 op/s
Nov 22 09:24:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/716010105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:25 compute-0 NetworkManager[48920]: <info>  [1763803465.8939] device (tap15bf0e02-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.894 253665 DEBUG nova.storage.rbd_utils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.905 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config 99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:25 compute-0 ovn_controller[152872]: 2025-11-22T09:24:25Z|00799|binding|INFO|Releasing lport 15bf0e02-e093-4f45-995f-abb925d1cf71 from this chassis (sb_readonly=0)
Nov 22 09:24:25 compute-0 ovn_controller[152872]: 2025-11-22T09:24:25Z|00800|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 down in Southbound
Nov 22 09:24:25 compute-0 ovn_controller[152872]: 2025-11-22T09:24:25Z|00801|binding|INFO|Removing iface tap15bf0e02-e0 ovn-installed in OVS
Nov 22 09:24:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:25.917 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:25.919 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 unbound from our chassis
Nov 22 09:24:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:25.920 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:24:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:25.921 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a403d1-c451-4c54-a462-e12f3df13380]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:25 compute-0 nova_compute[253661]: 2025-11-22 09:24:25.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:25 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 22 09:24:25 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004e.scope: Consumed 14.595s CPU time.
Nov 22 09:24:25 compute-0 systemd-machined[215941]: Machine qemu-94-instance-0000004e terminated.
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.152 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance shutdown successfully after 26 seconds.
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.158 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance destroyed successfully.
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.158 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'numa_topology' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.174 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Attempting rescue
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.176 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.180 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.180 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating image(s)
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.205 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.209 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.239 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.266 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.272 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.321 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.321 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.390 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.391 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.391 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.392 253665 DEBUG oslo_concurrency.lockutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.417 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.422 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.480 253665 DEBUG nova.compute.manager [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.481 253665 DEBUG oslo_concurrency.lockutils [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.482 253665 DEBUG oslo_concurrency.lockutils [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.483 253665 DEBUG oslo_concurrency.lockutils [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.483 253665 DEBUG nova.compute.manager [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.483 253665 WARNING nova.compute.manager [req-dc1831ab-1142-4a23-bad6-c4cc39a45b35 req-69454302-231a-49d9-86da-144a5523d83f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state active and task_state rescuing.
Nov 22 09:24:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.720 253665 DEBUG oslo_concurrency.processutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config 9cef4b12-b28c-47df-9af2-a0bf9934e4d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.721 253665 INFO nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deleting local config drive /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7/disk.config because it was imported into RBD.
Nov 22 09:24:26 compute-0 systemd-udevd[335363]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:24:26 compute-0 NetworkManager[48920]: <info>  [1763803466.8033] manager: (tapedd81944-57): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Nov 22 09:24:26 compute-0 kernel: tapedd81944-57: entered promiscuous mode
Nov 22 09:24:26 compute-0 ovn_controller[152872]: 2025-11-22T09:24:26Z|00802|binding|INFO|Claiming lport edd81944-578b-4533-9db7-f17a3fb84211 for this chassis.
Nov 22 09:24:26 compute-0 ovn_controller[152872]: 2025-11-22T09:24:26Z|00803|binding|INFO|edd81944-578b-4533-9db7-f17a3fb84211: Claiming fa:16:3e:70:b6:3f 10.100.0.7
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:26 compute-0 NetworkManager[48920]: <info>  [1763803466.8144] device (tapedd81944-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:24:26 compute-0 NetworkManager[48920]: <info>  [1763803466.8156] device (tapedd81944-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.818 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b6:3f 10.100.0.7'], port_security=['fa:16:3e:70:b6:3f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=edd81944-578b-4533-9db7-f17a3fb84211) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.820 162862 INFO neutron.agent.ovn.metadata.agent [-] Port edd81944-578b-4533-9db7-f17a3fb84211 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 bound to our chassis
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.822 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:24:26 compute-0 ovn_controller[152872]: 2025-11-22T09:24:26Z|00804|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 ovn-installed in OVS
Nov 22 09:24:26 compute-0 ovn_controller[152872]: 2025-11-22T09:24:26Z|00805|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 up in Southbound
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.846 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5842628-54f9-492f-909c-6137e9a20fa3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:26 compute-0 systemd-machined[215941]: New machine qemu-96-instance-0000004d.
Nov 22 09:24:26 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84c7482b-ed2f-4f02-bc55-aa8cd08bd303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.893 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb49412-1f26-4276-bd39-42d3704daed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.932 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9f79b3-852f-46da-8213-3ec2bac0a439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.956 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26c820b9-054b-4df9-83be-4801e63a5ad1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335525, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.984 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d03552e-f15d-465e-a579-56f172ff12a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335527, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335527, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:26 compute-0 nova_compute[253661]: 2025-11-22 09:24:26.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.998 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.999 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.999 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:26.999 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:27 compute-0 ceph-mon[75021]: pgmap v1813: 305 pgs: 305 active+clean; 339 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 150 op/s
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.246 253665 DEBUG oslo_concurrency.processutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config 99df8fe4-a61a-40d5-b089-90de5d98050f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.247 253665 INFO nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deleting local config drive /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f/disk.config because it was imported into RBD.
Nov 22 09:24:27 compute-0 NetworkManager[48920]: <info>  [1763803467.3245] manager: (tap56dc3604-53): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Nov 22 09:24:27 compute-0 kernel: tap56dc3604-53: entered promiscuous mode
Nov 22 09:24:27 compute-0 NetworkManager[48920]: <info>  [1763803467.3411] device (tap56dc3604-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:24:27 compute-0 NetworkManager[48920]: <info>  [1763803467.3420] device (tap56dc3604-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:27 compute-0 ovn_controller[152872]: 2025-11-22T09:24:27Z|00806|binding|INFO|Claiming lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 for this chassis.
Nov 22 09:24:27 compute-0 ovn_controller[152872]: 2025-11-22T09:24:27Z|00807|binding|INFO|56dc3604-5308-40cd-a3a8-f768a68f4ef6: Claiming fa:16:3e:26:03:d2 10.100.0.14
Nov 22 09:24:27 compute-0 ovn_controller[152872]: 2025-11-22T09:24:27Z|00808|binding|INFO|Setting lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 ovn-installed in OVS
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:27 compute-0 ovn_controller[152872]: 2025-11-22T09:24:27Z|00809|binding|INFO|Setting lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 up in Southbound
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.384 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:03:d2 10.100.0.14'], port_security=['fa:16:3e:26:03:d2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '99df8fe4-a61a-40d5-b089-90de5d98050f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=56dc3604-5308-40cd-a3a8-f768a68f4ef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.386 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 56dc3604-5308-40cd-a3a8-f768a68f4ef6 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.388 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:24:27 compute-0 systemd-machined[215941]: New machine qemu-97-instance-00000050.
Nov 22 09:24:27 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000050.
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.408 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d627df-e1dd-4cf0-a295-a1ca5a711c83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.450 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[33c4377c-6596-442f-b314-fb7eac473acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.454 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf21980-1d72-4946-9cf6-3e7c0f57b12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.488 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[88568306-a738-40fe-95ee-f110eb9859f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fa847b-8a1f-4060-90da-89dfb92fd775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335592, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 344 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 4.0 MiB/s wr, 105 op/s
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b740afe-6fb7-47fc-8361-7dcf294c5358]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335594, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335594, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.545 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.553 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.554 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.554 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.647 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.647 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803467.6429582, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.648 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Started (Lifecycle Event)
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.667 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.679 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803467.6451757, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.679 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Paused (Lifecycle Event)
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.697 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.703 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.725 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.885 253665 DEBUG nova.compute.manager [req-ebbf89e8-5872-44f4-9dd8-72aa51ea662f req-a9e6c9f0-b98e-414d-aec8-8bf6306ac70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.885 253665 DEBUG oslo_concurrency.lockutils [req-ebbf89e8-5872-44f4-9dd8-72aa51ea662f req-a9e6c9f0-b98e-414d-aec8-8bf6306ac70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.886 253665 DEBUG oslo_concurrency.lockutils [req-ebbf89e8-5872-44f4-9dd8-72aa51ea662f req-a9e6c9f0-b98e-414d-aec8-8bf6306ac70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.886 253665 DEBUG oslo_concurrency.lockutils [req-ebbf89e8-5872-44f4-9dd8-72aa51ea662f req-a9e6c9f0-b98e-414d-aec8-8bf6306ac70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:27 compute-0 nova_compute[253661]: 2025-11-22 09:24:27.886 253665 DEBUG nova.compute.manager [req-ebbf89e8-5872-44f4-9dd8-72aa51ea662f req-a9e6c9f0-b98e-414d-aec8-8bf6306ac70b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Processing event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.966 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.967 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:28 compute-0 ovn_controller[152872]: 2025-11-22T09:24:28Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:9d:63 10.100.0.12
Nov 22 09:24:28 compute-0 ovn_controller[152872]: 2025-11-22T09:24:28Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:9d:63 10.100.0.12
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.257 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.288 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.866s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.289 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'migration_context' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.299 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.300 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start _get_guest_xml network_info=[{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "vif_mac": "fa:16:3e:86:73:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.301 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'resources' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.315 253665 WARNING nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.323 253665 DEBUG nova.virt.libvirt.host [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.323 253665 DEBUG nova.virt.libvirt.host [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.327 253665 DEBUG nova.virt.libvirt.host [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.328 253665 DEBUG nova.virt.libvirt.host [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.329 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.329 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.329 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.329 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.330 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.331 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.331 253665 DEBUG nova.virt.hardware [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.331 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.345 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.397 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.398 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.398 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.399 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.533 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.534 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.534 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.534 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.535 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.535 253665 WARNING nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state active and task_state rescuing.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.535 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.535 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.536 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.536 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.536 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Processing event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.536 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.537 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.537 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.537 253665 DEBUG oslo_concurrency.lockutils [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.537 253665 DEBUG nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.537 253665 WARNING nova.compute.manager [req-f0712822-0821-4527-8b62-7a31ec45eabc req-e3e13519-e008-43b1-b9b3-f4efcb604cb1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state rebuild_spawning.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.538 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.543 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803468.5429327, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.544 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Resumed (Lifecycle Event)
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.551 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.555 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance spawned successfully.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.556 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.567 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.573 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.584 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.584 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.585 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.586 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.586 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.586 253665 DEBUG nova.virt.libvirt.driver [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.596 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.652 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.697 253665 INFO nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] bringing vm to original state: 'stopped'
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.762 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.762 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.763 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.767 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:24:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525037607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.823 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803468.8231688, 99df8fe4-a61a-40d5-b089-90de5d98050f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.824 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] VM Started (Lifecycle Event)
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.826 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.829 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.836 253665 INFO nova.virt.libvirt.driver [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance spawned successfully.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.836 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.843 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.844 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.889 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.896 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.900 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.901 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.901 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.901 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.937 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.937 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803468.8233488, 99df8fe4-a61a-40d5-b089-90de5d98050f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.938 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] VM Paused (Lifecycle Event)
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.956 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.959 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803468.833712, 99df8fe4-a61a-40d5-b089-90de5d98050f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.959 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] VM Resumed (Lifecycle Event)
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.995 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:28 compute-0 nova_compute[253661]: 2025-11-22 09:24:28.999 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.022 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.040 253665 INFO nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 8.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.041 253665 DEBUG nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.118 253665 INFO nova.compute.manager [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 10.06 seconds to build instance.
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.135 253665 DEBUG oslo_concurrency.lockutils [None req-a3e79bd4-14db-440b-b485-44b45de9e601 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63486296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.321 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.322 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:29 compute-0 kernel: tapedd81944-57 (unregistering): left promiscuous mode
Nov 22 09:24:29 compute-0 NetworkManager[48920]: <info>  [1763803469.4680] device (tapedd81944-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:24:29 compute-0 ceph-mon[75021]: pgmap v1814: 305 pgs: 305 active+clean; 344 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 4.0 MiB/s wr, 105 op/s
Nov 22 09:24:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2525037607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/63486296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:29 compute-0 ovn_controller[152872]: 2025-11-22T09:24:29Z|00810|binding|INFO|Releasing lport edd81944-578b-4533-9db7-f17a3fb84211 from this chassis (sb_readonly=0)
Nov 22 09:24:29 compute-0 ovn_controller[152872]: 2025-11-22T09:24:29Z|00811|binding|INFO|Setting lport edd81944-578b-4533-9db7-f17a3fb84211 down in Southbound
Nov 22 09:24:29 compute-0 ovn_controller[152872]: 2025-11-22T09:24:29Z|00812|binding|INFO|Removing iface tapedd81944-57 ovn-installed in OVS
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.533 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b6:3f 10.100.0.7'], port_security=['fa:16:3e:70:b6:3f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9cef4b12-b28c-47df-9af2-a0bf9934e4d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=edd81944-578b-4533-9db7-f17a3fb84211) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.535 162862 INFO neutron.agent.ovn.metadata.agent [-] Port edd81944-578b-4533-9db7-f17a3fb84211 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.538 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:24:29 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 22 09:24:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 367 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 5.8 MiB/s wr, 155 op/s
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:29 compute-0 systemd-machined[215941]: Machine qemu-96-instance-0000004d terminated.
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c830dc9c-2ed7-4883-ba22-8185182321b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.611 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.612 253665 DEBUG nova.compute.manager [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.614 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d59a1e1a-29bb-425d-bfba-738c79aa9d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.618 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[09431476-e584-4e77-a00b-9458f11eb395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.657 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9f8538-878a-4c7d-9396-de67268f5d33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.680 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.682 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6a995e-b933-464b-9ed6-fa3d56c8cca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335737, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acfc6921-b6f5-4785-a4c3-e2dbcd1add20]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335738, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335738, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.706 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.708 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.709 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.709 253665 DEBUG nova.objects.instance [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.717 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.718 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.719 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:29.719 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.787 253665 DEBUG oslo_concurrency.lockutils [None req-02b80442-72cf-4c73-a1a1-adece9015b30 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661781059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.886 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.888 253665 DEBUG nova.virt.libvirt.vif [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-533420499',display_name='tempest-ServerRescueTestJSONUnderV235-server-533420499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-533420499',id=78,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:23:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e78196ec949a45cf803d3e585b603558',ramdisk_id='',reservation_id='r-v4cum4j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1716369832',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1716369832-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:23:55Z,user_data=None,user_id='3f7dbcc13af740b491f0498f4ddec69d',uuid=a207d8c4-4fce-4fe6-9ba5-548a92e757ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "vif_mac": "fa:16:3e:86:73:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.889 253665 DEBUG nova.network.os_vif_util [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converting VIF {"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "vif_mac": "fa:16:3e:86:73:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.890 253665 DEBUG nova.network.os_vif_util [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.892 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'pci_devices' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.904 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <uuid>a207d8c4-4fce-4fe6-9ba5-548a92e757ac</uuid>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <name>instance-0000004e</name>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-533420499</nova:name>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:24:28</nova:creationTime>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:user uuid="3f7dbcc13af740b491f0498f4ddec69d">tempest-ServerRescueTestJSONUnderV235-1716369832-project-member</nova:user>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:project uuid="e78196ec949a45cf803d3e585b603558">tempest-ServerRescueTestJSONUnderV235-1716369832</nova:project>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <nova:port uuid="15bf0e02-e093-4f45-995f-abb925d1cf71">
Nov 22 09:24:29 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <system>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="serial">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="uuid">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </system>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <os>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </os>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <features>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </features>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <target dev="vdb" bus="virtio"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:86:73:cb"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <target dev="tap15bf0e02-e0"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/console.log" append="off"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <video>
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </video>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:24:29 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:24:29 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:24:29 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:24:29 compute-0 nova_compute[253661]: </domain>
Nov 22 09:24:29 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.915 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance destroyed successfully.
Nov 22 09:24:29 compute-0 nova_compute[253661]: 2025-11-22 09:24:29.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:30 compute-0 rsyslogd[1005]: imjournal: 11394 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.002 253665 DEBUG nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.002 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.003 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.003 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.004 253665 DEBUG nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.004 253665 WARNING nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received unexpected event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with vm_state active and task_state None.
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.013 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No VIF found with MAC fa:16:3e:86:73:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.014 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Using config drive
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.046 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.068 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.099 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'keypairs' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.295 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.313 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.314 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.315 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.315 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:30 compute-0 ceph-mon[75021]: pgmap v1815: 305 pgs: 305 active+clean; 367 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 5.8 MiB/s wr, 155 op/s
Nov 22 09:24:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3661781059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.667 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating config drive at /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.674 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgo4oo9ae execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.843 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgo4oo9ae" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.875 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:30 compute-0 nova_compute[253661]: 2025-11-22 09:24:30.879 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.365 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 WARNING nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 WARNING nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.400 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.401 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deleting local config drive /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue because it was imported into RBD.
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.442 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.442 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.445 253665 INFO nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Terminating instance
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.446 253665 DEBUG nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:24:31 compute-0 kernel: tap15bf0e02-e0: entered promiscuous mode
Nov 22 09:24:31 compute-0 systemd-udevd[335719]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:24:31 compute-0 NetworkManager[48920]: <info>  [1763803471.4729] manager: (tap15bf0e02-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00813|binding|INFO|Claiming lport 15bf0e02-e093-4f45-995f-abb925d1cf71 for this chassis.
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00814|binding|INFO|15bf0e02-e093-4f45-995f-abb925d1cf71: Claiming fa:16:3e:86:73:cb 10.100.0.14
Nov 22 09:24:31 compute-0 NetworkManager[48920]: <info>  [1763803471.4851] device (tap15bf0e02-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:24:31 compute-0 NetworkManager[48920]: <info>  [1763803471.4860] device (tap15bf0e02-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.488 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.489 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 bound to our chassis
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.490 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[220d0148-04f7-44b3-bed6-8a86e62c76b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00815|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 ovn-installed in OVS
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00816|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 up in Southbound
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 systemd-machined[215941]: New machine qemu-98-instance-0000004e.
Nov 22 09:24:31 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-0000004e.
Nov 22 09:24:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 367 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 22 09:24:31 compute-0 kernel: tap56dc3604-53 (unregistering): left promiscuous mode
Nov 22 09:24:31 compute-0 NetworkManager[48920]: <info>  [1763803471.6453] device (tap56dc3604-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00817|binding|INFO|Releasing lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 from this chassis (sb_readonly=0)
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00818|binding|INFO|Setting lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 down in Southbound
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_controller[152872]: 2025-11-22T09:24:31Z|00819|binding|INFO|Removing iface tap56dc3604-53 ovn-installed in OVS
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.659 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:03:d2 10.100.0.14'], port_security=['fa:16:3e:26:03:d2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '99df8fe4-a61a-40d5-b089-90de5d98050f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=56dc3604-5308-40cd-a3a8-f768a68f4ef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.660 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 56dc3604-5308-40cd-a3a8-f768a68f4ef6 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.662 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.684 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abca885b-a5f3-4036-b8c9-d689e9ad616a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 22 09:24:31 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000050.scope: Consumed 3.626s CPU time.
Nov 22 09:24:31 compute-0 systemd-machined[215941]: Machine qemu-97-instance-00000050 terminated.
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.718 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8afc8a-38af-4884-a4a9-99330cfc5d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.723 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d06e1a62-0b15-4caa-af82-03e6cec936d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.762 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecf6f3e-5a2c-4c76-9b7f-4e0ceb10bcf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5d89e8-969a-4157-b875-37dbf94beef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335854, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5de39eee-95f9-45b8-958a-c6bd3e278a2f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335855, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335855, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.814 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.822 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.888 253665 INFO nova.virt.libvirt.driver [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance destroyed successfully.
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.889 253665 DEBUG nova.objects.instance [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 99df8fe4-a61a-40d5-b089-90de5d98050f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.905 253665 DEBUG nova.virt.libvirt.vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2075369',display_name='tempest-ServersTestJSON-server-2075369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2075369',id=80,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-gztn8gsw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:29Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=99df8fe4-a61a-40d5-b089-90de5d98050f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.906 253665 DEBUG nova.network.os_vif_util [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.908 253665 DEBUG nova.network.os_vif_util [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.909 253665 DEBUG os_vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.911 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56dc3604-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:31 compute-0 nova_compute[253661]: 2025-11-22 09:24:31.967 253665 INFO os_vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53')
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.062 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for a207d8c4-4fce-4fe6-9ba5-548a92e757ac due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.063 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803472.0626485, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.063 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Resumed (Lifecycle Event)
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.067 253665 DEBUG nova.compute.manager [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.097 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.103 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803472.0638218, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Started (Lifecycle Event)
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.146 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.151 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:32 compute-0 podman[335935]: 2025-11-22 09:24:32.381571491 +0000 UTC m=+0.065478144 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:24:32 compute-0 podman[335934]: 2025-11-22 09:24:32.400282673 +0000 UTC m=+0.084184786 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.436 253665 INFO nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Terminating instance
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.436 253665 DEBUG nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.447 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.447 253665 DEBUG nova.objects.instance [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.460 253665 DEBUG nova.virt.libvirt.vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:29Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.460 253665 DEBUG nova.network.os_vif_util [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.461 253665 DEBUG nova.network.os_vif_util [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.461 253665 DEBUG os_vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.463 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd81944-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.470 253665 INFO os_vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57')
Nov 22 09:24:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227198430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:32 compute-0 nova_compute[253661]: 2025-11-22 09:24:32.957 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:33 compute-0 ceph-mon[75021]: pgmap v1816: 305 pgs: 305 active+clean; 367 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.089 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.093 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.093 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.097 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.097 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.099 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.099 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.102 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.103 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.292 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.293 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.80893325805664GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.294 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.294 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9096405c-eb66-4d27-abbb-e709b767afea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a207d8c4-4fce-4fe6-9ba5-548a92e757ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 99df8fe4-a61a-40d5-b089-90de5d98050f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.467 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 413 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 208 op/s
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.589 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 WARNING nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state None.
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.592 253665 WARNING nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state None.
Nov 22 09:24:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502384094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.978 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:33 compute-0 nova_compute[253661]: 2025-11-22 09:24:33.984 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.001 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.022 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.023 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.205 253665 DEBUG nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.206 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.206 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 DEBUG nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:34 compute-0 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 WARNING nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received unexpected event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with vm_state active and task_state deleting.
Nov 22 09:24:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4227198430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2502384094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:35 compute-0 ceph-mon[75021]: pgmap v1817: 305 pgs: 305 active+clean; 413 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 208 op/s
Nov 22 09:24:35 compute-0 nova_compute[253661]: 2025-11-22 09:24:35.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 419 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 235 op/s
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.023 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.024 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.024 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG nova.compute.manager [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG nova.compute.manager [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.043 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.043 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.689 253665 DEBUG nova.compute.manager [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.690 253665 DEBUG nova.compute.manager [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:36 compute-0 nova_compute[253661]: 2025-11-22 09:24:36.690 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:37 compute-0 ceph-mon[75021]: pgmap v1818: 305 pgs: 305 active+clean; 419 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 235 op/s
Nov 22 09:24:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 373 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 09:24:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.911 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.912 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.925 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.926 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:37 compute-0 nova_compute[253661]: 2025-11-22 09:24:37.926 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:38 compute-0 podman[336034]: 2025-11-22 09:24:38.468161401 +0000 UTC m=+0.144670648 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:24:38 compute-0 ceph-mon[75021]: pgmap v1819: 305 pgs: 305 active+clean; 373 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 09:24:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 268 op/s
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.701 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.702 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.726 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.848 253665 DEBUG nova.compute.manager [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.849 253665 DEBUG nova.compute.manager [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.849 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.850 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:39 compute-0 nova_compute[253661]: 2025-11-22 09:24:39.850 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:40 compute-0 nova_compute[253661]: 2025-11-22 09:24:40.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:41 compute-0 ceph-mon[75021]: pgmap v1820: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 268 op/s
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.362 253665 INFO nova.virt.libvirt.driver [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deleting instance files /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f_del
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.363 253665 INFO nova.virt.libvirt.driver [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deletion of /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f_del complete
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.423 253665 INFO nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 9.98 seconds to destroy the instance on the hypervisor.
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG oslo.service.loopingcall [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG nova.network.neutron [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:24:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 207 op/s
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.736 253665 INFO nova.virt.libvirt.driver [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deleting instance files /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.737 253665 INFO nova.virt.libvirt.driver [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deletion of /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del complete
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.796 253665 INFO nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 9.36 seconds to destroy the instance on the hypervisor.
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.797 253665 DEBUG oslo.service.loopingcall [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.798 253665 DEBUG nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.798 253665 DEBUG nova.network.neutron [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.908 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.909 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:41 compute-0 nova_compute[253661]: 2025-11-22 09:24:41.931 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.744 253665 DEBUG nova.network.neutron [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.758 253665 DEBUG nova.network.neutron [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.761 253665 INFO nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 0.96 seconds to deallocate network for instance.
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.789 253665 INFO nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 1.36 seconds to deallocate network for instance.
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.833 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.834 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.852 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.872 253665 DEBUG nova.compute.manager [req-3f2a8d9d-4316-4682-9b34-3d0eb4f2f17a req-2a54d1aa-3929-4a84-b18e-52a52debe7ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-deleted-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:42 compute-0 nova_compute[253661]: 2025-11-22 09:24:42.959 253665 DEBUG oslo_concurrency.processutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959503324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.498 253665 DEBUG oslo_concurrency.processutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.505 253665 DEBUG nova.compute.provider_tree [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.519 253665 DEBUG nova.scheduler.client.report [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.541 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.545 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 223 op/s
Nov 22 09:24:43 compute-0 ceph-mon[75021]: pgmap v1821: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 207 op/s
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.578 253665 INFO nova.scheduler.client.report [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 9cef4b12-b28c-47df-9af2-a0bf9934e4d7
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.648 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.695 253665 DEBUG oslo_concurrency.processutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:43 compute-0 nova_compute[253661]: 2025-11-22 09:24:43.749 253665 DEBUG nova.compute.manager [req-5028dd52-4207-49d2-9dc0-c9287014a000 req-82065a50-74f2-4526-9a62-1abe17d0df76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-deleted-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998847628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.201 253665 DEBUG oslo_concurrency.processutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.209 253665 DEBUG nova.compute.provider_tree [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.225 253665 DEBUG nova.scheduler.client.report [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.249 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.273 253665 INFO nova.scheduler.client.report [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 99df8fe4-a61a-40d5-b089-90de5d98050f
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.331 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:44 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 09:24:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/959503324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:44 compute-0 ceph-mon[75021]: pgmap v1822: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 223 op/s
Nov 22 09:24:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/998847628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.609 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803469.6081023, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.609 253665 INFO nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Stopped (Lifecycle Event)
Nov 22 09:24:44 compute-0 nova_compute[253661]: 2025-11-22 09:24:44.642 253665 DEBUG nova.compute.manager [None req-8a90239e-3e0a-4cd8-ba5a-269bcf6ed3ac - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG nova.compute.manager [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG nova.compute.manager [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.069 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.069 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:45 compute-0 nova_compute[253661]: 2025-11-22 09:24:45.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 151 op/s
Nov 22 09:24:46 compute-0 ceph-mon[75021]: pgmap v1823: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 151 op/s
Nov 22 09:24:46 compute-0 nova_compute[253661]: 2025-11-22 09:24:46.886 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803471.8844786, 99df8fe4-a61a-40d5-b089-90de5d98050f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:24:46 compute-0 nova_compute[253661]: 2025-11-22 09:24:46.887 253665 INFO nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] VM Stopped (Lifecycle Event)
Nov 22 09:24:46 compute-0 nova_compute[253661]: 2025-11-22 09:24:46.913 253665 DEBUG nova.compute.manager [None req-c9f6297d-0647-4d78-b07d-f781cd36e47d - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:24:47 compute-0 nova_compute[253661]: 2025-11-22 09:24:47.351 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:47 compute-0 nova_compute[253661]: 2025-11-22 09:24:47.352 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:47 compute-0 nova_compute[253661]: 2025-11-22 09:24:47.368 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:47 compute-0 nova_compute[253661]: 2025-11-22 09:24:47.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 17 KiB/s wr, 137 op/s
Nov 22 09:24:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:48 compute-0 ceph-mon[75021]: pgmap v1824: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 17 KiB/s wr, 137 op/s
Nov 22 09:24:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 14 KiB/s wr, 106 op/s
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.537 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.537 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.555 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.633 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.634 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.640 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.640 253665 INFO nova.compute.claims [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:24:50 compute-0 nova_compute[253661]: 2025-11-22 09:24:50.769 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:50 compute-0 ceph-mon[75021]: pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 14 KiB/s wr, 106 op/s
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.000 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.002 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.003 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.004 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.004 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.007 253665 INFO nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Terminating instance
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.008 253665 DEBUG nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:24:51 compute-0 sudo[336125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:51 compute-0 sudo[336125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:51 compute-0 sudo[336125]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1747420391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.278 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.286 253665 DEBUG nova.compute.provider_tree [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:51 compute-0 sudo[336150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:24:51 compute-0 sudo[336150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:51 compute-0 sudo[336150]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.328 253665 DEBUG nova.scheduler.client.report [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.350 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.351 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:24:51 compute-0 sudo[336177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:51 compute-0 sudo[336177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:51 compute-0 sudo[336177]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.405 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.405 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.419 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.443 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:24:51 compute-0 sudo[336202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:24:51 compute-0 sudo[336202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 12 KiB/s wr, 77 op/s
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.552 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.553 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.554 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating image(s)
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.580 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.606 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.631 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.636 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.677 253665 DEBUG nova.policy [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.729 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.730 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.731 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.732 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.756 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:51 compute-0 nova_compute[253661]: 2025-11-22 09:24:51.762 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 672288f2-2f9b-4643-9ebf-a949ad316298_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:51 compute-0 sudo[336202]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:52 compute-0 kernel: tap15bf0e02-e0 (unregistering): left promiscuous mode
Nov 22 09:24:52 compute-0 NetworkManager[48920]: <info>  [1763803492.0307] device (tap15bf0e02-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:24:52 compute-0 ovn_controller[152872]: 2025-11-22T09:24:52Z|00820|binding|INFO|Releasing lport 15bf0e02-e093-4f45-995f-abb925d1cf71 from this chassis (sb_readonly=0)
Nov 22 09:24:52 compute-0 ovn_controller[152872]: 2025-11-22T09:24:52Z|00821|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 down in Southbound
Nov 22 09:24:52 compute-0 ovn_controller[152872]: 2025-11-22T09:24:52Z|00822|binding|INFO|Removing iface tap15bf0e02-e0 ovn-installed in OVS
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.043 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.050 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:24:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.051 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 unbound from our chassis
Nov 22 09:24:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.052 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:24:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac671fcc-a7a1-495b-ac05-4c484c42c9a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:24:52 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 22 09:24:52 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Consumed 13.945s CPU time.
Nov 22 09:24:52 compute-0 systemd-machined[215941]: Machine qemu-98-instance-0000004e terminated.
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:24:52
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1747420391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.253 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance destroyed successfully.
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.254 253665 DEBUG nova.objects.instance [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'resources' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.266 253665 DEBUG nova.virt.libvirt.vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-533420499',display_name='tempest-ServerRescueTestJSONUnderV235-server-533420499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-533420499',id=78,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e78196ec949a45cf803d3e585b603558',ramdisk_id='',reservation_id='r-v4cum4j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1716369832',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1716369832-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:32Z,user_data=None,user_id='3f7dbcc13af740b491f0498f4ddec69d',uuid=a207d8c4-4fce-4fe6-9ba5-548a92e757ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.266 253665 DEBUG nova.network.os_vif_util [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converting VIF {"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.267 253665 DEBUG nova.network.os_vif_util [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.267 253665 DEBUG os_vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.269 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.269 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15bf0e02-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.276 253665 INFO os_vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0')
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1d61e125-cec0-4738-bd98-311ab0a02730 does not exist
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5f77a31c-9489-4091-b3d5-0351d6d040af does not exist
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6463a12b-0a2d-4980-ad94-c1776bb62029 does not exist
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:24:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:24:52 compute-0 sudo[336378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:52 compute-0 sudo[336378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:52 compute-0 sudo[336378]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:52 compute-0 sudo[336406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:24:52 compute-0 sudo[336406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:52 compute-0 sudo[336406]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:52 compute-0 sudo[336434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:52 compute-0 sudo[336434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:52 compute-0 sudo[336434]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:52 compute-0 sudo[336459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:24:52 compute-0 sudo[336459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:24:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:24:52 compute-0 nova_compute[253661]: 2025-11-22 09:24:52.937 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Successfully created port: b173a545-d888-43c0-a1fb-2969a871663c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.011517633 +0000 UTC m=+0.035852127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.153762392 +0000 UTC m=+0.178096876 container create 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:24:53 compute-0 systemd[1]: Started libpod-conmon-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope.
Nov 22 09:24:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:24:53 compute-0 ceph-mon[75021]: pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 12 KiB/s wr, 77 op/s
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:24:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:24:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 328 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 22 KiB/s wr, 81 op/s
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.593931284 +0000 UTC m=+0.618265788 container init 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.609246954 +0000 UTC m=+0.633581408 container start 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:24:53 compute-0 systemd[1]: libpod-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope: Deactivated successfully.
Nov 22 09:24:53 compute-0 romantic_jemison[336540]: 167 167
Nov 22 09:24:53 compute-0 conmon[336540]: conmon 6ee7d9bc1dd75794731a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope/container/memory.events
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.687579288 +0000 UTC m=+0.711913732 container attach 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:24:53 compute-0 podman[336524]: 2025-11-22 09:24:53.688545102 +0000 UTC m=+0.712879586 container died 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-85652c12bed1bcf6410be7b162be0f36e8b6692d5e24286977287447f29fa7a5-merged.mount: Deactivated successfully.
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.020 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Successfully updated port: b173a545-d888-43c0-a1fb-2969a871663c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.040 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 672288f2-2f9b-4643-9ebf-a949ad316298_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.073 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.074 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.074 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.124 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.341 253665 DEBUG nova.compute.manager [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-changed-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.342 253665 DEBUG nova.compute.manager [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Refreshing instance network info cache due to event network-changed-b173a545-d888-43c0-a1fb-2969a871663c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.342 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:54 compute-0 podman[336524]: 2025-11-22 09:24:54.350210657 +0000 UTC m=+1.374545141 container remove 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:24:54 compute-0 systemd[1]: libpod-conmon-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope: Deactivated successfully.
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.515 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.517 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.542 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.548 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:24:54 compute-0 podman[336620]: 2025-11-22 09:24:54.544679389 +0000 UTC m=+0.024435952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.641 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.642 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:54 compute-0 podman[336620]: 2025-11-22 09:24:54.646718566 +0000 UTC m=+0.126475099 container create 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.652 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.653 253665 INFO nova.compute.claims [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:24:54 compute-0 ceph-mon[75021]: pgmap v1827: 305 pgs: 305 active+clean; 328 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 22 KiB/s wr, 81 op/s
Nov 22 09:24:54 compute-0 systemd[1]: Started libpod-conmon-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope.
Nov 22 09:24:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:24:54 compute-0 nova_compute[253661]: 2025-11-22 09:24:54.831 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:54 compute-0 podman[336620]: 2025-11-22 09:24:54.954984808 +0000 UTC m=+0.434741361 container init 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:24:54 compute-0 podman[336620]: 2025-11-22 09:24:54.963628607 +0000 UTC m=+0.443385140 container start 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:24:55 compute-0 podman[336620]: 2025-11-22 09:24:55.073991905 +0000 UTC m=+0.553748518 container attach 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.164 253665 DEBUG nova.objects.instance [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.178 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.179 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Ensure instance console log exists: /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:24:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161226656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.272 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.280 253665 DEBUG nova.compute.provider_tree [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.292 253665 DEBUG nova.scheduler.client.report [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.312 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.313 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.363 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.363 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.385 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.408 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.505 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.507 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.507 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating image(s)
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.535 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 340 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 620 KiB/s rd, 504 KiB/s wr, 72 op/s
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.570 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:24:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.607 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.611 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.654 253665 DEBUG nova.policy [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e5709393702478dbf0bd566dc94d7fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b06c711e582499ab500917d85e27e3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.706 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.707 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.708 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.708 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.732 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:55 compute-0 nova_compute[253661]: 2025-11-22 09:24:55.736 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3161226656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.201 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.231 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.231 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance network_info: |[{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.232 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.233 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Refreshing network info cache for port b173a545-d888-43c0-a1fb-2969a871663c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.237 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start _get_guest_xml network_info=[{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.245 253665 WARNING nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.257 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.259 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.262 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.263 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.264 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.264 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:24:56 compute-0 eloquent_keller[336637]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:24:56 compute-0 eloquent_keller[336637]: --> relative data size: 1.0
Nov 22 09:24:56 compute-0 eloquent_keller[336637]: --> All data devices are unavailable
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.273 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:56 compute-0 systemd[1]: libpod-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Deactivated successfully.
Nov 22 09:24:56 compute-0 podman[336620]: 2025-11-22 09:24:56.325172863 +0000 UTC m=+1.804929396 container died 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:24:56 compute-0 systemd[1]: libpod-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Consumed 1.185s CPU time.
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.406 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.408 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba-merged.mount: Deactivated successfully.
Nov 22 09:24:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159468975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.751 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.782 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.786 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:56 compute-0 nova_compute[253661]: 2025-11-22 09:24:56.923 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Successfully created port: 702cad91-d4bb-4f0c-b378-7e05e928ad09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:24:57 compute-0 nova_compute[253661]: 2025-11-22 09:24:57.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 09:24:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:24:57 compute-0 ceph-mon[75021]: pgmap v1828: 305 pgs: 305 active+clean; 340 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 620 KiB/s rd, 504 KiB/s wr, 72 op/s
Nov 22 09:24:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4159468975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:58 compute-0 podman[336620]: 2025-11-22 09:24:58.32204801 +0000 UTC m=+3.801804573 container remove 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:24:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:24:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550476733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:58 compute-0 sudo[336459]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.392 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.394 253665 DEBUG nova.virt.libvirt.vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:51Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.395 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.396 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.397 253665 DEBUG nova.objects.instance [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.410 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <uuid>672288f2-2f9b-4643-9ebf-a949ad316298</uuid>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <name>instance-00000051</name>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-287905590</nova:name>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:24:56</nova:creationTime>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <nova:port uuid="b173a545-d888-43c0-a1fb-2969a871663c">
Nov 22 09:24:58 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <system>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="serial">672288f2-2f9b-4643-9ebf-a949ad316298</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="uuid">672288f2-2f9b-4643-9ebf-a949ad316298</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </system>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <os>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </os>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <features>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </features>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/672288f2-2f9b-4643-9ebf-a949ad316298_disk">
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/672288f2-2f9b-4643-9ebf-a949ad316298_disk.config">
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:24:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:a0:f1:9e"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <target dev="tapb173a545-d8"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/console.log" append="off"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <video>
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </video>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:24:58 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:24:58 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:24:58 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:24:58 compute-0 nova_compute[253661]: </domain>
Nov 22 09:24:58 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:24:58 compute-0 systemd[1]: libpod-conmon-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Deactivated successfully.
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.411 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Preparing to wait for external event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.413 253665 DEBUG nova.virt.libvirt.vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:51Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.413 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.414 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.414 253665 DEBUG os_vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb173a545-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.420 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb173a545-d8, col_values=(('external_ids', {'iface-id': 'b173a545-d888-43c0-a1fb-2969a871663c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f1:9e', 'vm-uuid': '672288f2-2f9b-4643-9ebf-a949ad316298'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:24:58 compute-0 NetworkManager[48920]: <info>  [1763803498.4236] manager: (tapb173a545-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.426 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.432 253665 INFO os_vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8')
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.445 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updated VIF entry in instance network info cache for port b173a545-d888-43c0-a1fb-2969a871663c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.445 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.456 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:24:58 compute-0 sudo[336876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:58 compute-0 sudo[336876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:58 compute-0 sudo[336876]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:58 compute-0 sudo[336904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:24:58 compute-0 sudo[336904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.546 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.547 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.547 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:a0:f1:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.548 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Using config drive
Nov 22 09:24:58 compute-0 sudo[336904]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.570 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.594 253665 DEBUG nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.594 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 WARNING nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state deleting.
Nov 22 09:24:58 compute-0 sudo[336937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:24:58 compute-0 sudo[336937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:58 compute-0 sudo[336937]: pam_unix(sudo:session): session closed for user root
Nov 22 09:24:58 compute-0 sudo[336972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:24:58 compute-0 sudo[336972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.736 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.740 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.815 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] resizing rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:24:58 compute-0 ceph-mon[75021]: pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 09:24:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3550476733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.931 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Successfully updated port: 702cad91-d4bb-4f0c-b378-7e05e928ad09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:24:58 compute-0 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.002 253665 DEBUG nova.compute.manager [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.003 253665 DEBUG nova.compute.manager [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing instance network info cache due to event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.004 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.12660287 +0000 UTC m=+0.115949694 container create 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.039197076 +0000 UTC m=+0.028543920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.136 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating config drive at /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.141 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxe7lhv2p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.189 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:24:59 compute-0 systemd[1]: Started libpod-conmon-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope.
Nov 22 09:24:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.297 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxe7lhv2p" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.324 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.325620731 +0000 UTC m=+0.314967575 container init 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.329 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.334172688 +0000 UTC m=+0.323519512 container start 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:24:59 compute-0 sad_dubinsky[337110]: 167 167
Nov 22 09:24:59 compute-0 systemd[1]: libpod-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope: Deactivated successfully.
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.394720801 +0000 UTC m=+0.384067675 container attach 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.396073484 +0000 UTC m=+0.385420318 container died 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.511 253665 DEBUG nova.objects.instance [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'migration_context' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.526 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.526 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Ensure instance console log exists: /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.527 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.527 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.528 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:24:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 22 09:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2cb284674dc479cd85af8e30711e53c5c52a6a45193ea742ff698bd27ed8cff-merged.mount: Deactivated successfully.
Nov 22 09:24:59 compute-0 podman[337091]: 2025-11-22 09:24:59.770345402 +0000 UTC m=+0.759692216 container remove 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:24:59 compute-0 systemd[1]: libpod-conmon-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope: Deactivated successfully.
Nov 22 09:24:59 compute-0 ovn_controller[152872]: 2025-11-22T09:24:59Z|00823|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:24:59 compute-0 ovn_controller[152872]: 2025-11-22T09:24:59Z|00824|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:24:59 compute-0 nova_compute[253661]: 2025-11-22 09:24:59.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:00 compute-0 podman[337190]: 2025-11-22 09:25:00.046578151 +0000 UTC m=+0.086942013 container create 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:25:00 compute-0 podman[337190]: 2025-11-22 09:24:59.986354155 +0000 UTC m=+0.026718057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:25:00 compute-0 systemd[1]: Started libpod-conmon-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope.
Nov 22 09:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.247 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:00 compute-0 podman[337190]: 2025-11-22 09:25:00.275126406 +0000 UTC m=+0.315490278 container init 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:25:00 compute-0 podman[337190]: 2025-11-22 09:25:00.28316539 +0000 UTC m=+0.323529252 container start 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.287 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.288 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance network_info: |[{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.289 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.289 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.292 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start _get_guest_xml network_info=[{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.300 253665 WARNING nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.313 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.315 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.320 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.326 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:00 compute-0 podman[337190]: 2025-11-22 09:25:00.36630295 +0000 UTC m=+0.406666812 container attach 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.375 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.376 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deleting local config drive /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config because it was imported into RBD.
Nov 22 09:25:00 compute-0 kernel: tapb173a545-d8: entered promiscuous mode
Nov 22 09:25:00 compute-0 NetworkManager[48920]: <info>  [1763803500.4441] manager: (tapb173a545-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:00 compute-0 ovn_controller[152872]: 2025-11-22T09:25:00Z|00825|binding|INFO|Claiming lport b173a545-d888-43c0-a1fb-2969a871663c for this chassis.
Nov 22 09:25:00 compute-0 ovn_controller[152872]: 2025-11-22T09:25:00Z|00826|binding|INFO|b173a545-d888-43c0-a1fb-2969a871663c: Claiming fa:16:3e:a0:f1:9e 10.100.0.14
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:00 compute-0 ovn_controller[152872]: 2025-11-22T09:25:00Z|00827|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c ovn-installed in OVS
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:00 compute-0 systemd-udevd[337240]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:00 compute-0 NetworkManager[48920]: <info>  [1763803500.5078] device (tapb173a545-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:00 compute-0 NetworkManager[48920]: <info>  [1763803500.5092] device (tapb173a545-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378503543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.814 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.844 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:00 compute-0 nova_compute[253661]: 2025-11-22 09:25:00.849 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:01 compute-0 ceph-mon[75021]: pgmap v1830: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 22 09:25:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1378503543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:01 compute-0 sharp_robinson[337207]: {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     "0": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "devices": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "/dev/loop3"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             ],
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_name": "ceph_lv0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_size": "21470642176",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "name": "ceph_lv0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "tags": {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_name": "ceph",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.crush_device_class": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.encrypted": "0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_id": "0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.vdo": "0"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             },
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "vg_name": "ceph_vg0"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         }
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     ],
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     "1": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "devices": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "/dev/loop4"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             ],
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_name": "ceph_lv1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_size": "21470642176",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "name": "ceph_lv1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "tags": {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_name": "ceph",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.crush_device_class": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.encrypted": "0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_id": "1",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.vdo": "0"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             },
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "vg_name": "ceph_vg1"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         }
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     ],
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     "2": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "devices": [
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "/dev/loop5"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             ],
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_name": "ceph_lv2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_size": "21470642176",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "name": "ceph_lv2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "tags": {
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.cluster_name": "ceph",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.crush_device_class": "",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.encrypted": "0",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osd_id": "2",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:                 "ceph.vdo": "0"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             },
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "type": "block",
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:             "vg_name": "ceph_vg2"
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:         }
Nov 22 09:25:01 compute-0 sharp_robinson[337207]:     ]
Nov 22 09:25:01 compute-0 sharp_robinson[337207]: }
Nov 22 09:25:01 compute-0 systemd[1]: libpod-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope: Deactivated successfully.
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.264 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f1:9e 10.100.0.14'], port_security=['fa:16:3e:a0:f1:9e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '672288f2-2f9b-4643-9ebf-a949ad316298', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b173a545-d888-43c0-a1fb-2969a871663c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:01 compute-0 ovn_controller[152872]: 2025-11-22T09:25:01Z|00828|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c up in Southbound
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.267 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b173a545-d888-43c0-a1fb-2969a871663c in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.269 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:01 compute-0 podman[337290]: 2025-11-22 09:25:01.28790029 +0000 UTC m=+0.062005890 container died 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:25:01 compute-0 systemd-machined[215941]: New machine qemu-99-instance-00000051.
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef6738-0fec-4985-b3e4-eaad92cada3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000051.
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.338 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[16f6d48e-0366-495b-a21f-8cbb8b080399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.342 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6aa88-6041-4406-9409-8849211de463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[39b91085-239c-449c-b75a-6c40d41d5794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644878248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02df6f4c-f93a-4b03-99b1-c0c20a9a21c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.404 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.407 253665 DEBUG nova.virt.libvirt.vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsT
estOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:55Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.407 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.408 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.409 253665 DEBUG nova.objects.instance [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.416 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ccea8f48-ca9f-46e7-b009-87a9d66c4db3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337321, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337321, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.419 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.422 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <uuid>493b70aa-aaa2-4c40-bfea-6eff7ffec547</uuid>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <name>instance-00000052</name>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherA-server-1274648896</nova:name>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:25:00</nova:creationTime>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:user uuid="7e5709393702478dbf0bd566dc94d7fe">tempest-ServerActionsTestOtherA-1527475006-project-member</nova:user>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:project uuid="9b06c711e582499ab500917d85e27e3c">tempest-ServerActionsTestOtherA-1527475006</nova:project>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <nova:port uuid="702cad91-d4bb-4f0c-b378-7e05e928ad09">
Nov 22 09:25:01 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <system>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="serial">493b70aa-aaa2-4c40-bfea-6eff7ffec547</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="uuid">493b70aa-aaa2-4c40-bfea-6eff7ffec547</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </system>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <os>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </os>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <features>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </features>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk">
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config">
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ab:29:75"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <target dev="tap702cad91-d4"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/console.log" append="off"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <video>
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </video>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:25:01 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:25:01 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:25:01 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:25:01 compute-0 nova_compute[253661]: </domain>
Nov 22 09:25:01 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.423 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Preparing to wait for external event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.423 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG nova.virt.libvirt.vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-Serv
erActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:55Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.425 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.425 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.426 253665 DEBUG os_vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.427 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.428 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.459 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap702cad91-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.460 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap702cad91-d4, col_values=(('external_ids', {'iface-id': '702cad91-d4bb-4f0c-b378-7e05e928ad09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:29:75', 'vm-uuid': '493b70aa-aaa2-4c40-bfea-6eff7ffec547'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:01 compute-0 NetworkManager[48920]: <info>  [1763803501.4719] manager: (tap702cad91-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.480 253665 INFO os_vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4')
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.480 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.6 MiB/s wr, 61 op/s
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.555 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No VIF found with MAC fa:16:3e:ab:29:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Using config drive
Nov 22 09:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315-merged.mount: Deactivated successfully.
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.585 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:01 compute-0 podman[337290]: 2025-11-22 09:25:01.655629221 +0000 UTC m=+0.429734831 container remove 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:25:01 compute-0 systemd[1]: libpod-conmon-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope: Deactivated successfully.
Nov 22 09:25:01 compute-0 sudo[336972]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:01 compute-0 sudo[337362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:25:01 compute-0 sudo[337362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:01 compute-0 sudo[337362]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:01 compute-0 sudo[337387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:25:01 compute-0 sudo[337387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:01 compute-0 sudo[337387]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.877 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating config drive at /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.884 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro0botoa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:01 compute-0 sudo[337437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:25:01 compute-0 sudo[337437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:01 compute-0 sudo[337437]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.966 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803501.9664326, 672288f2-2f9b-4643-9ebf-a949ad316298 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Started (Lifecycle Event)
Nov 22 09:25:01 compute-0 nova_compute[253661]: 2025-11-22 09:25:01.996 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.002 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803501.970519, 672288f2-2f9b-4643-9ebf-a949ad316298 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.003 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Paused (Lifecycle Event)
Nov 22 09:25:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3644878248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:02 compute-0 sudo[337466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:25:02 compute-0 sudo[337466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.029 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.035 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro0botoa" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.070 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.075 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.125 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.141 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00829|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00830|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.43449953 +0000 UTC m=+0.052950341 container create a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.482 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updated VIF entry in instance network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.483 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.496 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.404380273 +0000 UTC m=+0.022831104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:25:02 compute-0 systemd[1]: Started libpod-conmon-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope.
Nov 22 09:25:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.622354212 +0000 UTC m=+0.240805043 container init a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:25:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.635945181 +0000 UTC m=+0.254395992 container start a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:25:02 compute-0 heuristic_shockley[337607]: 167 167
Nov 22 09:25:02 compute-0 systemd[1]: libpod-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope: Deactivated successfully.
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.683168293 +0000 UTC m=+0.301619104 container attach a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.683656494 +0000 UTC m=+0.302107315 container died a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.682 253665 INFO nova.virt.libvirt.driver [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deleting instance files /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_del
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.684 253665 INFO nova.virt.libvirt.driver [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deletion of /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_del complete
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.714 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.715 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deleting local config drive /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config because it was imported into RBD.
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002498288138145471 of space, bias 1.0, pg target 0.7494864414436414 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:25:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.758 253665 INFO nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 11.75 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.758 253665 DEBUG oslo.service.loopingcall [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.759 253665 DEBUG nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.759 253665 DEBUG nova.network.neutron [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:02 compute-0 podman[337581]: 2025-11-22 09:25:02.764461158 +0000 UTC m=+0.276080865 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:25:02 compute-0 podman[337582]: 2025-11-22 09:25:02.764798606 +0000 UTC m=+0.276518506 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a891f982beeabef8b767cc371267a328b61a874a5d2a9a79c71738942dcc28-merged.mount: Deactivated successfully.
Nov 22 09:25:02 compute-0 kernel: tap702cad91-d4: entered promiscuous mode
Nov 22 09:25:02 compute-0 NetworkManager[48920]: <info>  [1763803502.8041] manager: (tap702cad91-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00831|binding|INFO|Claiming lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 for this chassis.
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00832|binding|INFO|702cad91-d4bb-4f0c-b378-7e05e928ad09: Claiming fa:16:3e:ab:29:75 10.100.0.14
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.806 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:02 compute-0 systemd-udevd[337244]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.821 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:29:75 10.100.0.14'], port_security=['fa:16:3e:ab:29:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '493b70aa-aaa2-4c40-bfea-6eff7ffec547', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=702cad91-d4bb-4f0c-b378-7e05e928ad09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00833|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 ovn-installed in OVS
Nov 22 09:25:02 compute-0 ovn_controller[152872]: 2025-11-22T09:25:02Z|00834|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 up in Southbound
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.823 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 702cad91-d4bb-4f0c-b378-7e05e928ad09 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 bound to our chassis
Nov 22 09:25:02 compute-0 NetworkManager[48920]: <info>  [1763803502.8249] device (tap702cad91-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:02 compute-0 NetworkManager[48920]: <info>  [1763803502.8263] device (tap702cad91-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.826 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:25:02 compute-0 podman[337567]: 2025-11-22 09:25:02.836468068 +0000 UTC m=+0.454918879 container remove a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:25:02 compute-0 systemd[1]: libpod-conmon-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope: Deactivated successfully.
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea27d7f-587d-4eeb-b800-3c1c1176b62b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 systemd-machined[215941]: New machine qemu-100-instance-00000052.
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.886 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf36a29-267f-4322-a696-d23b2824074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000052.
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3844b1-d8ee-4ea8-a199-6358842fd956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.937 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c801235d-5bac-4833-9646-d4bce919091f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.967 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1298563a-b927-45c4-8561-541e02664b63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337666, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef235262-a6d5-4c80-a91e-0693ec6f5357]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337668, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337668, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.988 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:02 compute-0 nova_compute[253661]: 2025-11-22 09:25:02.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.993 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.994 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:03 compute-0 ceph-mon[75021]: pgmap v1831: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.6 MiB/s wr, 61 op/s
Nov 22 09:25:03 compute-0 podman[337674]: 2025-11-22 09:25:03.108397912 +0000 UTC m=+0.061189279 container create 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:25:03 compute-0 systemd[1]: Started libpod-conmon-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope.
Nov 22 09:25:03 compute-0 podman[337674]: 2025-11-22 09:25:03.079895514 +0000 UTC m=+0.032686911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:03 compute-0 podman[337674]: 2025-11-22 09:25:03.259520666 +0000 UTC m=+0.212312083 container init 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:25:03 compute-0 podman[337674]: 2025-11-22 09:25:03.270957073 +0000 UTC m=+0.223748450 container start 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:25:03 compute-0 podman[337674]: 2025-11-22 09:25:03.378909893 +0000 UTC m=+0.331701280 container attach 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:25:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.664 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.663474, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.665 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Started (Lifecycle Event)
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.693 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.663768, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.694 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Paused (Lifecycle Event)
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.715 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.722 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.745 253665 DEBUG nova.compute.manager [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.746 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.746 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.747 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.747 253665 DEBUG nova.compute.manager [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Processing event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.748 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.749 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.752 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.7524176, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.752 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Resumed (Lifecycle Event)
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.777 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.779 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.784 253665 INFO nova.virt.libvirt.driver [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance spawned successfully.
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.785 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.808 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.809 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.810 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.811 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.811 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.812 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.817 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.863 253665 INFO nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 8.36 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.863 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.932 253665 INFO nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 9.32 seconds to build instance.
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.975 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:03 compute-0 nova_compute[253661]: 2025-11-22 09:25:03.998 253665 DEBUG nova.network.neutron [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.018 253665 INFO nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 1.26 seconds to deallocate network for instance.
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.040 253665 DEBUG nova.compute.manager [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.041 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.041 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.042 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.042 253665 DEBUG nova.compute.manager [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Processing event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.043 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.047 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803504.0472684, 672288f2-2f9b-4643-9ebf-a949ad316298 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.047 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Resumed (Lifecycle Event)
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.050 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.062 253665 INFO nova.virt.libvirt.driver [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance spawned successfully.
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.063 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.196 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.197 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.198 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.214 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.219 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.219 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.220 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.220 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.221 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.221 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.230 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.272 253665 INFO nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 12.72 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.272 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.314 253665 DEBUG oslo_concurrency.processutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:04 compute-0 nervous_margulis[337691]: {
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_id": 1,
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "type": "bluestore"
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     },
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_id": 0,
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "type": "bluestore"
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     },
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_id": 2,
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:         "type": "bluestore"
Nov 22 09:25:04 compute-0 nervous_margulis[337691]:     }
Nov 22 09:25:04 compute-0 nervous_margulis[337691]: }
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.372 253665 INFO nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 13.76 seconds to build instance.
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.389 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:04 compute-0 systemd[1]: libpod-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Deactivated successfully.
Nov 22 09:25:04 compute-0 systemd[1]: libpod-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Consumed 1.120s CPU time.
Nov 22 09:25:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:04.454 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:04.456 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:04 compute-0 podman[337769]: 2025-11-22 09:25:04.477936623 +0000 UTC m=+0.040475439 container died 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c-merged.mount: Deactivated successfully.
Nov 22 09:25:04 compute-0 podman[337769]: 2025-11-22 09:25:04.743232417 +0000 UTC m=+0.305771223 container remove 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:25:04 compute-0 systemd[1]: libpod-conmon-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Deactivated successfully.
Nov 22 09:25:04 compute-0 sudo[337466]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:25:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769568295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:25:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.897 253665 DEBUG oslo_concurrency.processutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.905 253665 DEBUG nova.compute.provider_tree [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.920 253665 DEBUG nova.scheduler.client.report [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:25:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6307148d-2bfb-42ff-8a83-1144d8b65f79 does not exist
Nov 22 09:25:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4ce7f6f5-2740-4bed-81ea-e7119e1f83cf does not exist
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.945 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:04 compute-0 nova_compute[253661]: 2025-11-22 09:25:04.971 253665 INFO nova.scheduler.client.report [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Deleted allocations for instance a207d8c4-4fce-4fe6-9ba5-548a92e757ac
Nov 22 09:25:05 compute-0 sudo[337803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:25:05 compute-0 sudo[337803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:05 compute-0 sudo[337803]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.041 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:05 compute-0 sudo[337828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:25:05 compute-0 sudo[337828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:25:05 compute-0 sudo[337828]: pam_unix(sudo:session): session closed for user root
Nov 22 09:25:05 compute-0 ceph-mon[75021]: pgmap v1832: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 09:25:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/769568295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:25:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.707 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.707 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.710 253665 INFO nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Terminating instance
Nov 22 09:25:05 compute-0 nova_compute[253661]: 2025-11-22 09:25:05.711 253665 DEBUG nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.096 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.097 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.099 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.099 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.100 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.101 253665 WARNING nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received unexpected event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with vm_state active and task_state None.
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.101 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-deleted-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:06 compute-0 kernel: tapb173a545-d8 (unregistering): left promiscuous mode
Nov 22 09:25:06 compute-0 NetworkManager[48920]: <info>  [1763803506.2270] device (tapb173a545-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 ovn_controller[152872]: 2025-11-22T09:25:06Z|00835|binding|INFO|Releasing lport b173a545-d888-43c0-a1fb-2969a871663c from this chassis (sb_readonly=0)
Nov 22 09:25:06 compute-0 ovn_controller[152872]: 2025-11-22T09:25:06Z|00836|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c down in Southbound
Nov 22 09:25:06 compute-0 ovn_controller[152872]: 2025-11-22T09:25:06Z|00837|binding|INFO|Removing iface tapb173a545-d8 ovn-installed in OVS
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.251 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f1:9e 10.100.0.14'], port_security=['fa:16:3e:a0:f1:9e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '672288f2-2f9b-4643-9ebf-a949ad316298', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b173a545-d888-43c0-a1fb-2969a871663c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.254 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b173a545-d888-43c0-a1fb-2969a871663c in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.256 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.279 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d042aba7-3aa2-48c4-bc39-42cd76f4a19f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 22 09:25:06 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Consumed 2.246s CPU time.
Nov 22 09:25:06 compute-0 systemd-machined[215941]: Machine qemu-99-instance-00000051 terminated.
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.335 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20d3cb9c-e869-4460-9914-4242faddc6a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.341 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ca14da-acfc-46d1-823d-683908857eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.354 253665 DEBUG nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.356 253665 WARNING nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received unexpected event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with vm_state active and task_state deleting.
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.362 253665 INFO nova.virt.libvirt.driver [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance destroyed successfully.
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.363 253665 DEBUG nova.objects.instance [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.374 253665 DEBUG nova.virt.libvirt.vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:04Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.374 253665 DEBUG nova.network.os_vif_util [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.375 253665 DEBUG nova.network.os_vif_util [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.376 253665 DEBUG os_vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.378 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb173a545-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca4b1bd-1a40-4f3d-a793-13f258b9a09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.386 253665 INFO os_vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8')
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40882fc8-8467-46d5-af80-fdbce2c5578c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337875, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.431 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbb9380-35d8-4fff-9c56-072f39752df6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337890, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337890, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.433 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:06 compute-0 nova_compute[253661]: 2025-11-22 09:25:06.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.437 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.251 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803492.249956, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.252 253665 INFO nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Stopped (Lifecycle Event)
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.269 253665 DEBUG nova.compute.manager [None req-168e5866-8b99-437e-b9b2-cb00efd11b3b - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.363 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.363 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.365 253665 INFO nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Terminating instance
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.366 253665 DEBUG nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:07 compute-0 ceph-mon[75021]: pgmap v1833: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 09:25:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 151 op/s
Nov 22 09:25:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:07 compute-0 kernel: tap702cad91-d4 (unregistering): left promiscuous mode
Nov 22 09:25:07 compute-0 NetworkManager[48920]: <info>  [1763803507.8444] device (tap702cad91-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:07 compute-0 ovn_controller[152872]: 2025-11-22T09:25:07Z|00838|binding|INFO|Releasing lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 from this chassis (sb_readonly=0)
Nov 22 09:25:07 compute-0 ovn_controller[152872]: 2025-11-22T09:25:07Z|00839|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 down in Southbound
Nov 22 09:25:07 compute-0 ovn_controller[152872]: 2025-11-22T09:25:07Z|00840|binding|INFO|Removing iface tap702cad91-d4 ovn-installed in OVS
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.865 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:29:75 10.100.0.14'], port_security=['fa:16:3e:ab:29:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '493b70aa-aaa2-4c40-bfea-6eff7ffec547', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=702cad91-d4bb-4f0c-b378-7e05e928ad09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.866 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 702cad91-d4bb-4f0c-b378-7e05e928ad09 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis
Nov 22 09:25:07 compute-0 nova_compute[253661]: 2025-11-22 09:25:07.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.868 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b819623-0143-4f06-9018-ab86faeab669]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:07 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 22 09:25:07 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Consumed 4.295s CPU time.
Nov 22 09:25:07 compute-0 systemd-machined[215941]: Machine qemu-100-instance-00000052 terminated.
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cceb74b5-f4fe-46c2-9824-512690a97c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.931 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cd800a-7284-4fbf-9344-f43de28e54e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.971 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b103f462-5a5e-4570-a09d-0b08498e36bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.005 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17a383cc-7ab1-420e-8562-78adf5bcc3c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337906, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.015 253665 INFO nova.virt.libvirt.driver [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance destroyed successfully.
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.016 253665 DEBUG nova.objects.instance [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd793a26-7020-4683-8a90-6b586f7cfdd7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337914, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337914, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.028 253665 DEBUG nova.virt.libvirt.vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:03Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.029 253665 DEBUG nova.network.os_vif_util [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.029 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.030 253665 DEBUG nova.network.os_vif_util [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.030 253665 DEBUG os_vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.032 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap702cad91-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.034 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.038 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.039 253665 INFO os_vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4')
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.191 253665 DEBUG nova.compute.manager [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG nova.compute.manager [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing instance network info cache due to event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:08 compute-0 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.457 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:08 compute-0 ceph-mon[75021]: pgmap v1834: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 151 op/s
Nov 22 09:25:09 compute-0 ovn_controller[152872]: 2025-11-22T09:25:09Z|00841|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:09 compute-0 ovn_controller[152872]: 2025-11-22T09:25:09Z|00842|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.147 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:09 compute-0 podman[337936]: 2025-11-22 09:25:09.419262855 +0000 UTC m=+0.102492819 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.507 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.507 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:25:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 234 op/s
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.648 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updated VIF entry in instance network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.649 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:09 compute-0 nova_compute[253661]: 2025-11-22 09:25:09.665 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:10 compute-0 nova_compute[253661]: 2025-11-22 09:25:10.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:11 compute-0 ceph-mon[75021]: pgmap v1835: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 234 op/s
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.530 253665 INFO nova.virt.libvirt.driver [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deleting instance files /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298_del
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.531 253665 INFO nova.virt.libvirt.driver [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deletion of /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298_del complete
Nov 22 09:25:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1022 KiB/s wr, 202 op/s
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.591 253665 INFO nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 5.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG oslo.service.loopingcall [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG nova.network.neutron [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.684 253665 DEBUG nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:11 compute-0 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 WARNING nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received unexpected event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with vm_state active and task_state deleting.
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.301 253665 DEBUG nova.network.neutron [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.324 253665 INFO nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 0.73 seconds to deallocate network for instance.
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.340 253665 INFO nova.virt.libvirt.driver [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deleting instance files /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547_del
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.341 253665 INFO nova.virt.libvirt.driver [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deletion of /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547_del complete
Nov 22 09:25:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:25:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:25:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:25:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.375 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.376 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.400 253665 INFO nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 5.03 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.401 253665 DEBUG oslo.service.loopingcall [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.401 253665 DEBUG nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.402 253665 DEBUG nova.network.neutron [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.408 253665 DEBUG nova.compute.manager [req-b1b53664-e7ab-48ff-968f-e01a97a46a03 req-15369513-b405-4b58-81a5-5a15d2fe5a77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-deleted-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:12 compute-0 nova_compute[253661]: 2025-11-22 09:25:12.477 253665 DEBUG oslo_concurrency.processutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.001 253665 DEBUG oslo_concurrency.processutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.055 253665 DEBUG nova.compute.provider_tree [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.076 253665 DEBUG nova.scheduler.client.report [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.100 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.124 253665 INFO nova.scheduler.client.report [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 672288f2-2f9b-4643-9ebf-a949ad316298
Nov 22 09:25:13 compute-0 nova_compute[253661]: 2025-11-22 09:25:13.195 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:13 compute-0 ceph-mon[75021]: pgmap v1836: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1022 KiB/s wr, 202 op/s
Nov 22 09:25:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:25:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:25:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2606131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1024 KiB/s wr, 231 op/s
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.518 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.520 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.520 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.612 253665 DEBUG nova.network.neutron [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.627 253665 INFO nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 2.23 seconds to deallocate network for instance.
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.674 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.674 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:14 compute-0 nova_compute[253661]: 2025-11-22 09:25:14.755 253665 DEBUG oslo_concurrency.processutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809544080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.273 253665 DEBUG oslo_concurrency.processutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.282 253665 DEBUG nova.compute.provider_tree [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.296 253665 DEBUG nova.scheduler.client.report [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.315 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.340 253665 INFO nova.scheduler.client.report [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 493b70aa-aaa2-4c40-bfea-6eff7ffec547
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.409 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 198 op/s
Nov 22 09:25:15 compute-0 ceph-mon[75021]: pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1024 KiB/s wr, 231 op/s
Nov 22 09:25:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/809544080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.826 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.828 253665 INFO nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Terminating instance
Nov 22 09:25:15 compute-0 nova_compute[253661]: 2025-11-22 09:25:15.829 253665 DEBUG nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:16 compute-0 kernel: tapecdb3a4e-ac (unregistering): left promiscuous mode
Nov 22 09:25:16 compute-0 NetworkManager[48920]: <info>  [1763803516.1282] device (tapecdb3a4e-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:16 compute-0 ovn_controller[152872]: 2025-11-22T09:25:16Z|00843|binding|INFO|Releasing lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc from this chassis (sb_readonly=0)
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 ovn_controller[152872]: 2025-11-22T09:25:16Z|00844|binding|INFO|Setting lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc down in Southbound
Nov 22 09:25:16 compute-0 ovn_controller[152872]: 2025-11-22T09:25:16Z|00845|binding|INFO|Removing iface tapecdb3a4e-ac ovn-installed in OVS
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.178 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:fb 10.100.0.5'], port_security=['fa:16:3e:e0:3e:fb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9096405c-eb66-4d27-abbb-e709b767afea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6456c660-bee8-4527-8966-f035b8f73def', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.180 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis
Nov 22 09:25:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.181 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0936cc0d-3697-4210-9c23-8f3e8e452e86, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:25:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e68923b-1b37-412b-8dd2-e89fd9876e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.184 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 namespace which is not needed anymore
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 22 09:25:16 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Consumed 19.644s CPU time.
Nov 22 09:25:16 compute-0 systemd-machined[215941]: Machine qemu-90-instance-0000004b terminated.
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.276 253665 INFO nova.virt.libvirt.driver [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance destroyed successfully.
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.278 253665 DEBUG nova.objects.instance [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.292 253665 DEBUG nova.virt.libvirt.vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-903328611',display_name='tempest-ServerActionsTestOtherA-server-903328611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-903328611',id=75,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFom9y+7W1OzHUVkvflqnu/6xnxZe0N+aQAyRSLRBCSgO6CoYZ20Adms5sFPGUitwuO09dh9qM8uob9/gGVzUyIJo9HanjWjMYRoIceLs8pZBGhLtn51xjZTJ05EGeq1rA==',key_name='tempest-keypair-2136084686',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-o7pshihd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9096405c-eb66-4d27-abbb-e709b767afea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.293 253665 DEBUG nova.network.os_vif_util [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.294 253665 DEBUG nova.network.os_vif_util [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.295 253665 DEBUG os_vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.297 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecdb3a4e-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.304 253665 INFO os_vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac')
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 WARNING nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received unexpected event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with vm_state deleted and task_state None.
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-deleted-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:16 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : haproxy version is 2.8.14-c23fe91
Nov 22 09:25:16 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : path to executable is /usr/sbin/haproxy
Nov 22 09:25:16 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [WARNING]  (331425) : Exiting Master process...
Nov 22 09:25:16 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [ALERT]    (331425) : Current worker (331427) exited with code 143 (Terminated)
Nov 22 09:25:16 compute-0 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [WARNING]  (331425) : All workers exited. Exiting... (0)
Nov 22 09:25:16 compute-0 systemd[1]: libpod-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3.scope: Deactivated successfully.
Nov 22 09:25:16 compute-0 podman[338042]: 2025-11-22 09:25:16.64318171 +0000 UTC m=+0.346353604 container died 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.796 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.796 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.811 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:25:16 compute-0 ceph-mon[75021]: pgmap v1838: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 198 op/s
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.895 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.896 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.903 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:25:16 compute-0 nova_compute[253661]: 2025-11-22 09:25:16.904 253665 INFO nova.compute.claims [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.018 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1019a479c4f9b61416483ab6dc0c9e11f221309433f7114f4bdcf465ffa0866d-merged.mount: Deactivated successfully.
Nov 22 09:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3-userdata-shm.mount: Deactivated successfully.
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.351 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.352 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.352 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.353 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.353 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] No waiting events found dispatching network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.354 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:25:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364463731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.510 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.517 253665 DEBUG nova.compute.provider_tree [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.532 253665 DEBUG nova.scheduler.client.report [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.560 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 189 op/s
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.562 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:25:17 compute-0 podman[338042]: 2025-11-22 09:25:17.591464926 +0000 UTC m=+1.294636830 container cleanup 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:25:17 compute-0 systemd[1]: libpod-conmon-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3.scope: Deactivated successfully.
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.606 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.606 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.621 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:25:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.641 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:25:17 compute-0 podman[338112]: 2025-11-22 09:25:17.701467826 +0000 UTC m=+0.084631717 container remove 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.707 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e16a7a37-6e21-42ab-80c4-767ace086a83]: (4, ('Sat Nov 22 09:25:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 (4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3)\n4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3\nSat Nov 22 09:25:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 (4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3)\n4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[063a8be2-cefb-4e1a-a50b-dbae8b484737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.715 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.716 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.717 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating image(s)
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.739 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:17 compute-0 kernel: tap0936cc0d-30: left promiscuous mode
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.767 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.779 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3e3e73-fcfa-46b9-be10-fdff0e35d8a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.794 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c6be25-4006-4360-bd42-67f38e76c575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef06cc0b-f1e1-4385-a707-4b0fcf451e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.807 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.812 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.818 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0da8bd46-ee2e-4498-b293-48ab2e87b943]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622124, 'reachable_time': 31977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338178, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.821 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:25:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.821 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fbe51f-0fb9-4f7a-8491-b233d48aa448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d0936cc0d\x2d3697\x2d4210\x2d9c23\x2d8f3e8e452e86.mount: Deactivated successfully.
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.869 253665 DEBUG nova.policy [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.910 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.911 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.911 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.912 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.946 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:17 compute-0 nova_compute[253661]: 2025-11-22 09:25:17.952 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bd717644-36b1-45c9-a56f-b2719ae77e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2364463731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.338 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Successfully created port: ca4c64d8-4f02-4ed0-8099-f18eccb17951 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.361 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bd717644-36b1-45c9-a56f-b2719ae77e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.429 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.463 253665 INFO nova.virt.libvirt.driver [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deleting instance files /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea_del
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.464 253665 INFO nova.virt.libvirt.driver [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deletion of /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea_del complete
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.519 253665 DEBUG nova.objects.instance [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 INFO nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 2.69 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG oslo.service.loopingcall [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG nova.network.neutron [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.528 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Ensure instance console log exists: /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:18 compute-0 nova_compute[253661]: 2025-11-22 09:25:18.530 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:18 compute-0 ceph-mon[75021]: pgmap v1839: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 189 op/s
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.468 253665 DEBUG nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.470 253665 DEBUG nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] No waiting events found dispatching network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.470 253665 WARNING nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received unexpected event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for instance with vm_state active and task_state deleting.
Nov 22 09:25:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 369 KiB/s wr, 159 op/s
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.665 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Successfully updated port: ca4c64d8-4f02-4ed0-8099-f18eccb17951 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.683 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.684 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.684 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.715 253665 DEBUG nova.network.neutron [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.747 253665 INFO nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 1.23 seconds to deallocate network for instance.
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.779 253665 DEBUG nova.compute.manager [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-changed-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.780 253665 DEBUG nova.compute.manager [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Refreshing instance network info cache due to event network-changed-ca4c64d8-4f02-4ed0-8099-f18eccb17951. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.780 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.790 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.791 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.869 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:25:19 compute-0 nova_compute[253661]: 2025-11-22 09:25:19.874 253665 DEBUG oslo_concurrency.processutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487432762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.358 253665 DEBUG oslo_concurrency.processutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.367 253665 DEBUG nova.compute.provider_tree [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.381 253665 DEBUG nova.scheduler.client.report [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.399 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.426 253665 INFO nova.scheduler.client.report [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 9096405c-eb66-4d27-abbb-e709b767afea
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.484 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:20 compute-0 nova_compute[253661]: 2025-11-22 09:25:20.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:21 compute-0 ceph-mon[75021]: pgmap v1840: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 369 KiB/s wr, 159 op/s
Nov 22 09:25:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3487432762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.184 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.203 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.204 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance network_info: |[{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.205 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.206 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Refreshing network info cache for port ca4c64d8-4f02-4ed0-8099-f18eccb17951 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.210 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start _get_guest_xml network_info=[{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.217 253665 WARNING nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.225 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.226 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.229 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.236 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.362 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803506.3607337, 672288f2-2f9b-4643-9ebf-a949ad316298 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.363 253665 INFO nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Stopped (Lifecycle Event)
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.381 253665 DEBUG nova.compute.manager [None req-c84b47de-d97d-4e71-8a5b-1b6926e399b0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.557 253665 DEBUG nova.compute.manager [req-037d40a9-371a-4163-a059-29566d7b2bc0 req-ef9fd099-0c29-4404-a3a6-c9727fbb69b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-deleted-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 369 KiB/s wr, 57 op/s
Nov 22 09:25:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783269925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.716 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.745 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:21 compute-0 nova_compute[253661]: 2025-11-22 09:25:21.752 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1783269925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091257609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.240 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.242 253665 DEBUG nova.virt.libvirt.vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:17Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.243 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.244 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.245 253665 DEBUG nova.objects.instance [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.259 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <uuid>bd717644-36b1-45c9-a56f-b2719ae77e72</uuid>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <name>instance-00000053</name>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-219797360</nova:name>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:25:21</nova:creationTime>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <nova:port uuid="ca4c64d8-4f02-4ed0-8099-f18eccb17951">
Nov 22 09:25:22 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <system>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="serial">bd717644-36b1-45c9-a56f-b2719ae77e72</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="uuid">bd717644-36b1-45c9-a56f-b2719ae77e72</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </system>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <os>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </os>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <features>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </features>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bd717644-36b1-45c9-a56f-b2719ae77e72_disk">
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config">
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f3:36:77"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <target dev="tapca4c64d8-4f"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/console.log" append="off"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <video>
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </video>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:25:22 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:25:22 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:25:22 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:25:22 compute-0 nova_compute[253661]: </domain>
Nov 22 09:25:22 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.260 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Preparing to wait for external event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.262 253665 DEBUG nova.virt.libvirt.vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:17Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.263 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.263 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.264 253665 DEBUG os_vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.265 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.266 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.271 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca4c64d8-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.271 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca4c64d8-4f, col_values=(('external_ids', {'iface-id': 'ca4c64d8-4f02-4ed0-8099-f18eccb17951', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:36:77', 'vm-uuid': 'bd717644-36b1-45c9-a56f-b2719ae77e72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.273 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:22 compute-0 NetworkManager[48920]: <info>  [1763803522.2743] manager: (tapca4c64d8-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.284 253665 INFO os_vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f')
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:f3:36:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.337 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Using config drive
Nov 22 09:25:22 compute-0 nova_compute[253661]: 2025-11-22 09:25:22.369 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.024 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803508.0138268, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.024 253665 INFO nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Stopped (Lifecycle Event)
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.039 253665 DEBUG nova.compute.manager [None req-b7756356-006d-422f-ab11-27528b42c9c8 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:23 compute-0 ceph-mon[75021]: pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 369 KiB/s wr, 57 op/s
Nov 22 09:25:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3091257609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 167 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.604 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating config drive at /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.611 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3okub307 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.756 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3okub307" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.784 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:23 compute-0 nova_compute[253661]: 2025-11-22 09:25:23.788 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.643 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updated VIF entry in instance network info cache for port ca4c64d8-4f02-4ed0-8099-f18eccb17951. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.644 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.659 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:24 compute-0 ceph-mon[75021]: pgmap v1842: 305 pgs: 305 active+clean; 167 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.844 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.845 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deleting local config drive /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config because it was imported into RBD.
Nov 22 09:25:24 compute-0 kernel: tapca4c64d8-4f: entered promiscuous mode
Nov 22 09:25:24 compute-0 NetworkManager[48920]: <info>  [1763803524.8946] manager: (tapca4c64d8-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 22 09:25:24 compute-0 ovn_controller[152872]: 2025-11-22T09:25:24Z|00846|binding|INFO|Claiming lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 for this chassis.
Nov 22 09:25:24 compute-0 ovn_controller[152872]: 2025-11-22T09:25:24Z|00847|binding|INFO|ca4c64d8-4f02-4ed0-8099-f18eccb17951: Claiming fa:16:3e:f3:36:77 10.100.0.7
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.903 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:36:77 10.100.0.7'], port_security=['fa:16:3e:f3:36:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bd717644-36b1-45c9-a56f-b2719ae77e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ca4c64d8-4f02-4ed0-8099-f18eccb17951) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.905 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ca4c64d8-4f02-4ed0-8099-f18eccb17951 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.906 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:24 compute-0 ovn_controller[152872]: 2025-11-22T09:25:24Z|00848|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 ovn-installed in OVS
Nov 22 09:25:24 compute-0 ovn_controller[152872]: 2025-11-22T09:25:24Z|00849|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 up in Southbound
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:24 compute-0 nova_compute[253661]: 2025-11-22 09:25:24.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:24 compute-0 systemd-udevd[338451]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d379143-2429-41fc-999c-e7d41eb98d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:24 compute-0 systemd-machined[215941]: New machine qemu-101-instance-00000053.
Nov 22 09:25:24 compute-0 NetworkManager[48920]: <info>  [1763803524.9431] device (tapca4c64d8-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:24 compute-0 NetworkManager[48920]: <info>  [1763803524.9438] device (tapca4c64d8-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:24 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000053.
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.961 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[50cd7b45-b7a4-4598-8243-f7050777db33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.965 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[83005afc-8a02-4137-bb9c-a25ef47daf00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.000 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bb29b5e8-1cbb-470c-8d5a-38bd8b4997d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad477a0-8e9e-45e0-bd26-a99dc589c214]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338465, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57a22df3-ed31-4864-91fd-b58657287a70]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338467, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338467, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.056 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.057 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.057 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.058 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.893 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803525.8927553, bd717644-36b1-45c9-a56f-b2719ae77e72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.895 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Started (Lifecycle Event)
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.914 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.918 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803525.8929062, bd717644-36b1-45c9-a56f-b2719ae77e72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.919 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Paused (Lifecycle Event)
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.937 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.942 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:25 compute-0 nova_compute[253661]: 2025-11-22 09:25:25.960 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:26 compute-0 nova_compute[253661]: 2025-11-22 09:25:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:27 compute-0 ceph-mon[75021]: pgmap v1843: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 22 09:25:27 compute-0 nova_compute[253661]: 2025-11-22 09:25:27.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 22 09:25:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:28 compute-0 nova_compute[253661]: 2025-11-22 09:25:28.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:28 compute-0 ovn_controller[152872]: 2025-11-22T09:25:28Z|00850|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:28 compute-0 nova_compute[253661]: 2025-11-22 09:25:28.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:28 compute-0 ovn_controller[152872]: 2025-11-22T09:25:28Z|00851|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:28 compute-0 nova_compute[253661]: 2025-11-22 09:25:28.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:29 compute-0 ceph-mon[75021]: pgmap v1844: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 22 09:25:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.663 253665 DEBUG nova.compute.manager [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG nova.compute.manager [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Processing event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.665 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.668 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803529.6685505, bd717644-36b1-45c9-a56f-b2719ae77e72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.669 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Resumed (Lifecycle Event)
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.671 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.675 253665 INFO nova.virt.libvirt.driver [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance spawned successfully.
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.675 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.703 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.712 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.717 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.718 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.720 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.782 253665 INFO nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 12.07 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.783 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.795 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.796 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.813 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.864 253665 INFO nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 12.99 seconds to build instance.
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.877 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.894 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.894 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.902 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:25:29 compute-0 nova_compute[253661]: 2025-11-22 09:25:29.902 253665 INFO nova.compute.claims [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.058 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439071742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.607 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.608 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.608 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.620 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.625 253665 DEBUG nova.compute.provider_tree [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.638 253665 DEBUG nova.scheduler.client.report [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.655 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.656 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.696 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.697 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.711 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.726 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.803 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.805 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.805 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating image(s)
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.834 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.863 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.891 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.898 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.957 253665 DEBUG nova.policy [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65c2cce1aec04c50ab2c62bf0b87b756', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.990 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:30 compute-0 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.030 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.040 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.275 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803516.2730086, 9096405c-eb66-4d27-abbb-e709b767afea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.276 253665 INFO nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] VM Stopped (Lifecycle Event)
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.295 253665 DEBUG nova.compute.manager [None req-9dd33d35-a372-4fe5-adb8-a4c8f3ac1587 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:31 compute-0 ceph-mon[75021]: pgmap v1845: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 09:25:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2439071742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.4 MiB/s wr, 111 op/s
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.810 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Successfully created port: 99300516-c832-4292-af8c-850f873b6dda _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.891 253665 DEBUG nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:31 compute-0 nova_compute[253661]: 2025-11-22 09:25:31.893 253665 WARNING nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received unexpected event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with vm_state active and task_state None.
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.091 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.154 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] resizing rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.351776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532351829, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1513, "num_deletes": 253, "total_data_size": 2232418, "memory_usage": 2274736, "flush_reason": "Manual Compaction"}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532369968, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2197897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36284, "largest_seqno": 37796, "table_properties": {"data_size": 2190860, "index_size": 4044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15448, "raw_average_key_size": 20, "raw_value_size": 2176469, "raw_average_value_size": 2875, "num_data_blocks": 180, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803393, "oldest_key_time": 1763803393, "file_creation_time": 1763803532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 18327 microseconds, and 5814 cpu microseconds.
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.370105) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2197897 bytes OK
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.370155) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372505) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372520) EVENT_LOG_v1 {"time_micros": 1763803532372515, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2225678, prev total WAL file size 2225678, number of live WAL files 2.
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.374074) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2146KB)], [80(7874KB)]
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532374112, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10261564, "oldest_snapshot_seqno": -1}
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.417 253665 DEBUG nova.objects.instance [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.432 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.432 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Ensure instance console log exists: /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:32 compute-0 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6264 keys, 8544304 bytes, temperature: kUnknown
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532449847, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8544304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502931, "index_size": 24623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158352, "raw_average_key_size": 25, "raw_value_size": 8390998, "raw_average_value_size": 1339, "num_data_blocks": 994, "num_entries": 6264, "num_filter_entries": 6264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.450266) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8544304 bytes
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.452016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.1 rd, 112.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 6785, records dropped: 521 output_compression: NoCompression
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.452033) EVENT_LOG_v1 {"time_micros": 1763803532452024, "job": 46, "event": "compaction_finished", "compaction_time_micros": 75963, "compaction_time_cpu_micros": 24153, "output_level": 6, "num_output_files": 1, "total_output_size": 8544304, "num_input_records": 6785, "num_output_records": 6264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532452863, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532454374, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.373682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:25:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:33 compute-0 ceph-mon[75021]: pgmap v1846: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.4 MiB/s wr, 111 op/s
Nov 22 09:25:33 compute-0 podman[338699]: 2025-11-22 09:25:33.387936395 +0000 UTC m=+0.073036847 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:25:33 compute-0 podman[338700]: 2025-11-22 09:25:33.396281376 +0000 UTC m=+0.079759868 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.476 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.523 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.557 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.559 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 178 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.732 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Successfully updated port: 99300516-c832-4292-af8c-850f873b6dda _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquired lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:33 compute-0 nova_compute[253661]: 2025-11-22 09:25:33.758 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:25:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.031 253665 DEBUG nova.compute.manager [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-changed-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105432811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.033 253665 DEBUG nova.compute.manager [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Refreshing instance network info cache due to event network-changed-99300516-c832-4292-af8c-850f873b6dda. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.033 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.054 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.072 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.130 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.130 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.134 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.134 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.325 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.326 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.341 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.357 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.359 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.92188262939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.359 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1105432811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.412 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance bd717644-36b1-45c9-a56f-b2719ae77e72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0497bf95-95d6-40fb-8a33-aa3ea54bc542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.450 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 23a926e6-c6a7-4e40-82d1-654f68980549 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.451 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.451 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:25:34 compute-0 nova_compute[253661]: 2025-11-22 09:25:34.570 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1891505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.078 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.090 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.097 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.101 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Releasing lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.102 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance network_info: |[{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.103 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.103 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Refreshing network info cache for port 99300516-c832-4292-af8c-850f873b6dda _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.108 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start _get_guest_xml network_info=[{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.111 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.124 253665 WARNING nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.131 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.132 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.133 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.133 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.140 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.141 253665 INFO nova.compute.claims [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.145 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.145 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.149 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.153 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.312 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:35 compute-0 ceph-mon[75021]: pgmap v1847: 305 pgs: 305 active+clean; 178 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Nov 22 09:25:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1891505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 189 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Nov 22 09:25:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500485100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.660 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.685 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.690 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790578603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:25:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.780 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.788 253665 DEBUG nova.compute.provider_tree [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.801 253665 DEBUG nova.scheduler.client.report [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.824 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.825 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.837 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.838 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.838 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.863 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.864 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.879 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.894 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.964 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:25:35 compute-0 nova_compute[253661]: 2025-11-22 09:25:35.966 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:25:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701533542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.232 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating image(s)
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.262 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.293 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.320 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.325 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.370 253665 DEBUG nova.policy [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.377 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.379 253665 DEBUG nova.virt.libvirt.vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:30Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.380 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.381 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.382 253665 DEBUG nova.objects.instance [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.397 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <uuid>0497bf95-95d6-40fb-8a33-aa3ea54bc542</uuid>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <name>instance-00000054</name>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerTagsTestJSON-server-690745541</nova:name>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:25:35</nova:creationTime>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:user uuid="65c2cce1aec04c50ab2c62bf0b87b756">tempest-ServerTagsTestJSON-1329496417-project-member</nova:user>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:project uuid="40dd9bb14a354dc591ef4aa8f9ab41e4">tempest-ServerTagsTestJSON-1329496417</nova:project>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <nova:port uuid="99300516-c832-4292-af8c-850f873b6dda">
Nov 22 09:25:36 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <system>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="serial">0497bf95-95d6-40fb-8a33-aa3ea54bc542</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="uuid">0497bf95-95d6-40fb-8a33-aa3ea54bc542</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </system>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <os>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </os>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <features>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </features>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk">
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config">
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:38:c9:52"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <target dev="tap99300516-c8"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/console.log" append="off"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <video>
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </video>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:25:36 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:25:36 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:25:36 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:25:36 compute-0 nova_compute[253661]: </domain>
Nov 22 09:25:36 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.405 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Preparing to wait for external event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.406 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.406 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.407 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.408 253665 DEBUG nova.virt.libvirt.vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:30Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.408 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.409 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.409 253665 DEBUG os_vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.411 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.411 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.416 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99300516-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.417 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99300516-c8, col_values=(('external_ids', {'iface-id': '99300516-c832-4292-af8c-850f873b6dda', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:c9:52', 'vm-uuid': '0497bf95-95d6-40fb-8a33-aa3ea54bc542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.418 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.419 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:36 compute-0 NetworkManager[48920]: <info>  [1763803536.4197] manager: (tap99300516-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.419 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.445 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.470 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 23a926e6-c6a7-4e40-82d1-654f68980549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.526 253665 INFO os_vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8')
Nov 22 09:25:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1500485100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1790578603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3701533542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.680 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.681 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.682 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No VIF found with MAC fa:16:3e:38:c9:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.682 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Using config drive
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.726 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.741 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updated VIF entry in instance network info cache for port 99300516-c832-4292-af8c-850f873b6dda. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.742 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.773 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:36 compute-0 nova_compute[253661]: 2025-11-22 09:25:36.950 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 23a926e6-c6a7-4e40-82d1-654f68980549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.026 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.079 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating config drive at /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.084 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7nf8act execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.191 253665 DEBUG nova.objects.instance [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.223 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.223 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Ensure instance console log exists: /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.240 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7nf8act" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.269 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.274 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.327 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Successfully created port: 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.454 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.455 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deleting local config drive /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config because it was imported into RBD.
Nov 22 09:25:37 compute-0 kernel: tap99300516-c8: entered promiscuous mode
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.5173] manager: (tap99300516-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 22 09:25:37 compute-0 ovn_controller[152872]: 2025-11-22T09:25:37Z|00852|binding|INFO|Claiming lport 99300516-c832-4292-af8c-850f873b6dda for this chassis.
Nov 22 09:25:37 compute-0 ovn_controller[152872]: 2025-11-22T09:25:37Z|00853|binding|INFO|99300516-c832-4292-af8c-850f873b6dda: Claiming fa:16:3e:38:c9:52 10.100.0.6
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.544 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.546 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe bound to our chassis
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.548 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe
Nov 22 09:25:37 compute-0 systemd-udevd[339103]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.565 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c956bcf3-ce65-465f-8e56-2ad72fb4790e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.566 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc908d88a-c1 in ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.5705] device (tap99300516-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.5718] device (tap99300516-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.569 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc908d88a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e64e6b-75d7-440f-89b4-87bd6fd10ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02523c78-d774-434b-8ba6-2b537483518c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 219 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Nov 22 09:25:37 compute-0 systemd-machined[215941]: New machine qemu-102-instance-00000054.
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.588 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d400fa50-5170-4cd0-ba26-c8bcbc065389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000054.
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 ovn_controller[152872]: 2025-11-22T09:25:37Z|00854|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda ovn-installed in OVS
Nov 22 09:25:37 compute-0 ovn_controller[152872]: 2025-11-22T09:25:37Z|00855|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda up in Southbound
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.605 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.622 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[89ac1ced-1293-45d7-aa91-a772bec40b1e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:37 compute-0 ceph-mon[75021]: pgmap v1848: 305 pgs: 305 active+clean; 189 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.654 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe38768e-2a08-4267-9406-a8d9c3e82e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 systemd-udevd[339109]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.661 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c50332a7-3f34-499f-a2ca-fe93f538e857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.6629] manager: (tapc908d88a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.701 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7588c73b-a006-4197-a8bc-bd7491f3f7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.704 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84cac2b2-5ec7-49dc-bccb-c2b495466213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.7373] device (tapc908d88a-c0): carrier: link connected
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.746 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d044d72-1d8b-45c4-8b03-429ba3ed4e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.769 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d5b0ea-fce4-4e0e-beb7-21480cb975ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc908d88a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:2a:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638847, 'reachable_time': 35601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339139, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.792 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8abfaef4-90c6-45c7-970d-caf3f14a4dcf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:2a1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638847, 'tstamp': 638847}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339140, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d01bbdc-f5ad-4df7-ae82-a49fda560413]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc908d88a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:2a:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638847, 'reachable_time': 35601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339141, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.848 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1236df-62df-4631-9477-0f3382b7d314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.915 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5eb917b-4437-4e8f-9d11-1d879feb4ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.917 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc908d88a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc908d88a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 NetworkManager[48920]: <info>  [1763803537.9585] manager: (tapc908d88a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 22 09:25:37 compute-0 kernel: tapc908d88a-c0: entered promiscuous mode
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.963 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc908d88a-c0, col_values=(('external_ids', {'iface-id': '22da3234-fb70-4ef8-828a-a612debb32b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 ovn_controller[152872]: 2025-11-22T09:25:37Z|00856|binding|INFO|Releasing lport 22da3234-fb70-4ef8-828a-a612debb32b7 from this chassis (sb_readonly=0)
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 nova_compute[253661]: 2025-11-22 09:25:37.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.989 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e16b7bf-6607-42f8-a628-a16449ad8728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.991 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID c908d88a-c35e-45c0-9b18-f3ea9ab34dfe
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:25:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.995 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'env', 'PROCESS_TAG=haproxy-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.191 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803538.1905398, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Started (Lifecycle Event)
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.215 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.223 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803538.19076, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Paused (Lifecycle Event)
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.247 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.255 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Successfully updated port: 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.268 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.269 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.269 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:25:38 compute-0 podman[339215]: 2025-11-22 09:25:38.440875044 +0000 UTC m=+0.064854058 container create 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.440 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:25:38 compute-0 systemd[1]: Started libpod-conmon-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope.
Nov 22 09:25:38 compute-0 podman[339215]: 2025-11-22 09:25:38.399878633 +0000 UTC m=+0.023857667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:25:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30b3101e7c9381790a2a0961a3d0682b383a5a7c70453278a7e7b8e52e2486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:38 compute-0 podman[339215]: 2025-11-22 09:25:38.543068325 +0000 UTC m=+0.167047369 container init 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:25:38 compute-0 podman[339215]: 2025-11-22 09:25:38.550226249 +0000 UTC m=+0.174205263 container start 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:25:38 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : New worker (339236) forked
Nov 22 09:25:38 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : Loading success.
Nov 22 09:25:38 compute-0 ceph-mon[75021]: pgmap v1849: 305 pgs: 305 active+clean; 219 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.694 253665 DEBUG nova.compute.manager [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-changed-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.695 253665 DEBUG nova.compute.manager [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Refreshing instance network info cache due to event network-changed-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:38 compute-0 nova_compute[253661]: 2025-11-22 09:25:38.695 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.407 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.422 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.422 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance network_info: |[{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.423 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.423 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Refreshing network info cache for port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.426 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start _get_guest_xml network_info=[{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.431 253665 WARNING nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.438 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.438 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.445 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.446 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.453 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.635 253665 DEBUG nova.compute.manager [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.635 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG nova.compute.manager [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Processing event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.637 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.641 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803539.6415293, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.642 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Resumed (Lifecycle Event)
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.645 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.649 253665 INFO nova.virt.libvirt.driver [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance spawned successfully.
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.650 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.664 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.673 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.677 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.679 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.679 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.743 253665 INFO nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 8.94 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.744 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.814 253665 INFO nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 9.95 seconds to build instance.
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.833 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838099758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:39 compute-0 nova_compute[253661]: 2025-11-22 09:25:39.986 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.025 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.030 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:40 compute-0 podman[339306]: 2025-11-22 09:25:40.415291949 +0000 UTC m=+0.105991484 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.528 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updated VIF entry in instance network info cache for port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.529 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.542 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013202772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.571 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.573 253665 DEBUG nova.virt.libvirt.vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:35Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.573 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.574 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.576 253665 DEBUG nova.objects.instance [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.592 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <uuid>23a926e6-c6a7-4e40-82d1-654f68980549</uuid>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <name>instance-00000055</name>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-219797360</nova:name>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:25:39</nova:creationTime>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <nova:port uuid="9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d">
Nov 22 09:25:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <system>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="serial">23a926e6-c6a7-4e40-82d1-654f68980549</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="uuid">23a926e6-c6a7-4e40-82d1-654f68980549</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </system>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <os>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </os>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <features>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </features>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/23a926e6-c6a7-4e40-82d1-654f68980549_disk">
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/23a926e6-c6a7-4e40-82d1-654f68980549_disk.config">
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:77:c0:58"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <target dev="tap9b10c9c8-f1"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/console.log" append="off"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <video>
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </video>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:25:40 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:25:40 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:25:40 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:25:40 compute-0 nova_compute[253661]: </domain>
Nov 22 09:25:40 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.593 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Preparing to wait for external event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.593 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.594 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.594 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.595 253665 DEBUG nova.virt.libvirt.vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:35Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.595 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG os_vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.597 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.597 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b10c9c8-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b10c9c8-f1, col_values=(('external_ids', {'iface-id': '9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:c0:58', 'vm-uuid': '23a926e6-c6a7-4e40-82d1-654f68980549'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:40 compute-0 NetworkManager[48920]: <info>  [1763803540.6029] manager: (tap9b10c9c8-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.612 253665 INFO os_vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1')
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.670 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.670 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.671 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:77:c0:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.671 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Using config drive
Nov 22 09:25:40 compute-0 nova_compute[253661]: 2025-11-22 09:25:40.694 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:40 compute-0 ceph-mon[75021]: pgmap v1850: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 22 09:25:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2838099758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4013202772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.198 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating config drive at /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.205 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnjy2vde execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.371 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnjy2vde" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.399 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.403 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.758 253665 DEBUG nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.759 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.759 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 DEBUG nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] No waiting events found dispatching network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:41 compute-0 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 WARNING nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received unexpected event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda for instance with vm_state active and task_state None.
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.268 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.865s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.269 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deleting local config drive /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config because it was imported into RBD.
Nov 22 09:25:42 compute-0 kernel: tap9b10c9c8-f1: entered promiscuous mode
Nov 22 09:25:42 compute-0 NetworkManager[48920]: <info>  [1763803542.3413] manager: (tap9b10c9c8-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00857|binding|INFO|Claiming lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for this chassis.
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00858|binding|INFO|9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d: Claiming fa:16:3e:77:c0:58 10.100.0.14
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.357 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:c0:58 10.100.0.14'], port_security=['fa:16:3e:77:c0:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23a926e6-c6a7-4e40-82d1-654f68980549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.359 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00859|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d ovn-installed in OVS
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00860|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d up in Southbound
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 systemd-udevd[339409]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.386 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d671a46-a46b-4c01-a587-bb7662fcb94a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 systemd-machined[215941]: New machine qemu-103-instance-00000055.
Nov 22 09:25:42 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000055.
Nov 22 09:25:42 compute-0 NetworkManager[48920]: <info>  [1763803542.4020] device (tap9b10c9c8-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:42 compute-0 NetworkManager[48920]: <info>  [1763803542.4039] device (tap9b10c9c8-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.427 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[db1a9ea6-c4fe-4fc0-9540-df92d79a1022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ceedf5-4924-47c8-946c-a59c312988df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.463 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44dfa6d9-8863-4185-9429-5160585e068d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.493 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df8bcd63-f132-423b-8d47-96b2fe3761a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339421, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.510 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.510 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.511 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.511 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.512 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.513 253665 INFO nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Terminating instance
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.515 253665 DEBUG nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3997b5-4648-4afb-859b-43e1aca37b88]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339423, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339423, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.516 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.522 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.522 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:42 compute-0 kernel: tap99300516-c8 (unregistering): left promiscuous mode
Nov 22 09:25:42 compute-0 NetworkManager[48920]: <info>  [1763803542.5638] device (tap99300516-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00861|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=0)
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00862|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda down in Southbound
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00863|binding|INFO|Removing iface tap99300516-c8 ovn-installed in OVS
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.580 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.581 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.583 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.584 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[896fb62d-75a2-496d-b55e-d773bc0ad92a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.585 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe namespace which is not needed anymore
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 22 09:25:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Consumed 3.450s CPU time.
Nov 22 09:25:42 compute-0 systemd-machined[215941]: Machine qemu-102-instance-00000054 terminated.
Nov 22 09:25:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:42 compute-0 kernel: tap99300516-c8: entered promiscuous mode
Nov 22 09:25:42 compute-0 NetworkManager[48920]: <info>  [1763803542.7368] manager: (tap99300516-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Nov 22 09:25:42 compute-0 systemd-udevd[339413]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:42 compute-0 kernel: tap99300516-c8 (unregistering): left promiscuous mode
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00864|binding|INFO|Claiming lport 99300516-c832-4292-af8c-850f873b6dda for this chassis.
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00865|binding|INFO|99300516-c832-4292-af8c-850f873b6dda: Claiming fa:16:3e:38:c9:52 10.100.0.6
Nov 22 09:25:42 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : haproxy version is 2.8.14-c23fe91
Nov 22 09:25:42 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : path to executable is /usr/sbin/haproxy
Nov 22 09:25:42 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [WARNING]  (339234) : Exiting Master process...
Nov 22 09:25:42 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [ALERT]    (339234) : Current worker (339236) exited with code 143 (Terminated)
Nov 22 09:25:42 compute-0 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [WARNING]  (339234) : All workers exited. Exiting... (0)
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.799 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:42 compute-0 systemd[1]: libpod-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope: Deactivated successfully.
Nov 22 09:25:42 compute-0 ceph-mon[75021]: pgmap v1851: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Nov 22 09:25:42 compute-0 podman[339444]: 2025-11-22 09:25:42.813746964 +0000 UTC m=+0.097716653 container died 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.813 253665 INFO nova.virt.libvirt.driver [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance destroyed successfully.
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.814 253665 DEBUG nova.objects.instance [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'resources' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00866|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda ovn-installed in OVS
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00867|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda up in Southbound
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00868|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=1)
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00869|if_status|INFO|Dropped 2 log messages in last 236 seconds (most recently, 236 seconds ago) due to excessive rate
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00870|if_status|INFO|Not setting lport 99300516-c832-4292-af8c-850f873b6dda down as sb is readonly
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00871|binding|INFO|Removing iface tap99300516-c8 ovn-installed in OVS
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00872|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=1)
Nov 22 09:25:42 compute-0 ovn_controller[152872]: 2025-11-22T09:25:42Z|00873|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda down in Southbound
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.828 253665 DEBUG nova.virt.libvirt.vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:39Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.830 253665 DEBUG nova.network.os_vif_util [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.831 253665 DEBUG nova.network.os_vif_util [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.831 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.831 253665 DEBUG os_vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.835 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99300516-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.845 253665 INFO os_vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8')
Nov 22 09:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4-userdata-shm.mount: Deactivated successfully.
Nov 22 09:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c30b3101e7c9381790a2a0961a3d0682b383a5a7c70453278a7e7b8e52e2486-merged.mount: Deactivated successfully.
Nov 22 09:25:42 compute-0 podman[339444]: 2025-11-22 09:25:42.873363625 +0000 UTC m=+0.157333314 container cleanup 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:25:42 compute-0 systemd[1]: libpod-conmon-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope: Deactivated successfully.
Nov 22 09:25:42 compute-0 podman[339490]: 2025-11-22 09:25:42.951006923 +0000 UTC m=+0.051522717 container remove 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd9adc1-c32b-4bfe-b495-ab347c88f5ce]: (4, ('Sat Nov 22 09:25:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe (5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4)\n5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4\nSat Nov 22 09:25:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe (5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4)\n5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.962 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eca620-9dfb-4b95-8d37-922897a2cc41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.963 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc908d88a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 kernel: tapc908d88a-c0: left promiscuous mode
Nov 22 09:25:42 compute-0 nova_compute[253661]: 2025-11-22 09:25:42.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b71857e-e14a-416f-99fb-05ca46cbbf16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.003 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2e0ef3-e41d-4e63-800d-e258baab7b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5baa690b-ee2c-4206-a909-4b72b0b5e86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad6bad5-4f68-4618-97e0-ce7b12a68afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638838, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339508, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc908d88a\x2dc35e\x2d45c0\x2d9b18\x2df3ea9ab34dfe.mount: Deactivated successfully.
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.050 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.050 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8feae550-c753-4aa5-b6d2-ea441b3c943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6ee6d8-1428-4ba5-990b-3fd0a799f2e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.056 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:25:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.058 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[830ad58d-af72-46ee-a2ed-5c2e75f4bac3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.343 253665 INFO nova.virt.libvirt.driver [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deleting instance files /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542_del
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.344 253665 INFO nova.virt.libvirt.driver [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deletion of /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542_del complete
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.396 253665 INFO nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG oslo.service.loopingcall [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG nova.network.neutron [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.470 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803543.4700127, 23a926e6-c6a7-4e40-82d1-654f68980549 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.471 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Started (Lifecycle Event)
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.492 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803543.4709501, 23a926e6-c6a7-4e40-82d1-654f68980549 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.492 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Paused (Lifecycle Event)
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.511 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.515 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:43 compute-0 nova_compute[253661]: 2025-11-22 09:25:43.537 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 266 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.113 253665 DEBUG nova.network.neutron [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.129 253665 INFO nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 0.73 seconds to deallocate network for instance.
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.186 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.187 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.295 253665 DEBUG oslo_concurrency.processutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.341 253665 DEBUG nova.compute.manager [req-52a5caf9-7894-4341-b4a8-1133d10fbd51 req-ae902705-3f30-43d7-808e-2b975e72eeba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-deleted-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292005100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:44 compute-0 ovn_controller[152872]: 2025-11-22T09:25:44Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:36:77 10.100.0.7
Nov 22 09:25:44 compute-0 ovn_controller[152872]: 2025-11-22T09:25:44Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:36:77 10.100.0.7
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.784 253665 DEBUG oslo_concurrency.processutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.791 253665 DEBUG nova.compute.provider_tree [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.806 253665 DEBUG nova.scheduler.client.report [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.828 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:44 compute-0 ceph-mon[75021]: pgmap v1852: 305 pgs: 305 active+clean; 266 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 09:25:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4292005100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.852 253665 INFO nova.scheduler.client.report [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Deleted allocations for instance 0497bf95-95d6-40fb-8a33-aa3ea54bc542
Nov 22 09:25:44 compute-0 nova_compute[253661]: 2025-11-22 09:25:44.905 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 265 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.632 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.633 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.649 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.720 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.721 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.729 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.729 253665 INFO nova.compute.claims [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.875 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.951 253665 DEBUG nova.compute.manager [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.951 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG nova.compute.manager [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Processing event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.953 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.957 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803545.9569583, 23a926e6-c6a7-4e40-82d1-654f68980549 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.957 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Resumed (Lifecycle Event)
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.960 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.963 253665 INFO nova.virt.libvirt.driver [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance spawned successfully.
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.963 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.984 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:45 compute-0 nova_compute[253661]: 2025-11-22 09:25:45.989 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.003 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.004 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.006 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.010 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.053 253665 INFO nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 10.09 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.054 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.107 253665 INFO nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 11.71 seconds to build instance.
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.121 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868983397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.339 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.345 253665 DEBUG nova.compute.provider_tree [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.361 253665 DEBUG nova.scheduler.client.report [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.382 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.383 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.444 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.444 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.462 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.485 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.589 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.591 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating image(s)
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.616 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.648 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.677 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.682 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.730 253665 DEBUG nova.policy [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.770 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.772 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.773 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.773 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.800 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:46 compute-0 nova_compute[253661]: 2025-11-22 09:25:46.808 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e4f9440c-7476-4022-8d08-1b3151a9db79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:46 compute-0 ceph-mon[75021]: pgmap v1853: 305 pgs: 305 active+clean; 265 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Nov 22 09:25:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3868983397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.238 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e4f9440c-7476-4022-8d08-1b3151a9db79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.310 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.419 253665 DEBUG nova.objects.instance [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.434 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.435 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ensure instance console log exists: /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.436 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.436 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.437 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 196 op/s
Nov 22 09:25:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:47 compute-0 ovn_controller[152872]: 2025-11-22T09:25:47Z|00874|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:47 compute-0 nova_compute[253661]: 2025-11-22 09:25:47.933 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Successfully created port: b1fc96be-009e-46a8-829c-b7a0bc42af60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.490 253665 DEBUG nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.490 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:48 compute-0 nova_compute[253661]: 2025-11-22 09:25:48.492 253665 WARNING nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received unexpected event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with vm_state active and task_state None.
Nov 22 09:25:48 compute-0 ceph-mon[75021]: pgmap v1854: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 196 op/s
Nov 22 09:25:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 270 op/s
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.204 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Successfully updated port: b1fc96be-009e-46a8-829c-b7a0bc42af60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.284 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.285 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.285 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.449 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.451 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.451 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.452 253665 INFO nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Terminating instance
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.453 253665 DEBUG nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.539 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.573 253665 DEBUG nova.compute.manager [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.574 253665 DEBUG nova.compute.manager [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.575 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:50 compute-0 kernel: tap9b10c9c8-f1 (unregistering): left promiscuous mode
Nov 22 09:25:50 compute-0 NetworkManager[48920]: <info>  [1763803550.5870] device (tap9b10c9c8-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:50 compute-0 ovn_controller[152872]: 2025-11-22T09:25:50Z|00875|binding|INFO|Releasing lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d from this chassis (sb_readonly=0)
Nov 22 09:25:50 compute-0 ovn_controller[152872]: 2025-11-22T09:25:50Z|00876|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d down in Southbound
Nov 22 09:25:50 compute-0 ovn_controller[152872]: 2025-11-22T09:25:50Z|00877|binding|INFO|Removing iface tap9b10c9c8-f1 ovn-installed in OVS
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.603 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:c0:58 10.100.0.14'], port_security=['fa:16:3e:77:c0:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23a926e6-c6a7-4e40-82d1-654f68980549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.605 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.608 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9676e337-44c0-4e01-a6b0-00ac8901fdc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 22 09:25:50 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000055.scope: Consumed 5.575s CPU time.
Nov 22 09:25:50 compute-0 systemd-machined[215941]: Machine qemu-103-instance-00000055 terminated.
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.676 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b416932f-6e34-4805-8e18-265010823337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.681 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f660ea2-6275-4866-828a-0baf4745a6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.715 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7f740d16-a7bf-4a2d-97c1-9141df188cc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.739 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5739b6c-c840-4e48-986f-c923634f8de9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339773, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a454fd9-59c1-469b-a29a-11d45e4152a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.760 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.770 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.771 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.771 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.772 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.905 253665 INFO nova.virt.libvirt.driver [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance destroyed successfully.
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.906 253665 DEBUG nova.objects.instance [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.920 253665 DEBUG nova.virt.libvirt.vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:46Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.920 253665 DEBUG nova.network.os_vif_util [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.921 253665 DEBUG nova.network.os_vif_util [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.921 253665 DEBUG os_vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.923 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b10c9c8-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:50 compute-0 nova_compute[253661]: 2025-11-22 09:25:50.930 253665 INFO os_vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1')
Nov 22 09:25:50 compute-0 ceph-mon[75021]: pgmap v1855: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 270 op/s
Nov 22 09:25:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 252 op/s
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.780 253665 INFO nova.virt.libvirt.driver [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deleting instance files /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549_del
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.782 253665 INFO nova.virt.libvirt.driver [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deletion of /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549_del complete
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.828 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.840 253665 INFO nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 1.39 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.842 253665 DEBUG oslo.service.loopingcall [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.843 253665 DEBUG nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.843 253665 DEBUG nova.network.neutron [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.847 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance network_info: |[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.852 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start _get_guest_xml network_info=[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.857 253665 WARNING nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.867 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.867 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.875 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.876 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.876 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.879 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.879 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:25:51 compute-0 nova_compute[253661]: 2025-11-22 09:25:51.882 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:25:52
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images']
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:25:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880324457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.332 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.358 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.364 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.665 253665 DEBUG nova.network.neutron [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.689 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.690 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.694 253665 WARNING nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received unexpected event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with vm_state active and task_state deleting.
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.697 253665 INFO nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 0.85 seconds to deallocate network for instance.
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:25:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.743 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.744 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.849 253665 DEBUG oslo_concurrency.processutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:25:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391007212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.915 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.918 253665 DEBUG nova.virt.libvirt.vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.918 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.920 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.921 253665 DEBUG nova.objects.instance [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.936 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <uuid>e4f9440c-7476-4022-8d08-1b3151a9db79</uuid>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <name>instance-00000056</name>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherB-server-1215087159</nova:name>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:25:51</nova:creationTime>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <nova:port uuid="b1fc96be-009e-46a8-829c-b7a0bc42af60">
Nov 22 09:25:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="serial">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="uuid">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk">
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config">
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:25:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:38:67:ca"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <target dev="tapb1fc96be-00"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log" append="off"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:25:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:25:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:25:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:25:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:25:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.937 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Preparing to wait for external event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.939 253665 DEBUG nova.virt.libvirt.vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.940 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.940 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.941 253665 DEBUG os_vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.942 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.943 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.947 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1fc96be-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:52 compute-0 ceph-mon[75021]: pgmap v1856: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 252 op/s
Nov 22 09:25:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/880324457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3391007212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.948 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1fc96be-00, col_values=(('external_ids', {'iface-id': 'b1fc96be-009e-46a8-829c-b7a0bc42af60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:67:ca', 'vm-uuid': 'e4f9440c-7476-4022-8d08-1b3151a9db79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:52 compute-0 NetworkManager[48920]: <info>  [1763803552.9517] manager: (tapb1fc96be-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:25:52 compute-0 nova_compute[253661]: 2025-11-22 09:25:52.959 253665 INFO os_vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.017 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.017 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.018 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:38:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.018 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Using config drive
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.042 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.343 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating config drive at /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.351 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0_660or execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433808988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.390 253665 DEBUG oslo_concurrency.processutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.397 253665 DEBUG nova.compute.provider_tree [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.413 253665 DEBUG nova.scheduler.client.report [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.431 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.454 253665 INFO nova.scheduler.client.report [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 23a926e6-c6a7-4e40-82d1-654f68980549
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.501 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0_660or" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.526 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.531 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.576 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.577 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 271 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.582 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.598 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.703 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.704 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting local config drive /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config because it was imported into RBD.
Nov 22 09:25:53 compute-0 kernel: tapb1fc96be-00: entered promiscuous mode
Nov 22 09:25:53 compute-0 NetworkManager[48920]: <info>  [1763803553.7573] manager: (tapb1fc96be-00): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Nov 22 09:25:53 compute-0 ovn_controller[152872]: 2025-11-22T09:25:53Z|00878|binding|INFO|Claiming lport b1fc96be-009e-46a8-829c-b7a0bc42af60 for this chassis.
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:53 compute-0 ovn_controller[152872]: 2025-11-22T09:25:53Z|00879|binding|INFO|b1fc96be-009e-46a8-829c-b7a0bc42af60: Claiming fa:16:3e:38:67:ca 10.100.0.10
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.770 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.772 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.773 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:25:53 compute-0 systemd-udevd[339960]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d169feb3-9bcd-463d-8095-dba283416a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.791 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape37df2c8-41 in ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.793 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape37df2c8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8df565f-e9d4-44e0-a0b7-7d3f5572a828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c816bab-32b6-46a7-aa4c-101a454e2991]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 systemd-machined[215941]: New machine qemu-104-instance-00000056.
Nov 22 09:25:53 compute-0 NetworkManager[48920]: <info>  [1763803553.8031] device (tapb1fc96be-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:25:53 compute-0 NetworkManager[48920]: <info>  [1763803553.8047] device (tapb1fc96be-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.811 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2e63f22e-4b4d-485c-8336-5b9120be4605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38dde595-8d63-446c-82d8-91cc6ed8cbc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:53 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000056.
Nov 22 09:25:53 compute-0 ovn_controller[152872]: 2025-11-22T09:25:53Z|00880|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 ovn-installed in OVS
Nov 22 09:25:53 compute-0 ovn_controller[152872]: 2025-11-22T09:25:53Z|00881|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 up in Southbound
Nov 22 09:25:53 compute-0 nova_compute[253661]: 2025-11-22 09:25:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.866 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d55adf9c-3d03-4161-9b41-9c6b9a74f2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 NetworkManager[48920]: <info>  [1763803553.8725] manager: (tape37df2c8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f62adfa-c57d-4016-b952-1d7b6124ba3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.910 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec92e5e9-9d48-4ae5-85fc-647074c80fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b95a2672-b32b-44ef-9fb2-a403363a61ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 NetworkManager[48920]: <info>  [1763803553.9469] device (tape37df2c8-40): carrier: link connected
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.953 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c8834d66-33c9-452c-867a-b76eafb8f5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3433808988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.976 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fab7fd6-0a22-4cab-828f-c0d0dbd1e0be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339993, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34ce3ce6-c88f-4ecf-9ebd-753c9a3439d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:c448'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640468, 'tstamp': 640468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339994, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b09f52b6-28e7-40b6-b34c-da12ffdfbbbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339995, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.039 253665 DEBUG nova.compute.manager [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.039 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG nova.compute.manager [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Processing event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.085 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0f7996-c535-4045-a42f-9fff1ba9b6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.166 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[519602a0-8b80-4b3b-a11a-43373b77e997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.168 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.168 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.169 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:54 compute-0 kernel: tape37df2c8-40: entered promiscuous mode
Nov 22 09:25:54 compute-0 NetworkManager[48920]: <info>  [1763803554.1713] manager: (tape37df2c8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.174 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:54 compute-0 ovn_controller[152872]: 2025-11-22T09:25:54Z|00882|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.175 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.191 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01d8164f-1965-42f5-badd-55bc471ec1bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.193 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:25:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.194 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'env', 'PROCESS_TAG=haproxy-e37df2c8-4dc4-418d-92f1-b394537a30da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e37df2c8-4dc4-418d-92f1-b394537a30da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.277 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.276151, e4f9440c-7476-4022-8d08-1b3151a9db79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.277 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Started (Lifecycle Event)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.280 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.284 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.288 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance spawned successfully.
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.288 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.295 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.299 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.309 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.309 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.310 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.310 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.318 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.2764754, e4f9440c-7476-4022-8d08-1b3151a9db79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.318 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Paused (Lifecycle Event)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.342 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.347 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.2832325, e4f9440c-7476-4022-8d08-1b3151a9db79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.347 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Resumed (Lifecycle Event)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.365 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.370 253665 INFO nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.78 seconds to spawn the instance on the hypervisor.
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.371 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.372 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.397 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.431 253665 INFO nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 8.73 seconds to build instance.
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.449 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:54 compute-0 podman[340068]: 2025-11-22 09:25:54.620391472 +0000 UTC m=+0.063495716 container create ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:25:54 compute-0 systemd[1]: Started libpod-conmon-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope.
Nov 22 09:25:54 compute-0 podman[340068]: 2025-11-22 09:25:54.584948615 +0000 UTC m=+0.028052889 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:25:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1bcf14e6b9291c965d546b0551f59e40ba033f0d7af433bf80952dca416338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:25:54 compute-0 podman[340068]: 2025-11-22 09:25:54.7278564 +0000 UTC m=+0.170960644 container init ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:25:54 compute-0 podman[340068]: 2025-11-22 09:25:54.734792938 +0000 UTC m=+0.177897182 container start ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:25:54 compute-0 nova_compute[253661]: 2025-11-22 09:25:54.755 253665 DEBUG nova.compute.manager [req-a4618cee-dc0a-4c56-8f75-949a07efba84 req-08757fde-9903-41bf-8a26-6040eb324c02 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-deleted-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:54 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : New worker (340089) forked
Nov 22 09:25:54 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : Loading success.
Nov 22 09:25:54 compute-0 ceph-mon[75021]: pgmap v1857: 305 pgs: 305 active+clean; 271 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.289 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.292 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.293 253665 INFO nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Terminating instance
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.294 253665 DEBUG nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:25:55 compute-0 kernel: tapca4c64d8-4f (unregistering): left promiscuous mode
Nov 22 09:25:55 compute-0 NetworkManager[48920]: <info>  [1763803555.3848] device (tapca4c64d8-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:25:55 compute-0 ovn_controller[152872]: 2025-11-22T09:25:55Z|00883|binding|INFO|Releasing lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 from this chassis (sb_readonly=0)
Nov 22 09:25:55 compute-0 ovn_controller[152872]: 2025-11-22T09:25:55Z|00884|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 down in Southbound
Nov 22 09:25:55 compute-0 ovn_controller[152872]: 2025-11-22T09:25:55Z|00885|binding|INFO|Removing iface tapca4c64d8-4f ovn-installed in OVS
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.415 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:36:77 10.100.0.7'], port_security=['fa:16:3e:f3:36:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bd717644-36b1-45c9-a56f-b2719ae77e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ca4c64d8-4f02-4ed0-8099-f18eccb17951) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.416 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ca4c64d8-4f02-4ed0-8099-f18eccb17951 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.420 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.418 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66a98a41-0cd4-4517-844c-1febd299633a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 22 09:25:55 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Consumed 14.435s CPU time.
Nov 22 09:25:55 compute-0 systemd-machined[215941]: Machine qemu-101-instance-00000053 terminated.
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.477 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e449d1aa-9de3-4b12-bc8e-a996091eee47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.481 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4daecd59-4259-42c7-80ac-b3ed50d872e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 NetworkManager[48920]: <info>  [1763803555.5187] manager: (tapca4c64d8-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.521 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84b317cd-0e60-4ee4-a391-611221b19850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.539 253665 INFO nova.virt.libvirt.driver [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance destroyed successfully.
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.540 253665 DEBUG nova.objects.instance [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98e4f6f2-afff-4e8e-84d2-8a0dafe11059]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340112, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.550 253665 DEBUG nova.virt.libvirt.vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:29Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.551 253665 DEBUG nova.network.os_vif_util [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.552 253665 DEBUG nova.network.os_vif_util [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.552 253665 DEBUG os_vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.555 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca4c64d8-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.564 253665 INFO os_vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f')
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66c384b4-d416-4ff9-8898-26e363071beb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340118, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340118, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.572 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:25:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 246 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 221 op/s
Nov 22 09:25:55 compute-0 nova_compute[253661]: 2025-11-22 09:25:55.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:25:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.089 253665 INFO nova.virt.libvirt.driver [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deleting instance files /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72_del
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.090 253665 INFO nova.virt.libvirt.driver [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deletion of /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72_del complete
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.163 253665 INFO nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.164 253665 DEBUG oslo.service.loopingcall [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.164 253665 DEBUG nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.165 253665 DEBUG nova.network.neutron [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.316 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.316 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 WARNING nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state None.
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.320 253665 WARNING nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received unexpected event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with vm_state active and task_state deleting.
Nov 22 09:25:56 compute-0 ovn_controller[152872]: 2025-11-22T09:25:56Z|00886|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:25:56 compute-0 ovn_controller[152872]: 2025-11-22T09:25:56Z|00887|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:56 compute-0 NetworkManager[48920]: <info>  [1763803556.3934] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:56 compute-0 NetworkManager[48920]: <info>  [1763803556.3945] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Nov 22 09:25:56 compute-0 ovn_controller[152872]: 2025-11-22T09:25:56Z|00888|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:25:56 compute-0 ovn_controller[152872]: 2025-11-22T09:25:56Z|00889|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.852 253665 DEBUG nova.compute.manager [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.852 253665 DEBUG nova.compute.manager [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:25:56 compute-0 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:25:56 compute-0 ceph-mon[75021]: pgmap v1858: 305 pgs: 305 active+clean; 246 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 221 op/s
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.194 253665 DEBUG nova.network.neutron [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.224 253665 INFO nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 1.06 seconds to deallocate network for instance.
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.265 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.266 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.356 253665 DEBUG oslo_concurrency.processutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:25:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 218 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Nov 22 09:25:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.810 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803542.808146, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.811 253665 INFO nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Stopped (Lifecycle Event)
Nov 22 09:25:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:25:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253257867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.838 253665 DEBUG nova.compute.manager [None req-3200093c-3cf0-4763-b879-8a78b3e30a7a - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.857 253665 DEBUG oslo_concurrency.processutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.866 253665 DEBUG nova.compute.provider_tree [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.882 253665 DEBUG nova.scheduler.client.report [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.908 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:57 compute-0 nova_compute[253661]: 2025-11-22 09:25:57.936 253665 INFO nova.scheduler.client.report [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance bd717644-36b1-45c9-a56f-b2719ae77e72
Nov 22 09:25:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2253257867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:25:58 compute-0 nova_compute[253661]: 2025-11-22 09:25:58.014 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:25:58 compute-0 nova_compute[253661]: 2025-11-22 09:25:58.078 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:25:58 compute-0 nova_compute[253661]: 2025-11-22 09:25:58.078 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:25:58 compute-0 nova_compute[253661]: 2025-11-22 09:25:58.102 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:25:58 compute-0 nova_compute[253661]: 2025-11-22 09:25:58.430 253665 DEBUG nova.compute.manager [req-1192df44-1692-471e-ba8f-fc5496fe9571 req-96b2908d-abeb-41f8-b139-8b9b2282f236 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-deleted-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:25:58 compute-0 ceph-mon[75021]: pgmap v1859: 305 pgs: 305 active+clean; 218 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Nov 22 09:25:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 232 op/s
Nov 22 09:26:00 compute-0 nova_compute[253661]: 2025-11-22 09:26:00.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:00 compute-0 nova_compute[253661]: 2025-11-22 09:26:00.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:00 compute-0 ceph-mon[75021]: pgmap v1860: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 232 op/s
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.103 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.104 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.119 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.195 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.195 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.203 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.204 253665 INFO nova.compute.claims [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.332 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 09:26:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236492097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.801 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.808 253665 DEBUG nova.compute.provider_tree [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.822 253665 DEBUG nova.scheduler.client.report [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.851 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.851 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.892 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.895 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.923 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:26:01 compute-0 nova_compute[253661]: 2025-11-22 09:26:01.947 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.045 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.046 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.047 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating image(s)
Nov 22 09:26:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2236492097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.102 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.131 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.157 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.162 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.198 253665 DEBUG nova.policy [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.239 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.240 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.240 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.241 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.261 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:02 compute-0 nova_compute[253661]: 2025-11-22 09:26:02.265 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cca5bcee-0493-45bc-976f-32bd793dbf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011072414045712052 of space, bias 1.0, pg target 0.33217242137136155 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:26:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.755391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562755428, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 451685, "memory_usage": 461464, "flush_reason": "Manual Compaction"}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562796431, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 447306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37797, "largest_seqno": 38312, "table_properties": {"data_size": 444453, "index_size": 825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6689, "raw_average_key_size": 18, "raw_value_size": 438753, "raw_average_value_size": 1205, "num_data_blocks": 37, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803533, "oldest_key_time": 1763803533, "file_creation_time": 1763803562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 41104 microseconds, and 2116 cpu microseconds.
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.796491) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 447306 bytes OK
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.796519) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814360) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814407) EVENT_LOG_v1 {"time_micros": 1763803562814395, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 448682, prev total WAL file size 448682, number of live WAL files 2.
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.815149) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(436KB)], [83(8344KB)]
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562815188, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8991610, "oldest_snapshot_seqno": -1}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6110 keys, 8875933 bytes, temperature: kUnknown
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562896481, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8875933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8834662, "index_size": 24905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156115, "raw_average_key_size": 25, "raw_value_size": 8724516, "raw_average_value_size": 1427, "num_data_blocks": 1002, "num_entries": 6110, "num_filter_entries": 6110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.896776) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8875933 bytes
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.900578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.5 rd, 109.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(39.9) write-amplify(19.8) OK, records in: 6628, records dropped: 518 output_compression: NoCompression
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.900600) EVENT_LOG_v1 {"time_micros": 1763803562900591, "job": 48, "event": "compaction_finished", "compaction_time_micros": 81395, "compaction_time_cpu_micros": 24102, "output_level": 6, "num_output_files": 1, "total_output_size": 8875933, "num_input_records": 6628, "num_output_records": 6110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562900780, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562902190, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.815045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:02 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.076 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Successfully created port: a812758f-4f22-4843-9cfa-447a7ab9c46a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:26:03 compute-0 ceph-mon[75021]: pgmap v1861: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.179 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cca5bcee-0493-45bc-976f-32bd793dbf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.245 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.548 253665 DEBUG nova.objects.instance [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.561 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.562 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Ensure instance console log exists: /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:03 compute-0 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.085 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Successfully updated port: a812758f-4f22-4843-9cfa-447a7ab9c46a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.105 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.106 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.106 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.255 253665 DEBUG nova.compute.manager [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-changed-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.255 253665 DEBUG nova.compute.manager [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Refreshing instance network info cache due to event network-changed-a812758f-4f22-4843-9cfa-447a7ab9c46a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.256 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.283 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:26:04 compute-0 podman[340348]: 2025-11-22 09:26:04.407990597 +0000 UTC m=+0.083721974 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:26:04 compute-0 podman[340349]: 2025-11-22 09:26:04.42793106 +0000 UTC m=+0.103698218 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:26:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:04.473 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:04.474 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:26:04 compute-0 nova_compute[253661]: 2025-11-22 09:26:04.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:05 compute-0 sudo[340386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:05 compute-0 sudo[340386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 ceph-mon[75021]: pgmap v1862: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 09:26:05 compute-0 sudo[340386]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 sudo[340411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:26:05 compute-0 sudo[340411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 sudo[340411]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 sudo[340436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:05 compute-0 sudo[340436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 sudo[340436]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 sudo[340461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:26:05 compute-0 sudo[340461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.422 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance network_info: |[{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.442 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Refreshing network info cache for port a812758f-4f22-4843-9cfa-447a7ab9c46a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.444 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start _get_guest_xml network_info=[{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.451 253665 WARNING nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.460 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.461 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.467 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.469 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.470 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.473 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.474 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.479 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 183 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 478 KiB/s wr, 139 op/s
Nov 22 09:26:05 compute-0 sudo[340461]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:26:05 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:26:05 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:05 compute-0 sudo[340525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:05 compute-0 sudo[340525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 sudo[340525]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.905 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803550.903665, 23a926e6-c6a7-4e40-82d1-654f68980549 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.905 253665 INFO nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Stopped (Lifecycle Event)
Nov 22 09:26:05 compute-0 sudo[340550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:26:05 compute-0 sudo[340550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 sudo[340550]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 nova_compute[253661]: 2025-11-22 09:26:05.927 253665 DEBUG nova.compute.manager [None req-75935937-e969-4e33-9509-319867fa50d0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:05 compute-0 sudo[340575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:05 compute-0 sudo[340575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:05 compute-0 sudo[340575]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:26:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033577931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.021 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:06 compute-0 sudo[340600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.044 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:06 compute-0 sudo[340600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.050 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:26:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340608851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.660 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.662 253665 DEBUG nova.virt.libvirt.vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=Tag
List,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:01Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.662 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.663 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.664 253665 DEBUG nova.objects.instance [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:06 compute-0 sudo[340600]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.676 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <uuid>cca5bcee-0493-45bc-976f-32bd793dbf01</uuid>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <name>instance-00000057</name>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-79942486</nova:name>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:26:05</nova:creationTime>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <nova:port uuid="a812758f-4f22-4843-9cfa-447a7ab9c46a">
Nov 22 09:26:06 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <system>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="serial">cca5bcee-0493-45bc-976f-32bd793dbf01</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="uuid">cca5bcee-0493-45bc-976f-32bd793dbf01</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </system>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <os>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </os>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <features>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </features>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cca5bcee-0493-45bc-976f-32bd793dbf01_disk">
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </source>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config">
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </source>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:26:06 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:80:f5:32"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <target dev="tapa812758f-4f"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/console.log" append="off"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <video>
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </video>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:26:06 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:26:06 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:26:06 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:26:06 compute-0 nova_compute[253661]: </domain>
Nov 22 09:26:06 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Preparing to wait for external event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.678 253665 DEBUG nova.virt.libvirt.vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:01Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.678 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.679 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.679 253665 DEBUG os_vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.683 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.684 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa812758f-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.684 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa812758f-4f, col_values=(('external_ids', {'iface-id': 'a812758f-4f22-4843-9cfa-447a7ab9c46a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:f5:32', 'vm-uuid': 'cca5bcee-0493-45bc-976f-32bd793dbf01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:06 compute-0 NetworkManager[48920]: <info>  [1763803566.6869] manager: (tapa812758f-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.694 253665 INFO os_vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f')
Nov 22 09:26:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:26:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:26:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:26:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:26:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:26:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9bcadd1d-0fdd-4a74-961d-b18759d8e7ca does not exist
Nov 22 09:26:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6d8d5f2c-899b-4c47-9f9e-88995f008492 does not exist
Nov 22 09:26:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8de019a6-4f37-403f-846c-e541ac31d934 does not exist
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:80:f5:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:26:06 compute-0 nova_compute[253661]: 2025-11-22 09:26:06.892 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Using config drive
Nov 22 09:26:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:26:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:26:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:26:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:26:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:26:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.058 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:07 compute-0 sudo[340714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:07 compute-0 sudo[340714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:07 compute-0 sudo[340714]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:07 compute-0 ceph-mon[75021]: pgmap v1863: 305 pgs: 305 active+clean; 183 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 478 KiB/s wr, 139 op/s
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4033577931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3340608851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:26:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:26:07 compute-0 sudo[340742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:26:07 compute-0 sudo[340742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:07 compute-0 sudo[340742]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:07 compute-0 sudo[340767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:07 compute-0 sudo[340767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:07 compute-0 sudo[340767]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:07 compute-0 sudo[340792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:26:07 compute-0 sudo[340792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.504 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating config drive at /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.510 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l686sph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 206 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 133 op/s
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.662 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l686sph" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.692 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:07 compute-0 nova_compute[253661]: 2025-11-22 09:26:07.695 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:07 compute-0 podman[340859]: 2025-11-22 09:26:07.658625645 +0000 UTC m=+0.024341059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:07 compute-0 podman[340859]: 2025-11-22 09:26:07.846014305 +0000 UTC m=+0.211729689 container create 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:26:08 compute-0 nova_compute[253661]: 2025-11-22 09:26:08.037 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updated VIF entry in instance network info cache for port a812758f-4f22-4843-9cfa-447a7ab9c46a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:26:08 compute-0 nova_compute[253661]: 2025-11-22 09:26:08.038 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:08 compute-0 nova_compute[253661]: 2025-11-22 09:26:08.052 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:26:08 compute-0 systemd[1]: Started libpod-conmon-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope.
Nov 22 09:26:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:08 compute-0 podman[340859]: 2025-11-22 09:26:08.210648871 +0000 UTC m=+0.576364285 container init 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:26:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:26:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:26:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:26:08 compute-0 podman[340859]: 2025-11-22 09:26:08.224425454 +0000 UTC m=+0.590140848 container start 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:26:08 compute-0 practical_swanson[340909]: 167 167
Nov 22 09:26:08 compute-0 systemd[1]: libpod-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope: Deactivated successfully.
Nov 22 09:26:08 compute-0 podman[340859]: 2025-11-22 09:26:08.449662239 +0000 UTC m=+0.815377633 container attach 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:26:08 compute-0 podman[340859]: 2025-11-22 09:26:08.451401641 +0000 UTC m=+0.817117065 container died 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-580afe7c6bae772f492d4c1245ccc572e00bd30300315a9f9bf15e8d7af2fa86-merged.mount: Deactivated successfully.
Nov 22 09:26:09 compute-0 podman[340859]: 2025-11-22 09:26:09.16710623 +0000 UTC m=+1.532821654 container remove 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:26:09 compute-0 systemd[1]: libpod-conmon-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope: Deactivated successfully.
Nov 22 09:26:09 compute-0 podman[340935]: 2025-11-22 09:26:09.353203308 +0000 UTC m=+0.035512733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:09 compute-0 ceph-mon[75021]: pgmap v1864: 305 pgs: 305 active+clean; 206 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 133 op/s
Nov 22 09:26:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 106 op/s
Nov 22 09:26:09 compute-0 podman[340935]: 2025-11-22 09:26:09.613010872 +0000 UTC m=+0.295320197 container create 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:26:09 compute-0 systemd[1]: Started libpod-conmon-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope.
Nov 22 09:26:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:09 compute-0 podman[340935]: 2025-11-22 09:26:09.980967744 +0000 UTC m=+0.663277059 container init 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:26:09 compute-0 podman[340935]: 2025-11-22 09:26:09.990195785 +0000 UTC m=+0.672505100 container start 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.041 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.042 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deleting local config drive /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config because it was imported into RBD.
Nov 22 09:26:10 compute-0 podman[340935]: 2025-11-22 09:26:10.055537699 +0000 UTC m=+0.737847014 container attach 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:26:10 compute-0 kernel: tapa812758f-4f: entered promiscuous mode
Nov 22 09:26:10 compute-0 NetworkManager[48920]: <info>  [1763803570.1176] manager: (tapa812758f-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Nov 22 09:26:10 compute-0 ovn_controller[152872]: 2025-11-22T09:26:10Z|00890|binding|INFO|Claiming lport a812758f-4f22-4843-9cfa-447a7ab9c46a for this chassis.
Nov 22 09:26:10 compute-0 ovn_controller[152872]: 2025-11-22T09:26:10Z|00891|binding|INFO|a812758f-4f22-4843-9cfa-447a7ab9c46a: Claiming fa:16:3e:80:f5:32 10.100.0.6
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.126 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f5:32 10.100.0.6'], port_security=['fa:16:3e:80:f5:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cca5bcee-0493-45bc-976f-32bd793dbf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a812758f-4f22-4843-9cfa-447a7ab9c46a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.127 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a812758f-4f22-4843-9cfa-447a7ab9c46a in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.129 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:26:10 compute-0 ovn_controller[152872]: 2025-11-22T09:26:10Z|00892|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a up in Southbound
Nov 22 09:26:10 compute-0 ovn_controller[152872]: 2025-11-22T09:26:10Z|00893|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a ovn-installed in OVS
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39c30b0a-38aa-4a78-8b66-cbb56196d838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 systemd-udevd[340967]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:26:10 compute-0 NetworkManager[48920]: <info>  [1763803570.1821] device (tapa812758f-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:26:10 compute-0 NetworkManager[48920]: <info>  [1763803570.1829] device (tapa812758f-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:26:10 compute-0 systemd-machined[215941]: New machine qemu-105-instance-00000057.
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.200 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61b526ca-ec3f-4d75-9ffe-482be5f8f8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000057.
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.204 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5294d6-cea8-4726-a1f7-64bf26f69b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.242 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec28f05-1541-4da9-a9d7-547a621df6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29b4a6ca-a31c-4c0c-aff4-0cc30a2434a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340982, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7aa2d27-33f4-4be5-b36a-f9d04ce5b1f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340983, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340983, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.295 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.299 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.300 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.301 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.301 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.429 253665 DEBUG nova.compute.manager [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG nova.compute.manager [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Processing event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.539 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803555.537082, bd717644-36b1-45c9-a56f-b2719ae77e72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.540 253665 INFO nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Stopped (Lifecycle Event)
Nov 22 09:26:10 compute-0 nova_compute[253661]: 2025-11-22 09:26:10.554 253665 DEBUG nova.compute.manager [None req-f69642a2-8512-4de1-bc8a-d0d4789b3766 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:10 compute-0 ceph-mon[75021]: pgmap v1865: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 106 op/s
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.136 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.137 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.1354618, cca5bcee-0493-45bc-976f-32bd793dbf01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.137 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Started (Lifecycle Event)
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.140 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.144 253665 INFO nova.virt.libvirt.driver [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance spawned successfully.
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.145 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.162 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.178 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.180 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.181 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.182 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.183 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.183 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.204 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.1366146, cca5bcee-0493-45bc-976f-32bd793dbf01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Paused (Lifecycle Event)
Nov 22 09:26:11 compute-0 jolly_dhawan[340951]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:26:11 compute-0 jolly_dhawan[340951]: --> relative data size: 1.0
Nov 22 09:26:11 compute-0 jolly_dhawan[340951]: --> All data devices are unavailable
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.232 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.237 253665 INFO nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 9.19 seconds to spawn the instance on the hypervisor.
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.238 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.139664, cca5bcee-0493-45bc-976f-32bd793dbf01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Resumed (Lifecycle Event)
Nov 22 09:26:11 compute-0 systemd[1]: libpod-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Deactivated successfully.
Nov 22 09:26:11 compute-0 systemd[1]: libpod-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Consumed 1.082s CPU time.
Nov 22 09:26:11 compute-0 podman[340935]: 2025-11-22 09:26:11.246448604 +0000 UTC m=+1.928757939 container died 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.270 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.274 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.295 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.308 253665 INFO nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 10.13 seconds to build instance.
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.324 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3-merged.mount: Deactivated successfully.
Nov 22 09:26:11 compute-0 podman[341052]: 2025-11-22 09:26:11.586143735 +0000 UTC m=+0.302903947 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 09:26:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Nov 22 09:26:11 compute-0 podman[340935]: 2025-11-22 09:26:11.683387181 +0000 UTC m=+2.365696496 container remove 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:26:11 compute-0 nova_compute[253661]: 2025-11-22 09:26:11.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:11 compute-0 systemd[1]: libpod-conmon-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Deactivated successfully.
Nov 22 09:26:11 compute-0 sudo[340792]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:11 compute-0 sudo[341084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:11 compute-0 sudo[341084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:11 compute-0 sudo[341084]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:11 compute-0 sudo[341109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:26:11 compute-0 sudo[341109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:11 compute-0 sudo[341109]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:11 compute-0 sudo[341134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:11 compute-0 sudo[341134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:11 compute-0 sudo[341134]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:12 compute-0 sudo[341159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:26:12 compute-0 sudo[341159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:26:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:26:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:26:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:26:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:12.476 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.490951466 +0000 UTC m=+0.094174649 container create 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.424225649 +0000 UTC m=+0.027448852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.595 253665 DEBUG nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.596 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.596 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 DEBUG nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:12 compute-0 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 WARNING nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received unexpected event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with vm_state active and task_state None.
Nov 22 09:26:12 compute-0 systemd[1]: Started libpod-conmon-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope.
Nov 22 09:26:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.740183053 +0000 UTC m=+0.343406236 container init 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:26:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.751918399 +0000 UTC m=+0.355141582 container start 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:26:12 compute-0 jovial_saha[341239]: 167 167
Nov 22 09:26:12 compute-0 systemd[1]: libpod-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope: Deactivated successfully.
Nov 22 09:26:12 compute-0 conmon[341239]: conmon 146284058b47cb833a12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope/container/memory.events
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.811889287 +0000 UTC m=+0.415112470 container attach 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:26:12 compute-0 podman[341223]: 2025-11-22 09:26:12.812384379 +0000 UTC m=+0.415607572 container died 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:26:13 compute-0 ovn_controller[152872]: 2025-11-22T09:26:13Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:67:ca 10.100.0.10
Nov 22 09:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-23554277dc2383230cd78f012fa64d1cefdbfdac7b1c11249ce2e66f1d0f08d5-merged.mount: Deactivated successfully.
Nov 22 09:26:13 compute-0 ovn_controller[152872]: 2025-11-22T09:26:13Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:67:ca 10.100.0.10
Nov 22 09:26:13 compute-0 ceph-mon[75021]: pgmap v1866: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Nov 22 09:26:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:26:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:26:13 compute-0 podman[341223]: 2025-11-22 09:26:13.262271651 +0000 UTC m=+0.865494874 container remove 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:26:13 compute-0 systemd[1]: libpod-conmon-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope: Deactivated successfully.
Nov 22 09:26:13 compute-0 podman[341263]: 2025-11-22 09:26:13.495720191 +0000 UTC m=+0.032848506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 231 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Nov 22 09:26:13 compute-0 podman[341263]: 2025-11-22 09:26:13.709715782 +0000 UTC m=+0.246844077 container create ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:26:13 compute-0 systemd[1]: Started libpod-conmon-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope.
Nov 22 09:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:13 compute-0 podman[341263]: 2025-11-22 09:26:13.991301533 +0000 UTC m=+0.528429858 container init ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:26:14 compute-0 podman[341263]: 2025-11-22 09:26:14.003375197 +0000 UTC m=+0.540503492 container start ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:26:14 compute-0 podman[341263]: 2025-11-22 09:26:14.076369912 +0000 UTC m=+0.613498247 container attach ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:26:14 compute-0 festive_raman[341279]: {
Nov 22 09:26:14 compute-0 festive_raman[341279]:     "0": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:         {
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "devices": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "/dev/loop3"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             ],
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_name": "ceph_lv0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_size": "21470642176",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "name": "ceph_lv0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "tags": {
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_name": "ceph",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.crush_device_class": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.encrypted": "0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_id": "0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.vdo": "0"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             },
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "vg_name": "ceph_vg0"
Nov 22 09:26:14 compute-0 festive_raman[341279]:         }
Nov 22 09:26:14 compute-0 festive_raman[341279]:     ],
Nov 22 09:26:14 compute-0 festive_raman[341279]:     "1": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:         {
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "devices": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "/dev/loop4"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             ],
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_name": "ceph_lv1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_size": "21470642176",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "name": "ceph_lv1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "tags": {
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_name": "ceph",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.crush_device_class": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.encrypted": "0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_id": "1",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.vdo": "0"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             },
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "vg_name": "ceph_vg1"
Nov 22 09:26:14 compute-0 festive_raman[341279]:         }
Nov 22 09:26:14 compute-0 festive_raman[341279]:     ],
Nov 22 09:26:14 compute-0 festive_raman[341279]:     "2": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:         {
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "devices": [
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "/dev/loop5"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             ],
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_name": "ceph_lv2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_size": "21470642176",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "name": "ceph_lv2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "tags": {
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.cluster_name": "ceph",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.crush_device_class": "",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.encrypted": "0",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osd_id": "2",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:                 "ceph.vdo": "0"
Nov 22 09:26:14 compute-0 festive_raman[341279]:             },
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "type": "block",
Nov 22 09:26:14 compute-0 festive_raman[341279]:             "vg_name": "ceph_vg2"
Nov 22 09:26:14 compute-0 festive_raman[341279]:         }
Nov 22 09:26:14 compute-0 festive_raman[341279]:     ]
Nov 22 09:26:14 compute-0 festive_raman[341279]: }
Nov 22 09:26:14 compute-0 systemd[1]: libpod-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope: Deactivated successfully.
Nov 22 09:26:14 compute-0 podman[341263]: 2025-11-22 09:26:14.849044341 +0000 UTC m=+1.386172656 container died ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c-merged.mount: Deactivated successfully.
Nov 22 09:26:15 compute-0 ceph-mon[75021]: pgmap v1867: 305 pgs: 305 active+clean; 231 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.343 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.345 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.346 253665 INFO nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Terminating instance
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.347 253665 DEBUG nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 239 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 22 09:26:15 compute-0 podman[341263]: 2025-11-22 09:26:15.880295271 +0000 UTC m=+2.417423566 container remove ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:26:15 compute-0 kernel: tapa812758f-4f (unregistering): left promiscuous mode
Nov 22 09:26:15 compute-0 systemd[1]: libpod-conmon-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope: Deactivated successfully.
Nov 22 09:26:15 compute-0 NetworkManager[48920]: <info>  [1763803575.8916] device (tapa812758f-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:26:15 compute-0 ovn_controller[152872]: 2025-11-22T09:26:15Z|00894|binding|INFO|Releasing lport a812758f-4f22-4843-9cfa-447a7ab9c46a from this chassis (sb_readonly=0)
Nov 22 09:26:15 compute-0 ovn_controller[152872]: 2025-11-22T09:26:15Z|00895|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a down in Southbound
Nov 22 09:26:15 compute-0 ovn_controller[152872]: 2025-11-22T09:26:15Z|00896|binding|INFO|Removing iface tapa812758f-4f ovn-installed in OVS
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.909 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f5:32 10.100.0.6'], port_security=['fa:16:3e:80:f5:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cca5bcee-0493-45bc-976f-32bd793dbf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a812758f-4f22-4843-9cfa-447a7ab9c46a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a812758f-4f22-4843-9cfa-447a7ab9c46a in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.912 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.933 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f07d1b-acad-4305-ab20-665dc573f32c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:15 compute-0 sudo[341159]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:15 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 22 09:26:15 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Consumed 4.702s CPU time.
Nov 22 09:26:15 compute-0 systemd-machined[215941]: Machine qemu-105-instance-00000057 terminated.
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.979 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae688cc-e678-4ed4-9b31-7b9496207f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2a2a5-7de3-4e3d-a503-94e1705bd06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.990 253665 INFO nova.virt.libvirt.driver [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance destroyed successfully.
Nov 22 09:26:15 compute-0 nova_compute[253661]: 2025-11-22 09:26:15.991 253665 DEBUG nova.objects.instance [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.001 253665 DEBUG nova.virt.libvirt.vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:26:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:26:13Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.002 253665 DEBUG nova.network.os_vif_util [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.003 253665 DEBUG nova.network.os_vif_util [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.003 253665 DEBUG os_vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.005 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa812758f-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.010 253665 INFO os_vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f')
Nov 22 09:26:16 compute-0 sudo[341312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:16 compute-0 sudo[341312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.031 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0cdfb8-36a8-45ba-912b-63d4993aa326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:16 compute-0 sudo[341312]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.060 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9c88541e-bfda-45ba-8065-27f3c05590b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341366, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.085 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7839ed7-7378-472e-b939-6e79f20fa79a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341384, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341384, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.087 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:16 compute-0 nova_compute[253661]: 2025-11-22 09:26:16.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.092 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.092 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:16 compute-0 sudo[341367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:26:16 compute-0 sudo[341367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:16 compute-0 sudo[341367]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:16 compute-0 sudo[341396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:16 compute-0 sudo[341396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:16 compute-0 sudo[341396]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:16 compute-0 sudo[341421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:26:16 compute-0 sudo[341421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:16 compute-0 podman[341485]: 2025-11-22 09:26:16.650852497 +0000 UTC m=+0.027853532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:16 compute-0 podman[341485]: 2025-11-22 09:26:16.824358958 +0000 UTC m=+0.201359973 container create fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:26:16 compute-0 systemd[1]: Started libpod-conmon-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope.
Nov 22 09:26:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:17 compute-0 podman[341485]: 2025-11-22 09:26:17.11074387 +0000 UTC m=+0.487744985 container init fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:26:17 compute-0 podman[341485]: 2025-11-22 09:26:17.119568342 +0000 UTC m=+0.496569357 container start fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:26:17 compute-0 flamboyant_meninsky[341502]: 167 167
Nov 22 09:26:17 compute-0 systemd[1]: libpod-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope: Deactivated successfully.
Nov 22 09:26:17 compute-0 conmon[341502]: conmon fbcdfad9dd51b2cf37fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope/container/memory.events
Nov 22 09:26:17 compute-0 podman[341485]: 2025-11-22 09:26:17.178267878 +0000 UTC m=+0.555268923 container attach fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:26:17 compute-0 podman[341485]: 2025-11-22 09:26:17.180434633 +0000 UTC m=+0.557435648 container died fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:26:17 compute-0 ceph-mon[75021]: pgmap v1868: 305 pgs: 305 active+clean; 239 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 22 09:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0013d719f656b2eec6f2c2827a0a1ec4e07e4e839aa81984af2803ea610b4d82-merged.mount: Deactivated successfully.
Nov 22 09:26:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 241 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 134 op/s
Nov 22 09:26:17 compute-0 podman[341485]: 2025-11-22 09:26:17.714118532 +0000 UTC m=+1.091119547 container remove fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:26:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:17 compute-0 systemd[1]: libpod-conmon-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope: Deactivated successfully.
Nov 22 09:26:17 compute-0 podman[341526]: 2025-11-22 09:26:17.882449424 +0000 UTC m=+0.026940168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:26:18 compute-0 podman[341526]: 2025-11-22 09:26:18.035588445 +0000 UTC m=+0.180079169 container create 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:26:18 compute-0 systemd[1]: Started libpod-conmon-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope.
Nov 22 09:26:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:26:18 compute-0 podman[341526]: 2025-11-22 09:26:18.231744068 +0000 UTC m=+0.376234812 container init 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:26:18 compute-0 podman[341526]: 2025-11-22 09:26:18.23899744 +0000 UTC m=+0.383488204 container start 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:26:18 compute-0 podman[341526]: 2025-11-22 09:26:18.29547014 +0000 UTC m=+0.439960894 container attach 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:26:18 compute-0 ceph-mon[75021]: pgmap v1869: 305 pgs: 305 active+clean; 241 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 134 op/s
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]: {
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_id": 1,
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "type": "bluestore"
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     },
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_id": 0,
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "type": "bluestore"
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     },
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_id": 2,
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:         "type": "bluestore"
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]:     }
Nov 22 09:26:19 compute-0 optimistic_snyder[341543]: }
Nov 22 09:26:19 compute-0 systemd[1]: libpod-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Deactivated successfully.
Nov 22 09:26:19 compute-0 systemd[1]: libpod-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Consumed 1.080s CPU time.
Nov 22 09:26:19 compute-0 podman[341577]: 2025-11-22 09:26:19.377561038 +0000 UTC m=+0.029339748 container died 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1-merged.mount: Deactivated successfully.
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.563 253665 INFO nova.virt.libvirt.driver [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deleting instance files /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01_del
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.566 253665 INFO nova.virt.libvirt.driver [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deletion of /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01_del complete
Nov 22 09:26:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Nov 22 09:26:19 compute-0 podman[341577]: 2025-11-22 09:26:19.598602407 +0000 UTC m=+0.250381107 container remove 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:26:19 compute-0 systemd[1]: libpod-conmon-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Deactivated successfully.
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.620 253665 INFO nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 4.27 seconds to destroy the instance on the hypervisor.
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.621 253665 DEBUG oslo.service.loopingcall [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.621 253665 DEBUG nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.622 253665 DEBUG nova.network.neutron [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:26:19 compute-0 sudo[341421]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:19 compute-0 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:26:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:26:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:19 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a1a8b41d-981f-4270-9ba4-4564e588ce7e does not exist
Nov 22 09:26:19 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fa3de5f8-a3af-4976-82c7-e929705e465d does not exist
Nov 22 09:26:19 compute-0 sudo[341592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:26:19 compute-0 sudo[341592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:19 compute-0 sudo[341592]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:19 compute-0 sudo[341617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:26:19 compute-0 sudo[341617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:26:19 compute-0 sudo[341617]: pam_unix(sudo:session): session closed for user root
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.300 253665 DEBUG nova.network.neutron [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.318 253665 INFO nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 0.70 seconds to deallocate network for instance.
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.382 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.382 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.421 253665 DEBUG nova.compute.manager [req-f634a83c-0441-4186-8a34-4bfddbfa1be1 req-76b8a3be-84e4-4138-a0e3-e3bcfd8cfd80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-deleted-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.468 253665 DEBUG oslo_concurrency.processutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:20 compute-0 ceph-mon[75021]: pgmap v1870: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Nov 22 09:26:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:26:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041259500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.956 253665 DEBUG oslo_concurrency.processutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.965 253665 DEBUG nova.compute.provider_tree [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:26:20 compute-0 nova_compute[253661]: 2025-11-22 09:26:20.983 253665 DEBUG nova.scheduler.client.report [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.010 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.040 253665 INFO nova.scheduler.client.report [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance cca5bcee-0493-45bc-976f-32bd793dbf01
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.105 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 138 op/s
Nov 22 09:26:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4041259500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.855 253665 DEBUG nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.855 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:21 compute-0 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 WARNING nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received unexpected event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with vm_state deleted and task_state None.
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:22 compute-0 ceph-mon[75021]: pgmap v1871: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 138 op/s
Nov 22 09:26:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.426 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.428 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.430 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.431 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[65b02e5d-c19c-4160-a0c9-087ae0e1dc2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 141 op/s
Nov 22 09:26:25 compute-0 ceph-mon[75021]: pgmap v1872: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 141 op/s
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.153 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.153 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.172 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.244 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.244 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.251 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.251 253665 INFO nova.compute.claims [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.414 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 310 KiB/s wr, 106 op/s
Nov 22 09:26:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762345784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.929 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.937 253665 DEBUG nova.compute.provider_tree [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.951 253665 DEBUG nova.scheduler.client.report [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.972 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:25 compute-0 nova_compute[253661]: 2025-11-22 09:26:25.973 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.016 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.017 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.035 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.054 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:26:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3762345784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.140 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.143 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.143 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating image(s)
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.172 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.211 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.239 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.244 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.309 253665 DEBUG nova.policy [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.345 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.346 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.347 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.348 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.371 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.376 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.563 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.564 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.566 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31619e0e-5172-4f35-8364-e163ff72f5ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.744 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.813 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.918 253665 DEBUG nova.objects.instance [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.941 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Ensure instance console log exists: /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:26 compute-0 nova_compute[253661]: 2025-11-22 09:26:26.943 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:27 compute-0 ceph-mon[75021]: pgmap v1873: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 310 KiB/s wr, 106 op/s
Nov 22 09:26:27 compute-0 nova_compute[253661]: 2025-11-22 09:26:27.154 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Successfully created port: fa819bcd-7193-4627-920d-254828dcdfea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:26:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 217 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.740 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.741 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b31013a3-fa68-42b6-bbc4-1e33e2f87083]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.342 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Successfully updated port: fa819bcd-7193-4627-920d-254828dcdfea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.553 253665 DEBUG nova.compute.manager [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-changed-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.554 253665 DEBUG nova.compute.manager [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Refreshing instance network info cache due to event network-changed-fa819bcd-7193-4627-920d-254828dcdfea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.554 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:26:28 compute-0 nova_compute[253661]: 2025-11-22 09:26:28.879 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:26:29 compute-0 ceph-mon[75021]: pgmap v1874: 305 pgs: 305 active+clean; 217 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 22 09:26:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.455 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.478 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.478 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance network_info: |[{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.479 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.479 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Refreshing network info cache for port fa819bcd-7193-4627-920d-254828dcdfea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.483 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start _get_guest_xml network_info=[{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.488 253665 WARNING nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.495 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.496 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.502 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.502 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.505 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.508 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.988 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803575.9870281, cca5bcee-0493-45bc-976f-32bd793dbf01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:30 compute-0 nova_compute[253661]: 2025-11-22 09:26:30.988 253665 INFO nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Stopped (Lifecycle Event)
Nov 22 09:26:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:26:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550599483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.006 253665 DEBUG nova.compute.manager [None req-064a0215-c0c6-442d-9477-de44834fa42f - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.021 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.048 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.053 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:31 compute-0 ceph-mon[75021]: pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 22 09:26:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/550599483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:26:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73068458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.513 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.515 253665 DEBUG nova.virt.libvirt.vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-ServersTestJSON-server-582111700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:26Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.515 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.516 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.517 253665 DEBUG nova.objects.instance [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.532 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <uuid>4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</uuid>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <name>instance-00000058</name>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersTestJSON-server-582111700</nova:name>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:26:30</nova:creationTime>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <nova:port uuid="fa819bcd-7193-4627-920d-254828dcdfea">
Nov 22 09:26:31 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <system>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="serial">4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="uuid">4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </system>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <os>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </os>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <features>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </features>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk">
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config">
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:26:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:31:45:80"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <target dev="tapfa819bcd-71"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/console.log" append="off"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <video>
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </video>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:26:31 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:26:31 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:26:31 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:26:31 compute-0 nova_compute[253661]: </domain>
Nov 22 09:26:31 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.533 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Preparing to wait for external event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.533 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.534 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.534 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.virt.libvirt.vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-ServersTestJSON-server-582111700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:26Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.536 253665 DEBUG os_vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.537 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.537 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.541 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa819bcd-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.542 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa819bcd-71, col_values=(('external_ids', {'iface-id': 'fa819bcd-7193-4627-920d-254828dcdfea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:45:80', 'vm-uuid': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:31 compute-0 NetworkManager[48920]: <info>  [1763803591.5449] manager: (tapfa819bcd-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.553 253665 INFO os_vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71')
Nov 22 09:26:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.606 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.606 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.607 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:31:45:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.607 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Using config drive
Nov 22 09:26:31 compute-0 nova_compute[253661]: 2025-11-22 09:26:31.629 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.714 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.716 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.718 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.719 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6189b62-f446-4f8f-beb6-75ae6e3ccb7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/73068458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.250 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.294 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating config drive at /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.299 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wis6s_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.445 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wis6s_c" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.469 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.473 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.590 253665 DEBUG nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.626 253665 INFO nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.627 253665 DEBUG nova.objects.instance [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.670 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.671 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deleting local config drive /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config because it was imported into RBD.
Nov 22 09:26:32 compute-0 kernel: tapfa819bcd-71: entered promiscuous mode
Nov 22 09:26:32 compute-0 NetworkManager[48920]: <info>  [1763803592.7295] manager: (tapfa819bcd-71): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Nov 22 09:26:32 compute-0 ovn_controller[152872]: 2025-11-22T09:26:32Z|00897|binding|INFO|Claiming lport fa819bcd-7193-4627-920d-254828dcdfea for this chassis.
Nov 22 09:26:32 compute-0 ovn_controller[152872]: 2025-11-22T09:26:32Z|00898|binding|INFO|fa819bcd-7193-4627-920d-254828dcdfea: Claiming fa:16:3e:31:45:80 10.100.0.5
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.737 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:45:80 10.100.0.5'], port_security=['fa:16:3e:31:45:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fa819bcd-7193-4627-920d-254828dcdfea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.738 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fa819bcd-7193-4627-920d-254828dcdfea in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.740 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:26:32 compute-0 ovn_controller[152872]: 2025-11-22T09:26:32Z|00899|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea ovn-installed in OVS
Nov 22 09:26:32 compute-0 ovn_controller[152872]: 2025-11-22T09:26:32Z|00900|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea up in Southbound
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.760 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[548c6245-9ed8-4dcb-809d-3e464da65ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 systemd-udevd[341990]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:26:32 compute-0 systemd-machined[215941]: New machine qemu-106-instance-00000058.
Nov 22 09:26:32 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000058.
Nov 22 09:26:32 compute-0 NetworkManager[48920]: <info>  [1763803592.7874] device (tapfa819bcd-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:26:32 compute-0 NetworkManager[48920]: <info>  [1763803592.7882] device (tapfa819bcd-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.792 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0736698b-45e7-4ffc-8476-3863d066678f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.796 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b30cdec-39c5-4918-a3c1-b9c851b68409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.828 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8888abd1-bbd5-4628-9df8-8e917549de62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.833 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updated VIF entry in instance network info cache for port fa819bcd-7193-4627-920d-254828dcdfea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.834 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd79df-f794-41fc-8401-8f891b34f5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341999, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.858 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.859 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af279f7f-728a-4e3c-8808-f31504783d43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342002, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342002, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.870 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:32 compute-0 nova_compute[253661]: 2025-11-22 09:26:32.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.875 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.877 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.878 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b65c1eaf-39cd-4c9e-90c7-4a7618c63ab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:33 compute-0 ceph-mon[75021]: pgmap v1876: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.167 253665 INFO nova.virt.libvirt.driver [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.176 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.176301, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.177 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Started (Lifecycle Event)
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.210 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG nova.compute.manager [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.222 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.222 253665 DEBUG nova.compute.manager [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Processing event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.223 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.224 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.1765378, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.224 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Paused (Lifecycle Event)
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.228 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.231 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance spawned successfully.
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.231 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.253 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.260 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.2267067, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.260 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Resumed (Lifecycle Event)
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.263 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.265 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.265 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.313 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.317 253665 DEBUG nova.virt.libvirt.imagebackend [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.322 253665 INFO nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 7.18 seconds to spawn the instance on the hypervisor.
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.322 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.324 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.356 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.392 253665 INFO nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 8.16 seconds to build instance.
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.405 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 09:26:33 compute-0 nova_compute[253661]: 2025-11-22 09:26:33.850 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(1d1c33183bfc4c66b184e71a2e1fd599) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 22 09:26:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 22 09:26:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.207 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@1d1c33183bfc4c66b184e71a2e1fd599 to images/84078c1f-f45a-4974-ab60-fbf47bdc21a1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.315 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/84078c1f-f45a-4974-ab60-fbf47bdc21a1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.688 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.729 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:26:34 compute-0 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65132115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.387 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:35 compute-0 podman[342172]: 2025-11-22 09:26:35.419100538 +0000 UTC m=+0.084817614 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 22 09:26:35 compute-0 podman[342173]: 2025-11-22 09:26:35.42355828 +0000 UTC m=+0.089802609 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.441 253665 DEBUG nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:35 compute-0 ceph-mon[75021]: pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 09:26:35 compute-0 ceph-mon[75021]: osdmap e247: 3 total, 3 up, 3 in
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.441 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 WARNING nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state active and task_state None.
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.467 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.468 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.471 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.471 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.475 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.475 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.714 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(1d1c33183bfc4c66b184e71a2e1fd599) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.847 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3312MB free_disk=59.87630844116211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.925 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.939 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.953 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.954 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.969 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:26:35 compute-0 nova_compute[253661]: 2025-11-22 09:26:35.987 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.052 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.438 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.441 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.442 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae05d192-ee5c-4c3f-8345-c0958dcde07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 22 09:26:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 22 09:26:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/65132115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:36 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 22 09:26:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31841227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.536 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.543 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.558 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.568 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(84078c1f-f45a-4974-ab60-fbf47bdc21a1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.601 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.602 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.964 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.964 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.965 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.968 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.969 253665 DEBUG nova.objects.instance [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'flavor' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:36 compute-0 nova_compute[253661]: 2025-11-22 09:26:36.992 253665 DEBUG nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:26:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.405 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.408 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02012f8-8c50-4c63-87fb-81fbe59ed666]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 22 09:26:37 compute-0 ceph-mon[75021]: pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 22 09:26:37 compute-0 ceph-mon[75021]: osdmap e248: 3 total, 3 up, 3 in
Nov 22 09:26:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/31841227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 22 09:26:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 259 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 587 KiB/s wr, 94 op/s
Nov 22 09:26:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 22 09:26:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:38 compute-0 nova_compute[253661]: 2025-11-22 09:26:38.127 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:38 compute-0 nova_compute[253661]: 2025-11-22 09:26:38.128 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:26:38 compute-0 ceph-mon[75021]: pgmap v1881: 305 pgs: 305 active+clean; 259 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 587 KiB/s wr, 94 op/s
Nov 22 09:26:38 compute-0 ceph-mon[75021]: osdmap e249: 3 total, 3 up, 3 in
Nov 22 09:26:39 compute-0 nova_compute[253661]: 2025-11-22 09:26:39.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:26:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 22 09:26:39 compute-0 nova_compute[253661]: 2025-11-22 09:26:39.714 253665 INFO nova.virt.libvirt.driver [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete
Nov 22 09:26:39 compute-0 nova_compute[253661]: 2025-11-22 09:26:39.714 253665 INFO nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.07 seconds to snapshot the instance on the hypervisor.
Nov 22 09:26:40 compute-0 nova_compute[253661]: 2025-11-22 09:26:40.145 253665 DEBUG nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 22 09:26:40 compute-0 nova_compute[253661]: 2025-11-22 09:26:40.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:40 compute-0 ceph-mon[75021]: pgmap v1883: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.131 253665 DEBUG nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.169 253665 INFO nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.170 253665 DEBUG nova.objects.instance [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.459 253665 INFO nova.virt.libvirt.driver [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.599 253665 DEBUG nova.virt.libvirt.imagebackend [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:26:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.2 MiB/s wr, 215 op/s
Nov 22 09:26:41 compute-0 nova_compute[253661]: 2025-11-22 09:26:41.782 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(8063ee849ec444f896e547beef62fa35) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.338 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.340 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:26:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.341 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d762645-b461-4f98-a718-99988e576797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:42 compute-0 podman[342320]: 2025-11-22 09:26:42.424417635 +0000 UTC m=+0.107146465 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:26:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 22 09:26:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 22 09:26:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 22 09:26:42 compute-0 ceph-mon[75021]: pgmap v1884: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.2 MiB/s wr, 215 op/s
Nov 22 09:26:42 compute-0 ceph-mon[75021]: osdmap e250: 3 total, 3 up, 3 in
Nov 22 09:26:43 compute-0 nova_compute[253661]: 2025-11-22 09:26:43.157 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@8063ee849ec444f896e547beef62fa35 to images/59a900cc-5a77-42a2-a590-ba279de1eb2e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:26:43 compute-0 nova_compute[253661]: 2025-11-22 09:26:43.372 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/59a900cc-5a77-42a2-a590-ba279de1eb2e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:26:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 6.6 MiB/s wr, 248 op/s
Nov 22 09:26:45 compute-0 ceph-mon[75021]: pgmap v1886: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 6.6 MiB/s wr, 248 op/s
Nov 22 09:26:45 compute-0 nova_compute[253661]: 2025-11-22 09:26:45.495 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(8063ee849ec444f896e547beef62fa35) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:26:45 compute-0 nova_compute[253661]: 2025-11-22 09:26:45.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 338 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Nov 22 09:26:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 22 09:26:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 22 09:26:46 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 22 09:26:46 compute-0 nova_compute[253661]: 2025-11-22 09:26:46.212 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(59a900cc-5a77-42a2-a590-ba279de1eb2e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:46 compute-0 nova_compute[253661]: 2025-11-22 09:26:46.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:47 compute-0 nova_compute[253661]: 2025-11-22 09:26:47.037 253665 DEBUG nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:26:47 compute-0 ceph-mon[75021]: pgmap v1887: 305 pgs: 305 active+clean; 338 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Nov 22 09:26:47 compute-0 ceph-mon[75021]: osdmap e251: 3 total, 3 up, 3 in
Nov 22 09:26:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 22 09:26:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 22 09:26:47 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 22 09:26:47 compute-0 ovn_controller[152872]: 2025-11-22T09:26:47Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:45:80 10.100.0.5
Nov 22 09:26:47 compute-0 ovn_controller[152872]: 2025-11-22T09:26:47Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:45:80 10.100.0.5
Nov 22 09:26:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 366 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 09:26:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:48 compute-0 ceph-mon[75021]: osdmap e252: 3 total, 3 up, 3 in
Nov 22 09:26:48 compute-0 nova_compute[253661]: 2025-11-22 09:26:48.706 253665 INFO nova.virt.libvirt.driver [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete
Nov 22 09:26:48 compute-0 nova_compute[253661]: 2025-11-22 09:26:48.706 253665 INFO nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.52 seconds to snapshot the instance on the hypervisor.
Nov 22 09:26:48 compute-0 nova_compute[253661]: 2025-11-22 09:26:48.968 253665 DEBUG nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 22 09:26:49 compute-0 ceph-mon[75021]: pgmap v1890: 305 pgs: 305 active+clean; 366 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 09:26:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 276 op/s
Nov 22 09:26:50 compute-0 kernel: tapfa819bcd-71 (unregistering): left promiscuous mode
Nov 22 09:26:50 compute-0 NetworkManager[48920]: <info>  [1763803610.3504] device (tapfa819bcd-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:26:50 compute-0 ovn_controller[152872]: 2025-11-22T09:26:50Z|00901|binding|INFO|Releasing lport fa819bcd-7193-4627-920d-254828dcdfea from this chassis (sb_readonly=0)
Nov 22 09:26:50 compute-0 ovn_controller[152872]: 2025-11-22T09:26:50Z|00902|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea down in Southbound
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 ovn_controller[152872]: 2025-11-22T09:26:50Z|00903|binding|INFO|Removing iface tapfa819bcd-71 ovn-installed in OVS
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.374 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:45:80 10.100.0.5'], port_security=['fa:16:3e:31:45:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fa819bcd-7193-4627-920d-254828dcdfea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.376 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fa819bcd-7193-4627-920d-254828dcdfea in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.377 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25ff838a-9d03-4e57-bf40-d446252b625f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 22 09:26:50 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000058.scope: Consumed 13.579s CPU time.
Nov 22 09:26:50 compute-0 systemd-machined[215941]: Machine qemu-106-instance-00000058 terminated.
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.435 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[665dad5f-d049-4e5f-bd5e-5db492a4923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.438 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e5655577-dc0d-46c5-a689-2e32a3208db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.457 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.470 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc23dc3-6d8e-4db4-b3cd-5a5e43afc92f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.489 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62ce9960-4fbd-4548-80f3-f61e6921dfff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342448, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.499 253665 INFO nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.500 253665 DEBUG nova.objects.instance [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9220fbea-e787-44d8-837e-c81dbe903738]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342449, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342449, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.510 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.520 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.749 253665 DEBUG nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.750 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.752 253665 WARNING nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state active and task_state powering-off.
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.761 253665 INFO nova.virt.libvirt.driver [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process
Nov 22 09:26:50 compute-0 nova_compute[253661]: 2025-11-22 09:26:50.916 253665 DEBUG nova.virt.libvirt.imagebackend [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.054 253665 INFO nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance shutdown successfully after 14 seconds.
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.060 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance destroyed successfully.
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.061 253665 DEBUG nova.objects.instance [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.072 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.129 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.275 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(77d525a265944c66affe9c6402eb1519) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:51 compute-0 ceph-mon[75021]: pgmap v1891: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 276 op/s
Nov 22 09:26:51 compute-0 nova_compute[253661]: 2025-11-22 09:26:51.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 232 op/s
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:26:52
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images']
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:26:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 22 09:26:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 22 09:26:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.352 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@77d525a265944c66affe9c6402eb1519 to images/1e999d71-f227-48a5-af57-b8e4ea55b8dc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.463 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/1e999d71-f227-48a5-af57-b8e4ea55b8dc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:26:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:26:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 22 09:26:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 22 09:26:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.888 253665 DEBUG nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.890 253665 DEBUG nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.890 253665 WARNING nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state stopped and task_state None.
Nov 22 09:26:52 compute-0 nova_compute[253661]: 2025-11-22 09:26:52.980 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(77d525a265944c66affe9c6402eb1519) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:26:53 compute-0 ceph-mon[75021]: pgmap v1892: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 232 op/s
Nov 22 09:26:53 compute-0 ceph-mon[75021]: osdmap e253: 3 total, 3 up, 3 in
Nov 22 09:26:53 compute-0 ceph-mon[75021]: osdmap e254: 3 total, 3 up, 3 in
Nov 22 09:26:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 472 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 284 op/s
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.795 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.797 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.798 253665 INFO nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Terminating instance
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.798 253665 DEBUG nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.807 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance destroyed successfully.
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.808 253665 DEBUG nova.objects.instance [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.821 253665 DEBUG nova.virt.libvirt.vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-Íñstáñcé-785856032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:26:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:26:52Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.821 253665 DEBUG nova.network.os_vif_util [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.822 253665 DEBUG nova.network.os_vif_util [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.822 253665 DEBUG os_vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa819bcd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.830 253665 INFO os_vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71')
Nov 22 09:26:53 compute-0 nova_compute[253661]: 2025-11-22 09:26:53.865 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(1e999d71-f227-48a5-af57-b8e4ea55b8dc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.423 253665 INFO nova.virt.libvirt.driver [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deleting instance files /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_del
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.424 253665 INFO nova.virt.libvirt.driver [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deletion of /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_del complete
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.469 253665 INFO nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG oslo.service.loopingcall [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:26:54 compute-0 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG nova.network.neutron [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:26:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 22 09:26:54 compute-0 ceph-mon[75021]: pgmap v1895: 305 pgs: 305 active+clean; 472 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 284 op/s
Nov 22 09:26:54 compute-0 ceph-mon[75021]: osdmap e255: 3 total, 3 up, 3 in
Nov 22 09:26:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 22 09:26:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 22 09:26:54 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.369 253665 DEBUG nova.network.neutron [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.388 253665 INFO nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 0.92 seconds to deallocate network for instance.
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.442 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.443 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.543 253665 DEBUG oslo_concurrency.processutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.585 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 454 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.7 MiB/s wr, 241 op/s
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:26:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:26:55 compute-0 nova_compute[253661]: 2025-11-22 09:26:55.752 253665 DEBUG nova.compute.manager [req-e15d6d1a-6b16-4bbe-9528-7fa77b63ee51 req-8e45f07e-2f60-429c-95ed-75b00be9c5d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-deleted-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:55 compute-0 ceph-mon[75021]: osdmap e256: 3 total, 3 up, 3 in
Nov 22 09:26:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:26:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763173564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.037 253665 DEBUG oslo_concurrency.processutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.045 253665 DEBUG nova.compute.provider_tree [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.066 253665 DEBUG nova.scheduler.client.report [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.092 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.117 253665 INFO nova.scheduler.client.report [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.180 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.362 253665 INFO nova.virt.libvirt.driver [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.363 253665 INFO nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 5.84 seconds to snapshot the instance on the hypervisor.
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Nov 22 09:26:56 compute-0 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting image 84078c1f-f45a-4974-ab60-fbf47bdc21a1 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Nov 22 09:26:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 22 09:26:56 compute-0 ceph-mon[75021]: pgmap v1898: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 454 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.7 MiB/s wr, 241 op/s
Nov 22 09:26:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2763173564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:26:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 22 09:26:56 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 22 09:26:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 10 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 291 active+clean; 452 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.7 MiB/s wr, 267 op/s
Nov 22 09:26:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:26:57 compute-0 ceph-mon[75021]: osdmap e257: 3 total, 3 up, 3 in
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.967 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.967 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.970 253665 INFO nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Terminating instance
Nov 22 09:26:57 compute-0 nova_compute[253661]: 2025-11-22 09:26:57.971 253665 DEBUG nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:26:58 compute-0 kernel: tapdedad4aa-19 (unregistering): left promiscuous mode
Nov 22 09:26:58 compute-0 NetworkManager[48920]: <info>  [1763803618.0378] device (tapdedad4aa-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 ovn_controller[152872]: 2025-11-22T09:26:58Z|00904|binding|INFO|Releasing lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 from this chassis (sb_readonly=0)
Nov 22 09:26:58 compute-0 ovn_controller[152872]: 2025-11-22T09:26:58Z|00905|binding|INFO|Setting lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 down in Southbound
Nov 22 09:26:58 compute-0 ovn_controller[152872]: 2025-11-22T09:26:58Z|00906|binding|INFO|Removing iface tapdedad4aa-19 ovn-installed in OVS
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:9d:63 10.100.0.12'], port_security=['fa:16:3e:2b:9d:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e0b05f62-6966-4bf3-aee5-e4d2137a6cfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dedad4aa-19bb-4bc6-a08c-d75d3024d553) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dedad4aa-19bb-4bc6-a08c-d75d3024d553 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 502d021b-7c33-4c22-8cd9-32a451fdf556, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[604655f6-30be-4d00-b75d-66730ecee51c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 namespace which is not needed anymore
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 22 09:26:58 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Consumed 22.199s CPU time.
Nov 22 09:26:58 compute-0 systemd-machined[215941]: Machine qemu-95-instance-0000004f terminated.
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : haproxy version is 2.8.14-c23fe91
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : path to executable is /usr/sbin/haproxy
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : Exiting Master process...
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : Exiting Master process...
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.220 253665 INFO nova.virt.libvirt.driver [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance destroyed successfully.
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [ALERT]    (334722) : Current worker (334724) exited with code 143 (Terminated)
Nov 22 09:26:58 compute-0 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : All workers exited. Exiting... (0)
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.220 253665 DEBUG nova.objects.instance [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:26:58 compute-0 systemd[1]: libpod-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84.scope: Deactivated successfully.
Nov 22 09:26:58 compute-0 podman[342668]: 2025-11-22 09:26:58.229662824 +0000 UTC m=+0.049871796 container died dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.231 253665 DEBUG nova.virt.libvirt.vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1428624522',display_name='tempest-₡-1428624522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1428624522',id=79,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-d8zy45mf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974
',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:10Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=e0b05f62-6966-4bf3-aee5-e4d2137a6cfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.235 253665 DEBUG nova.network.os_vif_util [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.237 253665 DEBUG nova.network.os_vif_util [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.237 253665 DEBUG os_vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.241 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdedad4aa-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.247 253665 INFO os_vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19')
Nov 22 09:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84-userdata-shm.mount: Deactivated successfully.
Nov 22 09:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a035ba10fc4c82f058aee7a6871d6f3764a5e61791891e4008e90692352fd688-merged.mount: Deactivated successfully.
Nov 22 09:26:58 compute-0 podman[342668]: 2025-11-22 09:26:58.286800169 +0000 UTC m=+0.107009151 container cleanup dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:26:58 compute-0 systemd[1]: libpod-conmon-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84.scope: Deactivated successfully.
Nov 22 09:26:58 compute-0 podman[342727]: 2025-11-22 09:26:58.400673003 +0000 UTC m=+0.087152272 container remove dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] No waiting events found dispatching network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.412 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9cf784-9c53-4c00-b401-447c2570f439]: (4, ('Sat Nov 22 09:26:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 (dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84)\ndc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84\nSat Nov 22 09:26:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 (dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84)\ndc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e41169d-e8b1-4cf3-a810-e91b74bbb2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.415 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 kernel: tap502d021b-70: left promiscuous mode
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8635b8e5-35d0-4698-adaa-691ad52ad46e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.457 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4de3f9-1b12-4136-ad69-e5b8ffd38f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5c860e-ab37-410f-961e-6790db284512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.482 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b402518e-9faa-4505-bfbd-f563b6e243f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630083, 'reachable_time': 29194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342741, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.486 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:26:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.487 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbb313e-0f4f-42de-8420-fdafc6404e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:26:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d502d021b\x2d7c33\x2d4c22\x2d8cd9\x2d32a451fdf556.mount: Deactivated successfully.
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.863 253665 INFO nova.virt.libvirt.driver [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deleting instance files /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_del
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.864 253665 INFO nova.virt.libvirt.driver [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deletion of /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_del complete
Nov 22 09:26:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 22 09:26:58 compute-0 ceph-mon[75021]: pgmap v1900: 305 pgs: 10 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 291 active+clean; 452 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.7 MiB/s wr, 267 op/s
Nov 22 09:26:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 22 09:26:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.922 253665 INFO nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.923 253665 DEBUG oslo.service.loopingcall [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.923 253665 DEBUG nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:26:58 compute-0 nova_compute[253661]: 2025-11-22 09:26:58.924 253665 DEBUG nova.network.neutron [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:26:59 compute-0 nova_compute[253661]: 2025-11-22 09:26:59.505 253665 DEBUG nova.network.neutron [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:26:59 compute-0 nova_compute[253661]: 2025-11-22 09:26:59.523 253665 INFO nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 0.60 seconds to deallocate network for instance.
Nov 22 09:26:59 compute-0 nova_compute[253661]: 2025-11-22 09:26:59.576 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:26:59 compute-0 nova_compute[253661]: 2025-11-22 09:26:59.577 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:26:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.4 MiB/s wr, 227 op/s
Nov 22 09:26:59 compute-0 nova_compute[253661]: 2025-11-22 09:26:59.647 253665 DEBUG oslo_concurrency.processutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:26:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 22 09:26:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 22 09:26:59 compute-0 ceph-mon[75021]: osdmap e258: 3 total, 3 up, 3 in
Nov 22 09:26:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 22 09:27:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78926417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.122 253665 DEBUG oslo_concurrency.processutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.131 253665 DEBUG nova.compute.provider_tree [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.147 253665 DEBUG nova.scheduler.client.report [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.170 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.208 253665 INFO nova.scheduler.client.report [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.273 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.561 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.569 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.569 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.570 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.570 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.571 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] No waiting events found dispatching network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.571 253665 WARNING nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received unexpected event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 for instance with vm_state deleted and task_state None.
Nov 22 09:27:00 compute-0 nova_compute[253661]: 2025-11-22 09:27:00.572 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-deleted-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:00 compute-0 ceph-mon[75021]: pgmap v1902: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.4 MiB/s wr, 227 op/s
Nov 22 09:27:00 compute-0 ceph-mon[75021]: osdmap e259: 3 total, 3 up, 3 in
Nov 22 09:27:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/78926417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011676525770007162 of space, bias 1.0, pg target 0.35029577310021487 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0022523192802956305 of space, bias 1.0, pg target 0.6756957840886891 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:27:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:27:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 22 09:27:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 22 09:27:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 22 09:27:02 compute-0 ceph-mon[75021]: pgmap v1904: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Nov 22 09:27:02 compute-0 ceph-mon[75021]: osdmap e260: 3 total, 3 up, 3 in
Nov 22 09:27:03 compute-0 nova_compute[253661]: 2025-11-22 09:27:03.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 166 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 7.5 KiB/s wr, 178 op/s
Nov 22 09:27:04 compute-0 ovn_controller[152872]: 2025-11-22T09:27:04Z|00907|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:27:04 compute-0 nova_compute[253661]: 2025-11-22 09:27:04.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:04.526 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:04.527 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:27:04 compute-0 nova_compute[253661]: 2025-11-22 09:27:04.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:05 compute-0 ceph-mon[75021]: pgmap v1906: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 166 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 7.5 KiB/s wr, 178 op/s
Nov 22 09:27:05 compute-0 nova_compute[253661]: 2025-11-22 09:27:05.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:05 compute-0 nova_compute[253661]: 2025-11-22 09:27:05.609 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803610.6073596, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:05 compute-0 nova_compute[253661]: 2025-11-22 09:27:05.609 253665 INFO nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Stopped (Lifecycle Event)
Nov 22 09:27:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.0 KiB/s wr, 153 op/s
Nov 22 09:27:05 compute-0 nova_compute[253661]: 2025-11-22 09:27:05.629 253665 DEBUG nova.compute.manager [None req-b6c6b744-1942-478c-83cf-73e2614d2f1e - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.034 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8:0:1:f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.037 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated
Nov 22 09:27:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.039 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:27:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d0797-462b-4d8c-a6bb-09918e6ef209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:06 compute-0 podman[342765]: 2025-11-22 09:27:06.378293608 +0000 UTC m=+0.069790856 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:27:06 compute-0 podman[342766]: 2025-11-22 09:27:06.433963197 +0000 UTC m=+0.114900400 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 22 09:27:07 compute-0 ceph-mon[75021]: pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.0 KiB/s wr, 153 op/s
Nov 22 09:27:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 09:27:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 22 09:27:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 22 09:27:07 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.412 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.412 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.427 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.530 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.530 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.537 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.537 253665 INFO nova.compute.claims [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:27:08 compute-0 nova_compute[253661]: 2025-11-22 09:27:08.646 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:09 compute-0 ceph-mon[75021]: pgmap v1908: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 09:27:09 compute-0 ceph-mon[75021]: osdmap e261: 3 total, 3 up, 3 in
Nov 22 09:27:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442797825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.152 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.161 253665 DEBUG nova.compute.provider_tree [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.183 253665 DEBUG nova.scheduler.client.report [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.212 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.213 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.261 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.262 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.280 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.301 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.386 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.389 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.390 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating image(s)
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.428 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.458 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.482 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.487 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.534 253665 DEBUG nova.policy [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.572 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.573 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.573 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.574 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.600 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.604 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.699 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.699 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.713 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.778 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.779 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.789 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.790 253665 INFO nova.compute.claims [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:27:09 compute-0 nova_compute[253661]: 2025-11-22 09:27:09.920 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1442797825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.393 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Successfully created port: 6eb31688-c2e8-4f7b-a3df-3008c2065663 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:27:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833664491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.459 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.466 253665 DEBUG nova.compute.provider_tree [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.497 253665 DEBUG nova.scheduler.client.report [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.521 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.523 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.569 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.581 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.601 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.685 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.686 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.687 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating image(s)
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.716 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.742 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.764 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.768 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.842 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.843 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.844 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.844 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.877 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:10 compute-0 nova_compute[253661]: 2025-11-22 09:27:10.881 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.327 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Successfully updated port: 6eb31688-c2e8-4f7b-a3df-3008c2065663 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.344 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.344 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.345 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.465 253665 DEBUG nova.compute.manager [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-changed-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.466 253665 DEBUG nova.compute.manager [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Refreshing instance network info cache due to event network-changed-6eb31688-c2e8-4f7b-a3df-3008c2065663. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.466 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:11 compute-0 ceph-mon[75021]: pgmap v1910: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 09:27:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/833664491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.551 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:27:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.855 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:11 compute-0 nova_compute[253661]: 2025-11-22 09:27:11.930 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:27:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:27:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:27:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:27:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.463 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:27:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.540 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.587 253665 DEBUG nova.objects.instance [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.615 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance network_info: |[{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Refreshing network info cache for port 6eb31688-c2e8-4f7b-a3df-3008c2065663 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.626 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] resizing rbd image be0569c8-2c59-4525-a348-590d878662d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.704 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Ensure instance console log exists: /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.706 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.707 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start _get_guest_xml network_info=[{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.714 253665 WARNING nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.719 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.720 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.723 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.723 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:27:12 compute-0 nova_compute[253661]: 2025-11-22 09:27:12.729 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.025 253665 DEBUG nova.objects.instance [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'migration_context' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ensure instance console log exists: /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.045 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.045 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.046 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.051 253665 WARNING nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.061 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.062 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.065 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.069 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.071 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838187365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.191 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.215 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.220 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.262 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803618.216964, e0b05f62-6966-4bf3-aee5-e4d2137a6cfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.263 253665 INFO nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] VM Stopped (Lifecycle Event)
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.279 253665 DEBUG nova.compute.manager [None req-23951622-87aa-415f-ac33-f27f5860db91 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:13 compute-0 podman[343243]: 2025-11-22 09:27:13.390613572 +0000 UTC m=+0.086387044 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:27:13 compute-0 ceph-mon[75021]: pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 22 09:27:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2838187365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442847026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.538 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.562 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.566 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 150 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 22 09:27:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529119042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.678 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.680 253665 DEBUG nova.virt.libvirt.vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:09Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.681 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.682 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.683 253665 DEBUG nova.objects.instance [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.695 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <uuid>2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</uuid>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <name>instance-00000059</name>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherB-server-1520180186</nova:name>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:27:12</nova:creationTime>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <nova:port uuid="6eb31688-c2e8-4f7b-a3df-3008c2065663">
Nov 22 09:27:13 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <system>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="serial">2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="uuid">2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </system>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <os>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </os>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <features>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </features>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk">
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config">
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:13 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:26:cd:01"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <target dev="tap6eb31688-c2"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/console.log" append="off"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <video>
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </video>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:27:13 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:27:13 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:27:13 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:27:13 compute-0 nova_compute[253661]: </domain>
Nov 22 09:27:13 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.696 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Preparing to wait for external event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.697 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.697 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.698 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.699 253665 DEBUG nova.virt.libvirt.vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:09Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.699 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.700 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.702 253665 DEBUG os_vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.704 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.704 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eb31688-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6eb31688-c2, col_values=(('external_ids', {'iface-id': '6eb31688-c2e8-4f7b-a3df-3008c2065663', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:cd:01', 'vm-uuid': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:13 compute-0 NetworkManager[48920]: <info>  [1763803633.7261] manager: (tap6eb31688-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.732 253665 INFO os_vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2')
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.800 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.801 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.801 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:26:cd:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.802 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Using config drive
Nov 22 09:27:13 compute-0 nova_compute[253661]: 2025-11-22 09:27:13.831 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272900218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.026 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.028 253665 DEBUG nova.objects.instance [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_devices' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.040 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <uuid>be0569c8-2c59-4525-a348-590d878662d8</uuid>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <name>instance-0000005a</name>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV254Test-server-794044049</nova:name>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:27:13</nova:creationTime>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:user uuid="6f9df33c6ddf4ec9a99024bbc6085706">tempest-ServerShowV254Test-1012776663-project-member</nova:user>
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <nova:project uuid="8b6aee60ba934808adf8732a1c4457cb">tempest-ServerShowV254Test-1012776663</nova:project>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <system>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="serial">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="uuid">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </system>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <os>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </os>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <features>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </features>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk">
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk.config">
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:14 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log" append="off"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <video>
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </video>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:27:14 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:27:14 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:27:14 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:27:14 compute-0 nova_compute[253661]: </domain>
Nov 22 09:27:14 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.118 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.119 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.119 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Using config drive
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.149 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.205 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating config drive at /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.211 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe045x__9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.252 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updated VIF entry in instance network info cache for port 6eb31688-c2e8-4f7b-a3df-3008c2065663. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.253 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.269 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.326 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating config drive at /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.332 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_6i55o6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.368 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe045x__9" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.395 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.399 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.475 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_6i55o6" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.503 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.506 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.529 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/442847026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1529119042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/272900218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.922 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.923 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deleting local config drive /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config because it was imported into RBD.
Nov 22 09:27:14 compute-0 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.957 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.957 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting local config drive /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config because it was imported into RBD.
Nov 22 09:27:14 compute-0 kernel: tap6eb31688-c2: entered promiscuous mode
Nov 22 09:27:14 compute-0 NetworkManager[48920]: <info>  [1763803634.9845] manager: (tap6eb31688-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 22 09:27:14 compute-0 ovn_controller[152872]: 2025-11-22T09:27:14Z|00908|binding|INFO|Claiming lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 for this chassis.
Nov 22 09:27:14 compute-0 ovn_controller[152872]: 2025-11-22T09:27:14Z|00909|binding|INFO|6eb31688-c2e8-4f7b-a3df-3008c2065663: Claiming fa:16:3e:26:cd:01 10.100.0.13
Nov 22 09:27:14 compute-0 nova_compute[253661]: 2025-11-22 09:27:14.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.995 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:cd:01 10.100.0.13'], port_security=['fa:16:3e:26:cd:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=6eb31688-c2e8-4f7b-a3df-3008c2065663) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.996 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb31688-c2e8-4f7b-a3df-3008c2065663 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis
Nov 22 09:27:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.998 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:27:15 compute-0 ovn_controller[152872]: 2025-11-22T09:27:15Z|00910|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 ovn-installed in OVS
Nov 22 09:27:15 compute-0 ovn_controller[152872]: 2025-11-22T09:27:15Z|00911|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 up in Southbound
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:15 compute-0 systemd-udevd[343470]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf23329-6ff2-4e19-a602-ce88c8923a6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 NetworkManager[48920]: <info>  [1763803635.0360] device (tap6eb31688-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:27:15 compute-0 NetworkManager[48920]: <info>  [1763803635.0368] device (tap6eb31688-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:27:15 compute-0 systemd-machined[215941]: New machine qemu-107-instance-00000059.
Nov 22 09:27:15 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000059.
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.053 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99182b32-c715-464f-a913-4ed37fed169f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.057 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e798d461-63f0-4f65-8ef6-996773877b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 systemd-machined[215941]: New machine qemu-108-instance-0000005a.
Nov 22 09:27:15 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-0000005a.
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.088 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5c67871d-fe68-437f-86ab-9b4ba676c262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3ea4cb-3e52-41fc-b34b-e290866912a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343489, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.134 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e21ccbd0-9aa9-4ce9-aa38-03aa01c46eb8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343494, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343494, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.142 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.253 253665 DEBUG nova.compute.manager [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.253 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG nova.compute.manager [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Processing event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.513 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.5129354, be0569c8-2c59-4525-a348-590d878662d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.513 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Resumed (Lifecycle Event)
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.516 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.517 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.520 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance spawned successfully.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.521 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.534 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.544 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.545 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.545 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.566 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.567 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.5131536, be0569c8-2c59-4525-a348-590d878662d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.567 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Started (Lifecycle Event)
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.594 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.601 253665 INFO nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 4.92 seconds to spawn the instance on the hypervisor.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.602 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.604 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.657 253665 INFO nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 5.90 seconds to build instance.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.672 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:15 compute-0 ceph-mon[75021]: pgmap v1912: 305 pgs: 305 active+clean; 150 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.976 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.9764066, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.977 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Started (Lifecycle Event)
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.978 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.982 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance spawned successfully.
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.985 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:27:15 compute-0 nova_compute[253661]: 2025-11-22 09:27:15.997 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.004 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.006 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.008 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.008 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.036 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.036 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.976573, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.037 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Paused (Lifecycle Event)
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.061 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.067 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.981308, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.067 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Resumed (Lifecycle Event)
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.075 253665 INFO nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 6.69 seconds to spawn the instance on the hypervisor.
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.076 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.097 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.136 253665 INFO nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 7.66 seconds to build instance.
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.150 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:16 compute-0 ceph-mon[75021]: pgmap v1913: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Nov 22 09:27:16 compute-0 nova_compute[253661]: 2025-11-22 09:27:16.994 253665 INFO nova.compute.manager [None req-6a5609b2-4946-440a-b88a-75cb108cad7f ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Get console output
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.000 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.091 253665 INFO nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Rebuilding instance
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.296 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'trusted_certs' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.308 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.317 253665 DEBUG nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.319 253665 WARNING nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received unexpected event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with vm_state active and task_state None.
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.352 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_requests' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.360 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_devices' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.374 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'resources' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.384 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'migration_context' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.393 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:27:17 compute-0 nova_compute[253661]: 2025-11-22 09:27:17.396 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:27:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 213 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 09:27:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:18 compute-0 nova_compute[253661]: 2025-11-22 09:27:18.194 253665 INFO nova.compute.manager [None req-5cf462f4-2278-4724-b861-d2eb206e6b52 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Get console output
Nov 22 09:27:18 compute-0 nova_compute[253661]: 2025-11-22 09:27:18.200 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:27:18 compute-0 nova_compute[253661]: 2025-11-22 09:27:18.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:18 compute-0 ceph-mon[75021]: pgmap v1914: 305 pgs: 305 active+clean; 213 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 09:27:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 22 09:27:20 compute-0 sudo[343579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:20 compute-0 sudo[343579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343579]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:27:20 compute-0 sudo[343604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343604]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:20 compute-0 sudo[343629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343629]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:27:20 compute-0 sudo[343654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 nova_compute[253661]: 2025-11-22 09:27:20.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:20 compute-0 sudo[343654]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4b944829-0f44-4e8d-8491-25b9473e156c does not exist
Nov 22 09:27:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7ef56b29-a0ed-43d5-9823-d49e1893e25d does not exist
Nov 22 09:27:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e0af6662-810f-4752-b7bf-6108908fe0b7 does not exist
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:27:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: pgmap v1915: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:27:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:27:20 compute-0 sudo[343710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:20 compute-0 sudo[343710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343710]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:27:20 compute-0 sudo[343735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343735]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:20 compute-0 sudo[343760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:20 compute-0 sudo[343760]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:20 compute-0 sudo[343785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:27:20 compute-0 sudo[343785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.362968024 +0000 UTC m=+0.044948391 container create 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:27:21 compute-0 systemd[1]: Started libpod-conmon-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope.
Nov 22 09:27:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.339931014 +0000 UTC m=+0.021911411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.44710137 +0000 UTC m=+0.129081757 container init 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.454088325 +0000 UTC m=+0.136068692 container start 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.458363623 +0000 UTC m=+0.140344020 container attach 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:27:21 compute-0 tender_mcnulty[343867]: 167 167
Nov 22 09:27:21 compute-0 systemd[1]: libpod-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope: Deactivated successfully.
Nov 22 09:27:21 compute-0 conmon[343867]: conmon 8975b2eb2034f5fb4dde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope/container/memory.events
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.461974093 +0000 UTC m=+0.143954470 container died 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b44bd8b030fcc766023d2875ff6b6b33f562da4d5457e2e415b215e4a70a730-merged.mount: Deactivated successfully.
Nov 22 09:27:21 compute-0 podman[343851]: 2025-11-22 09:27:21.51033801 +0000 UTC m=+0.192318377 container remove 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:27:21 compute-0 systemd[1]: libpod-conmon-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope: Deactivated successfully.
Nov 22 09:27:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 09:27:21 compute-0 podman[343890]: 2025-11-22 09:27:21.693159647 +0000 UTC m=+0.041997127 container create e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:27:21 compute-0 systemd[1]: Started libpod-conmon-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope.
Nov 22 09:27:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:21 compute-0 podman[343890]: 2025-11-22 09:27:21.675297578 +0000 UTC m=+0.024135078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:21 compute-0 podman[343890]: 2025-11-22 09:27:21.809129082 +0000 UTC m=+0.157966582 container init e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:27:21 compute-0 podman[343890]: 2025-11-22 09:27:21.817895153 +0000 UTC m=+0.166732633 container start e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:27:21 compute-0 podman[343890]: 2025-11-22 09:27:21.823711569 +0000 UTC m=+0.172549099 container attach e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:22 compute-0 adoring_newton[343907]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:27:22 compute-0 adoring_newton[343907]: --> relative data size: 1.0
Nov 22 09:27:22 compute-0 adoring_newton[343907]: --> All data devices are unavailable
Nov 22 09:27:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:22 compute-0 systemd[1]: libpod-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Deactivated successfully.
Nov 22 09:27:22 compute-0 podman[343890]: 2025-11-22 09:27:22.940153212 +0000 UTC m=+1.288990692 container died e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:27:22 compute-0 systemd[1]: libpod-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Consumed 1.055s CPU time.
Nov 22 09:27:23 compute-0 ceph-mon[75021]: pgmap v1916: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 09:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38-merged.mount: Deactivated successfully.
Nov 22 09:27:23 compute-0 podman[343890]: 2025-11-22 09:27:23.078299166 +0000 UTC m=+1.427136646 container remove e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:27:23 compute-0 systemd[1]: libpod-conmon-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Deactivated successfully.
Nov 22 09:27:23 compute-0 sudo[343785]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:23 compute-0 sudo[343947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:23 compute-0 sudo[343947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:23 compute-0 sudo[343947]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:23 compute-0 sudo[343972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:27:23 compute-0 sudo[343972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:23 compute-0 sudo[343972]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:23 compute-0 sudo[343997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:23 compute-0 sudo[343997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:23 compute-0 sudo[343997]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:23 compute-0 sudo[344022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:27:23 compute-0 sudo[344022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.709051615 +0000 UTC m=+0.040136569 container create a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:27:23 compute-0 nova_compute[253661]: 2025-11-22 09:27:23.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:23 compute-0 systemd[1]: Started libpod-conmon-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope.
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.690615572 +0000 UTC m=+0.021700586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.812093527 +0000 UTC m=+0.143178471 container init a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.824509069 +0000 UTC m=+0.155594033 container start a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.828645003 +0000 UTC m=+0.159729977 container attach a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:27:23 compute-0 romantic_austin[344104]: 167 167
Nov 22 09:27:23 compute-0 systemd[1]: libpod-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope: Deactivated successfully.
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.830688674 +0000 UTC m=+0.161773668 container died a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-df77246e3de010d2ef5e2685a893b2c12a8f2308e63290bd956a83832356db2e-merged.mount: Deactivated successfully.
Nov 22 09:27:23 compute-0 podman[344088]: 2025-11-22 09:27:23.869691755 +0000 UTC m=+0.200776699 container remove a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:27:23 compute-0 systemd[1]: libpod-conmon-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope: Deactivated successfully.
Nov 22 09:27:24 compute-0 podman[344128]: 2025-11-22 09:27:24.084529868 +0000 UTC m=+0.049597999 container create 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:27:24 compute-0 systemd[1]: Started libpod-conmon-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope.
Nov 22 09:27:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:24 compute-0 podman[344128]: 2025-11-22 09:27:24.063983441 +0000 UTC m=+0.029051602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:24 compute-0 podman[344128]: 2025-11-22 09:27:24.176671374 +0000 UTC m=+0.141739525 container init 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:27:24 compute-0 podman[344128]: 2025-11-22 09:27:24.184562532 +0000 UTC m=+0.149630653 container start 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:27:24 compute-0 podman[344128]: 2025-11-22 09:27:24.188517982 +0000 UTC m=+0.153586123 container attach 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.530 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.530 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.546 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.619 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.620 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.627 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.627 253665 INFO nova.compute.claims [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:27:24 compute-0 nova_compute[253661]: 2025-11-22 09:27:24.759 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]: {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     "0": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "devices": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "/dev/loop3"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             ],
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_name": "ceph_lv0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_size": "21470642176",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "name": "ceph_lv0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "tags": {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_name": "ceph",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.crush_device_class": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.encrypted": "0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_id": "0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.vdo": "0"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             },
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "vg_name": "ceph_vg0"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         }
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     ],
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     "1": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "devices": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "/dev/loop4"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             ],
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_name": "ceph_lv1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_size": "21470642176",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "name": "ceph_lv1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "tags": {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_name": "ceph",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.crush_device_class": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.encrypted": "0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_id": "1",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.vdo": "0"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             },
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "vg_name": "ceph_vg1"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         }
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     ],
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     "2": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "devices": [
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "/dev/loop5"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             ],
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_name": "ceph_lv2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_size": "21470642176",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "name": "ceph_lv2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "tags": {
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.cluster_name": "ceph",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.crush_device_class": "",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.encrypted": "0",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osd_id": "2",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:                 "ceph.vdo": "0"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             },
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "type": "block",
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:             "vg_name": "ceph_vg2"
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:         }
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]:     ]
Nov 22 09:27:25 compute-0 mystifying_euclid[344145]: }
Nov 22 09:27:25 compute-0 ceph-mon[75021]: pgmap v1917: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Nov 22 09:27:25 compute-0 systemd[1]: libpod-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope: Deactivated successfully.
Nov 22 09:27:25 compute-0 podman[344174]: 2025-11-22 09:27:25.109253703 +0000 UTC m=+0.034179450 container died 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a-merged.mount: Deactivated successfully.
Nov 22 09:27:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350919452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.230 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.238 253665 DEBUG nova.compute.provider_tree [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.255 253665 DEBUG nova.scheduler.client.report [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:25 compute-0 podman[344174]: 2025-11-22 09:27:25.26422766 +0000 UTC m=+0.189153347 container remove 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:27:25 compute-0 systemd[1]: libpod-conmon-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope: Deactivated successfully.
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.276 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.291 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:27:25 compute-0 sudo[344022]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.331 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.331 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.355 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.375 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:27:25 compute-0 sudo[344192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:25 compute-0 sudo[344192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:25 compute-0 sudo[344192]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:25 compute-0 sudo[344217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:27:25 compute-0 sudo[344217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:25 compute-0 sudo[344217]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.473 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.477 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.478 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating image(s)
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.517 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:25 compute-0 sudo[344242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:25 compute-0 sudo[344242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:25 compute-0 sudo[344242]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.554 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.585 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:25 compute-0 sudo[344289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:27:25 compute-0 sudo[344289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.590 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.637 253665 DEBUG nova.policy [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.680 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.681 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.706 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:25 compute-0 nova_compute[253661]: 2025-11-22 09:27:25.711 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:25.973768252 +0000 UTC m=+0.044106671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.069348585 +0000 UTC m=+0.139687014 container create 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:27:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3350919452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:26 compute-0 systemd[1]: Started libpod-conmon-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope.
Nov 22 09:27:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.229 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.254573742 +0000 UTC m=+0.324912161 container init 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.263010534 +0000 UTC m=+0.333348943 container start 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:27:26 compute-0 systemd[1]: libpod-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope: Deactivated successfully.
Nov 22 09:27:26 compute-0 xenodochial_ritchie[344436]: 167 167
Nov 22 09:27:26 compute-0 conmon[344436]: conmon 1ea825c691e0a9f6148f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope/container/memory.events
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.276487413 +0000 UTC m=+0.346825812 container attach 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.277298414 +0000 UTC m=+0.347636813 container died 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-01a3355fb2c43af039164722f183a887884c2cb7e3ef79ce35696b9d18b8ee0e-merged.mount: Deactivated successfully.
Nov 22 09:27:26 compute-0 podman[344422]: 2025-11-22 09:27:26.3384128 +0000 UTC m=+0.408751199 container remove 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.338 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:27:26 compute-0 systemd[1]: libpod-conmon-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope: Deactivated successfully.
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.504 253665 DEBUG nova.objects.instance [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.518 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.518 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Ensure instance console log exists: /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:26 compute-0 podman[344530]: 2025-11-22 09:27:26.532788068 +0000 UTC m=+0.046322836 container create 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:27:26 compute-0 systemd[1]: Started libpod-conmon-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope.
Nov 22 09:27:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:26 compute-0 podman[344530]: 2025-11-22 09:27:26.511569174 +0000 UTC m=+0.025103952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:27:26 compute-0 podman[344530]: 2025-11-22 09:27:26.628437513 +0000 UTC m=+0.141972281 container init 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:27:26 compute-0 podman[344530]: 2025-11-22 09:27:26.635834969 +0000 UTC m=+0.149369737 container start 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:27:26 compute-0 podman[344530]: 2025-11-22 09:27:26.640332572 +0000 UTC m=+0.153867320 container attach 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:27:26 compute-0 nova_compute[253661]: 2025-11-22 09:27:26.710 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Successfully created port: 28cfa04a-0181-49ef-808d-38b57a093820 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:27:27 compute-0 ceph-mon[75021]: pgmap v1918: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Nov 22 09:27:27 compute-0 nova_compute[253661]: 2025-11-22 09:27:27.515 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:27:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 235 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 154 op/s
Nov 22 09:27:27 compute-0 reverent_jang[344549]: {
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_id": 1,
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "type": "bluestore"
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     },
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_id": 0,
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "type": "bluestore"
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     },
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_id": 2,
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:27:27 compute-0 reverent_jang[344549]:         "type": "bluestore"
Nov 22 09:27:27 compute-0 reverent_jang[344549]:     }
Nov 22 09:27:27 compute-0 reverent_jang[344549]: }
Nov 22 09:27:27 compute-0 systemd[1]: libpod-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Deactivated successfully.
Nov 22 09:27:27 compute-0 systemd[1]: libpod-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Consumed 1.022s CPU time.
Nov 22 09:27:27 compute-0 podman[344530]: 2025-11-22 09:27:27.687863422 +0000 UTC m=+1.201398170 container died 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a-merged.mount: Deactivated successfully.
Nov 22 09:27:27 compute-0 podman[344530]: 2025-11-22 09:27:27.75820321 +0000 UTC m=+1.271737958 container remove 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:27:27 compute-0 systemd[1]: libpod-conmon-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Deactivated successfully.
Nov 22 09:27:27 compute-0 sudo[344289]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:27:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:27:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 40cecc41-241e-4f5f-a74b-221b8ffba620 does not exist
Nov 22 09:27:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 30dedeef-a0ad-4172-ad1c-f27b104439b7 does not exist
Nov 22 09:27:27 compute-0 sudo[344595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:27:27 compute-0 sudo[344595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:27 compute-0 sudo[344595]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:27 compute-0 sudo[344620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:27:27 compute-0 sudo[344620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:27:27 compute-0 sudo[344620]: pam_unix(sudo:session): session closed for user root
Nov 22 09:27:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.471 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Successfully updated port: 28cfa04a-0181-49ef-808d-38b57a093820 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.491 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.492 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.492 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.609 253665 DEBUG nova.compute.manager [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-changed-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.610 253665 DEBUG nova.compute.manager [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing instance network info cache due to event network-changed-28cfa04a-0181-49ef-808d-38b57a093820. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.610 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.716 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:27:28 compute-0 nova_compute[253661]: 2025-11-22 09:27:28.743 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:28 compute-0 ceph-mon[75021]: pgmap v1919: 305 pgs: 305 active+clean; 235 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 154 op/s
Nov 22 09:27:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:27:29 compute-0 ovn_controller[152872]: 2025-11-22T09:27:29Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:cd:01 10.100.0.13
Nov 22 09:27:29 compute-0 ovn_controller[152872]: 2025-11-22T09:27:29Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:cd:01 10.100.0.13
Nov 22 09:27:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.041 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.055 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance network_info: |[{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.058 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start _get_guest_xml network_info=[{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.063 253665 WARNING nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.070 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.070 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.074 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.074 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.081 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290931064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.582 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.608 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:30 compute-0 nova_compute[253661]: 2025-11-22 09:27:30.611 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:30 compute-0 ceph-mon[75021]: pgmap v1920: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Nov 22 09:27:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1290931064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555189802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.074 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.076 253665 DEBUG nova.virt.libvirt.vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:25Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.076 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.078 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.080 253665 DEBUG nova.objects.instance [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.095 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <uuid>753ce408-3988-4fe2-b140-9cae60fcdd6b</uuid>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <name>instance-0000005b</name>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherB-server-695887300</nova:name>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:27:30</nova:creationTime>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <nova:port uuid="28cfa04a-0181-49ef-808d-38b57a093820">
Nov 22 09:27:31 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <system>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="serial">753ce408-3988-4fe2-b140-9cae60fcdd6b</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="uuid">753ce408-3988-4fe2-b140-9cae60fcdd6b</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </system>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <os>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </os>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <features>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </features>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk">
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config">
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:31 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:26:df:55"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <target dev="tap28cfa04a-01"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/console.log" append="off"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <video>
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </video>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:27:31 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:27:31 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:27:31 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:27:31 compute-0 nova_compute[253661]: </domain>
Nov 22 09:27:31 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Preparing to wait for external event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.098 253665 DEBUG nova.virt.libvirt.vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:25Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.098 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.099 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.099 253665 DEBUG os_vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.101 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.101 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28cfa04a-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28cfa04a-01, col_values=(('external_ids', {'iface-id': '28cfa04a-0181-49ef-808d-38b57a093820', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:df:55', 'vm-uuid': '753ce408-3988-4fe2-b140-9cae60fcdd6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:31 compute-0 NetworkManager[48920]: <info>  [1763803651.1088] manager: (tap28cfa04a-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.116 253665 INFO os_vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01')
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.222 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.223 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.223 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:26:df:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.224 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Using config drive
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.248 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 93 op/s
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.739 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating config drive at /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.744 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu5hnakw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.780 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updated VIF entry in instance network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.781 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.797 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.887 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu5hnakw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3555189802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.989 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:31 compute-0 nova_compute[253661]: 2025-11-22 09:27:31.993 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:32 compute-0 nova_compute[253661]: 2025-11-22 09:27:32.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:32 compute-0 nova_compute[253661]: 2025-11-22 09:27:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:27:32 compute-0 nova_compute[253661]: 2025-11-22 09:27:32.739 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:32 compute-0 nova_compute[253661]: 2025-11-22 09:27:32.740 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:32 compute-0 nova_compute[253661]: 2025-11-22 09:27:32.740 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:27:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:33 compute-0 ceph-mon[75021]: pgmap v1921: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 93 op/s
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.442 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.443 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deleting local config drive /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config because it was imported into RBD.
Nov 22 09:27:33 compute-0 kernel: tap28cfa04a-01: entered promiscuous mode
Nov 22 09:27:33 compute-0 NetworkManager[48920]: <info>  [1763803653.5181] manager: (tap28cfa04a-01): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 22 09:27:33 compute-0 ovn_controller[152872]: 2025-11-22T09:27:33Z|00912|binding|INFO|Claiming lport 28cfa04a-0181-49ef-808d-38b57a093820 for this chassis.
Nov 22 09:27:33 compute-0 ovn_controller[152872]: 2025-11-22T09:27:33Z|00913|binding|INFO|28cfa04a-0181-49ef-808d-38b57a093820: Claiming fa:16:3e:26:df:55 10.100.0.3
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:33 compute-0 ovn_controller[152872]: 2025-11-22T09:27:33Z|00914|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 ovn-installed in OVS
Nov 22 09:27:33 compute-0 ovn_controller[152872]: 2025-11-22T09:27:33Z|00915|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 up in Southbound
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.534 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:df:55 10.100.0.3'], port_security=['fa:16:3e:26:df:55 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '753ce408-3988-4fe2-b140-9cae60fcdd6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=28cfa04a-0181-49ef-808d-38b57a093820) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.537 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 28cfa04a-0181-49ef-808d-38b57a093820 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.541 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:27:33 compute-0 systemd-udevd[344784]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:27:33 compute-0 systemd-machined[215941]: New machine qemu-109-instance-0000005b.
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6019d3ab-9771-4b3b-9a78-a66de3a1b9c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-0000005b.
Nov 22 09:27:33 compute-0 NetworkManager[48920]: <info>  [1763803653.5777] device (tap28cfa04a-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:27:33 compute-0 NetworkManager[48920]: <info>  [1763803653.5785] device (tap28cfa04a-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e96b62eb-810f-4e2f-a4e1-79e997b0cb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.612 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[567c3a86-b20d-438f-ab9d-d63322febb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 322 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 6.0 MiB/s wr, 145 op/s
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[40823b05-139a-46c6-b39e-bc9c2d9bd9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af265c35-abf1-475f-b761-5923173491d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344796, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.693 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[751b7826-d4b5-489c-97b4-2bb72d1b7844]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344798, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344798, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.701 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.702 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.703 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.704 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.769 253665 DEBUG nova.compute.manager [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:33 compute-0 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG nova.compute.manager [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Processing event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.218 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.2172937, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.219 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Started (Lifecycle Event)
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.221 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.230 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance spawned successfully.
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.231 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.257 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.265 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.266 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.267 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.268 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.268 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.269 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.296 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.297 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.217594, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Paused (Lifecycle Event)
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.320 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.324 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.225146, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.324 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Resumed (Lifecycle Event)
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.335 253665 INFO nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Took 8.86 seconds to spawn the instance on the hypervisor.
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.336 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.347 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.369 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.389 253665 INFO nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Took 9.80 seconds to build instance.
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.403 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.829 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.845 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.847 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.847 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.871 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.871 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:27:34 compute-0 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510388740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.339 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.452 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.452 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.456 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.457 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.460 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.460 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.464 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.464 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.641 253665 INFO nova.compute.manager [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Pausing
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.643 253665 DEBUG nova.objects.instance [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.661 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.662 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3286MB free_disk=59.83132553100586GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.663 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.663 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.675 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803655.6754906, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.676 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Paused (Lifecycle Event)
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.677 253665 DEBUG nova.compute.manager [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.712 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.716 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:35 compute-0 ceph-mon[75021]: pgmap v1922: 305 pgs: 305 active+clean; 322 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 6.0 MiB/s wr, 145 op/s
Nov 22 09:27:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/510388740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.770 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.804 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance be0569c8-2c59-4525-a348-590d878662d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 753ce408-3988-4fe2-b140-9cae60fcdd6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.806 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.852 253665 DEBUG nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.853 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 WARNING nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state None.
Nov 22 09:27:35 compute-0 nova_compute[253661]: 2025-11-22 09:27:35.921 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623197426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.446 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.452 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.471 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.505 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:27:36 compute-0 nova_compute[253661]: 2025-11-22 09:27:36.506 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:37 compute-0 ceph-mon[75021]: pgmap v1923: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Nov 22 09:27:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2623197426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:37 compute-0 podman[344886]: 2025-11-22 09:27:37.40908763 +0000 UTC m=+0.065807536 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:27:37 compute-0 podman[344887]: 2025-11-22 09:27:37.41426778 +0000 UTC m=+0.069757835 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:27:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 6.0 MiB/s wr, 173 op/s
Nov 22 09:27:37 compute-0 nova_compute[253661]: 2025-11-22 09:27:37.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:37 compute-0 nova_compute[253661]: 2025-11-22 09:27:37.887 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:27:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.166 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.167 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.167 253665 INFO nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Shelving
Nov 22 09:27:38 compute-0 kernel: tap28cfa04a-01 (unregistering): left promiscuous mode
Nov 22 09:27:38 compute-0 NetworkManager[48920]: <info>  [1763803658.4645] device (tap28cfa04a-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:27:38 compute-0 ovn_controller[152872]: 2025-11-22T09:27:38Z|00916|binding|INFO|Releasing lport 28cfa04a-0181-49ef-808d-38b57a093820 from this chassis (sb_readonly=0)
Nov 22 09:27:38 compute-0 ovn_controller[152872]: 2025-11-22T09:27:38Z|00917|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 down in Southbound
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 ovn_controller[152872]: 2025-11-22T09:27:38Z|00918|binding|INFO|Removing iface tap28cfa04a-01 ovn-installed in OVS
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.480 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:df:55 10.100.0.3'], port_security=['fa:16:3e:26:df:55 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '753ce408-3988-4fe2-b140-9cae60fcdd6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=28cfa04a-0181-49ef-808d-38b57a093820) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.481 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 28cfa04a-0181-49ef-808d-38b57a093820 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.483 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.505 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67b8b4f2-83dd-4ea5-aa5a-240658cd43db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005b.scope: Consumed 1.973s CPU time.
Nov 22 09:27:38 compute-0 systemd-machined[215941]: Machine qemu-109-instance-0000005b terminated.
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.544 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6b826fe0-b952-4ec7-8ce7-181f0cc582c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.548 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[90e00937-6e46-4b07-b14f-ac5b901f2a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.564 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af6f4357-678e-4495-bce6-bd5e2608287b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2d9c7b-bcaa-47bc-9e8b-647ce4cda397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344941, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.626 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.627 253665 DEBUG nova.objects.instance [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5044a67-16fe-411b-8023-8191be447ca6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.660 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.670 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.671 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.792 253665 DEBUG nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.792 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 WARNING nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state shelving.
Nov 22 09:27:38 compute-0 nova_compute[253661]: 2025-11-22 09:27:38.887 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Beginning cold snapshot process
Nov 22 09:27:39 compute-0 nova_compute[253661]: 2025-11-22 09:27:39.026 253665 DEBUG nova.virt.libvirt.imagebackend [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:27:39 compute-0 nova_compute[253661]: 2025-11-22 09:27:39.270 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(ef11e4df533b4e6db3d19539cbf5b443) on rbd image(753ce408-3988-4fe2-b140-9cae60fcdd6b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:27:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 22 09:27:39 compute-0 ceph-mon[75021]: pgmap v1924: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 6.0 MiB/s wr, 173 op/s
Nov 22 09:27:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 22 09:27:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 22 09:27:39 compute-0 nova_compute[253661]: 2025-11-22 09:27:39.495 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk@ef11e4df533b4e6db3d19539cbf5b443 to images/e371ade6-6f76-4ac1-a229-6a09e518f8de clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:27:39 compute-0 nova_compute[253661]: 2025-11-22 09:27:39.621 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/e371ade6-6f76-4ac1-a229-6a09e518f8de flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:27:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Nov 22 09:27:39 compute-0 nova_compute[253661]: 2025-11-22 09:27:39.931 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(ef11e4df533b4e6db3d19539cbf5b443) on rbd image(753ce408-3988-4fe2-b140-9cae60fcdd6b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 22 09:27:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 22 09:27:40 compute-0 ceph-mon[75021]: osdmap e262: 3 total, 3 up, 3 in
Nov 22 09:27:40 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.456 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(e371ade6-6f76-4ac1-a229-6a09e518f8de) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.922 253665 DEBUG nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.923 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.923 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 DEBUG nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:40 compute-0 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 WARNING nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state shelving_image_uploading.
Nov 22 09:27:41 compute-0 nova_compute[253661]: 2025-11-22 09:27:41.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:41 compute-0 nova_compute[253661]: 2025-11-22 09:27:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:27:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 22 09:27:41 compute-0 ceph-mon[75021]: pgmap v1926: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Nov 22 09:27:41 compute-0 ceph-mon[75021]: osdmap e263: 3 total, 3 up, 3 in
Nov 22 09:27:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 22 09:27:41 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 22 09:27:41 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 22 09:27:41 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005a.scope: Consumed 13.493s CPU time.
Nov 22 09:27:41 compute-0 systemd-machined[215941]: Machine qemu-108-instance-0000005a terminated.
Nov 22 09:27:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 74 KiB/s wr, 99 op/s
Nov 22 09:27:41 compute-0 nova_compute[253661]: 2025-11-22 09:27:41.761 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance shutdown successfully after 24 seconds.
Nov 22 09:27:41 compute-0 nova_compute[253661]: 2025-11-22 09:27:41.768 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.
Nov 22 09:27:41 compute-0 nova_compute[253661]: 2025-11-22 09:27:41.774 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.204 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting instance files /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.205 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deletion of /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del complete
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.412 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.413 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating image(s)
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.456 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:42 compute-0 ceph-mon[75021]: osdmap e264: 3 total, 3 up, 3 in
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.618 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.646 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.653 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.767 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.768 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.770 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.770 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.806 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:42 compute-0 nova_compute[253661]: 2025-11-22 09:27:42.811 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:43 compute-0 ceph-mon[75021]: pgmap v1929: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 74 KiB/s wr, 99 op/s
Nov 22 09:27:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 351 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.442 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:44 compute-0 podman[345208]: 2025-11-22 09:27:44.458202248 +0000 UTC m=+0.137754685 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.530 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] resizing rbd image be0569c8-2c59-4525-a348-590d878662d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.677 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Snapshot image upload complete
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.678 253665 DEBUG nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:44 compute-0 ceph-mon[75021]: pgmap v1930: 305 pgs: 305 active+clean; 351 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.747 253665 INFO nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Shelve offloading
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.761 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.762 253665 DEBUG nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:44 compute-0 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG nova.network.neutron [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.096 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.097 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ensure instance console log exists: /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.098 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.098 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.099 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.101 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.107 253665 WARNING nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.115 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.116 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.122 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.122 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.123 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.123 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.124 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.124 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.127 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'vcpu_model' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.141 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322883611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.618 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 347 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 162 op/s
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.652 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:45 compute-0 nova_compute[253661]: 2025-11-22 09:27:45.657 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1322883611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:27:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209297601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.144 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.160 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.165 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <uuid>be0569c8-2c59-4525-a348-590d878662d8</uuid>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <name>instance-0000005a</name>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV254Test-server-794044049</nova:name>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:27:45</nova:creationTime>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:user uuid="6f9df33c6ddf4ec9a99024bbc6085706">tempest-ServerShowV254Test-1012776663-project-member</nova:user>
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <nova:project uuid="8b6aee60ba934808adf8732a1c4457cb">tempest-ServerShowV254Test-1012776663</nova:project>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="serial">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="uuid">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk">
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk.config">
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:27:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log" append="off"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:27:46 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:27:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:27:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:27:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:27:46 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.238 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.238 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.239 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Using config drive
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.265 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.284 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'ec2_ids' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.471 253665 DEBUG nova.network.neutron [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.486 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.508 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating config drive at /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.513 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd65biy7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.658 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd65biy7_" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.687 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.691 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:46 compute-0 ceph-mon[75021]: pgmap v1931: 305 pgs: 305 active+clean; 347 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 162 op/s
Nov 22 09:27:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3209297601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.889 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:46 compute-0 nova_compute[253661]: 2025-11-22 09:27:46.890 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting local config drive /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config because it was imported into RBD.
Nov 22 09:27:46 compute-0 systemd-machined[215941]: New machine qemu-110-instance-0000005a.
Nov 22 09:27:46 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-0000005a.
Nov 22 09:27:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 331 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 134 op/s
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.907 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.909 253665 DEBUG nova.objects.instance [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for be0569c8-2c59-4525-a348-590d878662d8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803667.9195416, be0569c8-2c59-4525-a348-590d878662d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Resumed (Lifecycle Event)
Nov 22 09:27:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.927 253665 DEBUG nova.virt.libvirt.vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:27:44.678146',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e371ade6-6f76-4ac1-a229-6a09e518f8de'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:38Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.928 253665 DEBUG nova.network.os_vif_util [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:27:47 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.929 253665 DEBUG nova.network.os_vif_util [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.930 253665 DEBUG os_vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.933 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.933 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28cfa04a-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.941 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.942 253665 INFO os_vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01')
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.970 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance spawned successfully.
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.971 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:27:47 compute-0 nova_compute[253661]: 2025-11-22 09:27:47.974 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.000 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.001 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803667.921067, be0569c8-2c59-4525-a348-590d878662d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.002 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Started (Lifecycle Event)
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.007 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.008 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.008 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.009 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.009 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.010 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.018 253665 DEBUG nova.compute.manager [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-changed-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG nova.compute.manager [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing instance network info cache due to event network-changed-28cfa04a-0181-49ef-808d-38b57a093820. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.020 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.021 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.026 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.052 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.075 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.129 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.129 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.130 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.184 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.386 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deleting instance files /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b_del
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.387 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deletion of /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b_del complete
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.480 253665 INFO nova.scheduler.client.report [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance 753ce408-3988-4fe2-b140-9cae60fcdd6b
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.521 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.521 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.575 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.575 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.577 253665 INFO nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Terminating instance
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquired lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:27:48 compute-0 nova_compute[253661]: 2025-11-22 09:27:48.624 253665 DEBUG oslo_concurrency.processutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:48 compute-0 ceph-mon[75021]: pgmap v1932: 305 pgs: 305 active+clean; 331 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 134 op/s
Nov 22 09:27:48 compute-0 ceph-mon[75021]: osdmap e265: 3 total, 3 up, 3 in
Nov 22 09:27:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860459903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.113 253665 DEBUG oslo_concurrency.processutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.119 253665 DEBUG nova.compute.provider_tree [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.137 253665 DEBUG nova.scheduler.client.report [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.157 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.215 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:49 compute-0 nova_compute[253661]: 2025-11-22 09:27:49.447 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:27:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 258 op/s
Nov 22 09:27:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2860459903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.179 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.195 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Releasing lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.196 253665 DEBUG nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:27:50 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 22 09:27:50 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Consumed 3.179s CPU time.
Nov 22 09:27:50 compute-0 systemd-machined[215941]: Machine qemu-110-instance-0000005a terminated.
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.422 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.423 253665 DEBUG nova.objects.instance [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'resources' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.580 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.619 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updated VIF entry in instance network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.620 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": null, "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap28cfa04a-01", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:50 compute-0 nova_compute[253661]: 2025-11-22 09:27:50.639 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:27:51 compute-0 ceph-mon[75021]: pgmap v1934: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 258 op/s
Nov 22 09:27:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:27:52
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'vms']
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:27:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:27:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.925 253665 INFO nova.virt.libvirt.driver [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting instance files /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.926 253665 INFO nova.virt.libvirt.driver [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deletion of /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del complete
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 INFO nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 2.80 seconds to destroy the instance on the hypervisor.
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 DEBUG oslo.service.loopingcall [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 DEBUG nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:27:52 compute-0 nova_compute[253661]: 2025-11-22 09:27:52.997 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.137 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.154 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.158 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.158 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.159 253665 INFO nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shelving
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.171 253665 INFO nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 0.17 seconds to deallocate network for instance.
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.189 253665 DEBUG nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.219 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.219 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.293 253665 DEBUG oslo_concurrency.processutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:27:53 compute-0 ceph-mon[75021]: pgmap v1935: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.625 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803658.6240451, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.626 253665 INFO nova.compute.manager [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Stopped (Lifecycle Event)
Nov 22 09:27:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 285 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.644 253665 DEBUG nova.compute.manager [None req-235fe83e-6187-46fc-be7b-9c7c67970931 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:27:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:27:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114288654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.899 253665 DEBUG oslo_concurrency.processutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.906 253665 DEBUG nova.compute.provider_tree [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.926 253665 DEBUG nova.scheduler.client.report [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.949 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:53 compute-0 nova_compute[253661]: 2025-11-22 09:27:53.989 253665 INFO nova.scheduler.client.report [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Deleted allocations for instance be0569c8-2c59-4525-a348-590d878662d8
Nov 22 09:27:54 compute-0 nova_compute[253661]: 2025-11-22 09:27:54.081 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4114288654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:27:55 compute-0 nova_compute[253661]: 2025-11-22 09:27:55.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:27:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 261 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 836 KiB/s wr, 190 op/s
Nov 22 09:27:55 compute-0 ceph-mon[75021]: pgmap v1936: 305 pgs: 305 active+clean; 285 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Nov 22 09:27:57 compute-0 ceph-mon[75021]: pgmap v1937: 305 pgs: 305 active+clean; 261 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 836 KiB/s wr, 190 op/s
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.220 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance shutdown successfully after 4 seconds.
Nov 22 09:27:57 compute-0 kernel: tapb1fc96be-00 (unregistering): left promiscuous mode
Nov 22 09:27:57 compute-0 NetworkManager[48920]: <info>  [1763803677.3022] device (tapb1fc96be-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:27:57 compute-0 ovn_controller[152872]: 2025-11-22T09:27:57Z|00919|binding|INFO|Releasing lport b1fc96be-009e-46a8-829c-b7a0bc42af60 from this chassis (sb_readonly=0)
Nov 22 09:27:57 compute-0 ovn_controller[152872]: 2025-11-22T09:27:57Z|00920|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 down in Southbound
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 ovn_controller[152872]: 2025-11-22T09:27:57Z|00921|binding|INFO|Removing iface tapb1fc96be-00 ovn-installed in OVS
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.332 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.335 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.338 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.352 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.369 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3e7f48-a42f-4b94-bfce-67d30f43048f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 22 09:27:57 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000056.scope: Consumed 19.367s CPU time.
Nov 22 09:27:57 compute-0 systemd-machined[215941]: Machine qemu-104-instance-00000056 terminated.
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.417 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9be7921c-f59a-472b-a83d-f75af3cc4010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b94c4b78-39db-410b-8507-df16c247ab7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abf40546-4da7-46a3-8637-677e8a914f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.467 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.468 253665 DEBUG nova.objects.instance [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e60965f-e60d-44ca-8880-27e68b5628a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345587, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.518 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea3212-7824-46fa-87d6-b5f8034bc3f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345588, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345588, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:27:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.561 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:27:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 376 KiB/s wr, 190 op/s
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.904 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning cold snapshot process
Nov 22 09:27:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:27:57 compute-0 nova_compute[253661]: 2025-11-22 09:27:57.940 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:27:58 compute-0 nova_compute[253661]: 2025-11-22 09:27:58.057 253665 DEBUG nova.virt.libvirt.imagebackend [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:27:58 compute-0 nova_compute[253661]: 2025-11-22 09:27:58.237 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(afac88730f284b09ab567ec387418c7b) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:27:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 DEBUG nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 WARNING nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:27:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 22 09:27:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 22 09:27:59 compute-0 ceph-mon[75021]: pgmap v1938: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 376 KiB/s wr, 190 op/s
Nov 22 09:27:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 09:27:59 compute-0 nova_compute[253661]: 2025-11-22 09:27:59.996 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@afac88730f284b09ab567ec387418c7b to images/38789998-3bd3-4ad6-a223-58e845dd36f2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:28:00 compute-0 nova_compute[253661]: 2025-11-22 09:28:00.586 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:00 compute-0 ceph-mon[75021]: osdmap e266: 3 total, 3 up, 3 in
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.191 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/38789998-3bd3-4ad6-a223-58e845dd36f2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.475 253665 DEBUG nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.477 253665 WARNING nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:28:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 09:28:01 compute-0 ceph-mon[75021]: pgmap v1940: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 09:28:01 compute-0 nova_compute[253661]: 2025-11-22 09:28:01.788 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(afac88730f284b09ab567ec387418c7b) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:28:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 22 09:28:02 compute-0 ceph-mon[75021]: pgmap v1941: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 09:28:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 22 09:28:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015192456005404702 of space, bias 1.0, pg target 0.4557736801621411 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001336931041218988 of space, bias 1.0, pg target 0.40107931236569644 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:28:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:28:02 compute-0 nova_compute[253661]: 2025-11-22 09:28:02.765 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(38789998-3bd3-4ad6-a223-58e845dd36f2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:28:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:02 compute-0 nova_compute[253661]: 2025-11-22 09:28:02.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 318 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 59 op/s
Nov 22 09:28:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 22 09:28:03 compute-0 ceph-mon[75021]: osdmap e267: 3 total, 3 up, 3 in
Nov 22 09:28:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 22 09:28:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 22 09:28:04 compute-0 nova_compute[253661]: 2025-11-22 09:28:04.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:04 compute-0 ceph-mon[75021]: pgmap v1943: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 318 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 59 op/s
Nov 22 09:28:04 compute-0 ceph-mon[75021]: osdmap e268: 3 total, 3 up, 3 in
Nov 22 09:28:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:04.832 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:28:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:04.833 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:28:04 compute-0 nova_compute[253661]: 2025-11-22 09:28:04.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.421 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803670.4197698, be0569c8-2c59-4525-a348-590d878662d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.422 253665 INFO nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Stopped (Lifecycle Event)
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.447 253665 DEBUG nova.compute.manager [None req-173c971e-254e-4772-aa4c-a7c7ab2c551d - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.5 MiB/s wr, 131 op/s
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.793 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.793 253665 DEBUG nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.857 253665 INFO nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shelve offloading
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.866 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.867 253665 DEBUG nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:28:05 compute-0 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG nova.network.neutron [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:28:06 compute-0 nova_compute[253661]: 2025-11-22 09:28:06.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:06 compute-0 ceph-mon[75021]: pgmap v1945: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.5 MiB/s wr, 131 op/s
Nov 22 09:28:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 104 op/s
Nov 22 09:28:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:07 compute-0 nova_compute[253661]: 2025-11-22 09:28:07.944 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:08 compute-0 nova_compute[253661]: 2025-11-22 09:28:08.415 253665 DEBUG nova.network.neutron [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:08 compute-0 podman[345732]: 2025-11-22 09:28:08.417713302 +0000 UTC m=+0.091147373 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:28:08 compute-0 podman[345731]: 2025-11-22 09:28:08.424910473 +0000 UTC m=+0.109650678 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 09:28:08 compute-0 nova_compute[253661]: 2025-11-22 09:28:08.437 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:28:08 compute-0 ovn_controller[152872]: 2025-11-22T09:28:08Z|00922|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:28:08 compute-0 nova_compute[253661]: 2025-11-22 09:28:08.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:08 compute-0 ceph-mon[75021]: pgmap v1946: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 104 op/s
Nov 22 09:28:09 compute-0 ovn_controller[152872]: 2025-11-22T09:28:09Z|00923|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 09:28:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Nov 22 09:28:09 compute-0 nova_compute[253661]: 2025-11-22 09:28:09.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.529 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.529 253665 DEBUG nova.objects.instance [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.540 253665 DEBUG nova.virt.libvirt.vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.540 253665 DEBUG nova.network.os_vif_util [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.541 253665 DEBUG nova.network.os_vif_util [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.541 253665 DEBUG os_vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.543 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1fc96be-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.551 253665 INFO os_vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.662 253665 DEBUG nova.compute.manager [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.662 253665 DEBUG nova.compute.manager [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:28:10 compute-0 ceph-mon[75021]: pgmap v1947: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.987 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting instance files /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del
Nov 22 09:28:10 compute-0 nova_compute[253661]: 2025-11-22 09:28:10.988 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deletion of /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del complete
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.081 253665 INFO nova.scheduler.client.report [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance e4f9440c-7476-4022-8d08-1b3151a9db79
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.122 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.123 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.177 253665 DEBUG oslo_concurrency.processutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349675935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 98 op/s
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.650 253665 DEBUG oslo_concurrency.processutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.655 253665 DEBUG nova.compute.provider_tree [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.671 253665 DEBUG nova.scheduler.client.report [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.694 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:11 compute-0 nova_compute[253661]: 2025-11-22 09:28:11.737 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3349675935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:28:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:28:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:28:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.462 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.463 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": null, "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb1fc96be-00", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.464 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803677.4639907, e4f9440c-7476-4022-8d08-1b3151a9db79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.465 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Stopped (Lifecycle Event)
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.494 253665 DEBUG nova.compute.manager [None req-4e836ea3-53af-4367-9d66-f74835c4ae39 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:12 compute-0 nova_compute[253661]: 2025-11-22 09:28:12.495 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:28:12 compute-0 ceph-mon[75021]: pgmap v1948: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 98 op/s
Nov 22 09:28:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:28:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:28:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 22 09:28:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 22 09:28:12 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 22 09:28:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 264 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 404 KiB/s wr, 83 op/s
Nov 22 09:28:13 compute-0 ceph-mon[75021]: osdmap e269: 3 total, 3 up, 3 in
Nov 22 09:28:14 compute-0 ceph-mon[75021]: pgmap v1950: 305 pgs: 305 active+clean; 264 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 404 KiB/s wr, 83 op/s
Nov 22 09:28:15 compute-0 podman[345811]: 2025-11-22 09:28:15.394552723 +0000 UTC m=+0.083313087 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 22 09:28:15 compute-0 nova_compute[253661]: 2025-11-22 09:28:15.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:15 compute-0 nova_compute[253661]: 2025-11-22 09:28:15.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 22 09:28:17 compute-0 ceph-mon[75021]: pgmap v1951: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 22 09:28:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Nov 22 09:28:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.375 253665 INFO nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Unshelving
Nov 22 09:28:19 compute-0 ceph-mon[75021]: pgmap v1952: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.470 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.471 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.478 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_requests' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.495 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.509 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.509 253665 INFO nova.compute.claims [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:28:19 compute-0 nova_compute[253661]: 2025-11-22 09:28:19.637 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 09:28:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624732545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.097 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.104 253665 DEBUG nova.compute.provider_tree [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.118 253665 DEBUG nova.scheduler.client.report [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.135 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.423 253665 INFO nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating port b1fc96be-009e-46a8-829c-b7a0bc42af60 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 22 09:28:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/624732545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:20 compute-0 nova_compute[253661]: 2025-11-22 09:28:20.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.083 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.083 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.084 253665 DEBUG nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.314 253665 DEBUG nova.compute.manager [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.314 253665 DEBUG nova.compute.manager [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:28:21 compute-0 nova_compute[253661]: 2025-11-22 09:28:21.315 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:28:21 compute-0 ceph-mon[75021]: pgmap v1953: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 09:28:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:23 compute-0 ceph-mon[75021]: pgmap v1954: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 09:28:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 9.4 KiB/s wr, 17 op/s
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.045 253665 DEBUG nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.071 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.074 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.075 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating image(s)
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.106 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.111 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.115 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.115 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.159 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.184 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.187 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.188 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.535 253665 DEBUG nova.virt.libvirt.imagebackend [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.590 253665 DEBUG nova.virt.libvirt.imagebackend [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.591 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning images/38789998-3bd3-4ad6-a223-58e845dd36f2@snap to None/e4f9440c-7476-4022-8d08-1b3151a9db79_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.733 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.846 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:24 compute-0 nova_compute[253661]: 2025-11-22 09:28:24.898 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.434 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Image rbd:vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.435 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.435 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ensure instance console log exists: /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.438 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start _get_guest_xml network_info=[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:27:53Z,direct_url=<?>,disk_format='raw',id=38789998-3bd3-4ad6-a223-58e845dd36f2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1215087159-shelved',owner='8a246689624d4630a70f69b70d048883',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:28:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.447 253665 WARNING nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.458 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.459 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.464 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:27:53Z,direct_url=<?>,disk_format='raw',id=38789998-3bd3-4ad6-a223-58e845dd36f2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1215087159-shelved',owner='8a246689624d4630a70f69b70d048883',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:28:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.468 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.468 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.483 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:25 compute-0 ceph-mon[75021]: pgmap v1955: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 9.4 KiB/s wr, 17 op/s
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.644 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.644 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 8.4 KiB/s wr, 13 op/s
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.659 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:28:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:28:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634363782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:28:25 compute-0 nova_compute[253661]: 2025-11-22 09:28:25.994 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.018 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.023 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:28:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448644035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.507 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.510 253665 DEBUG nova.virt.libvirt.vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='38789998-3bd3-4ad6-a223-58e845dd36f2',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',ima
ge_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:28:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.510 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.511 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.513 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.527 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <uuid>e4f9440c-7476-4022-8d08-1b3151a9db79</uuid>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <name>instance-00000056</name>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerActionsTestOtherB-server-1215087159</nova:name>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:28:25</nova:creationTime>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="38789998-3bd3-4ad6-a223-58e845dd36f2"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <nova:port uuid="b1fc96be-009e-46a8-829c-b7a0bc42af60">
Nov 22 09:28:26 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="serial">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="uuid">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk">
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config">
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:28:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:38:67:ca"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <target dev="tapb1fc96be-00"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log" append="off"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:28:26 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:28:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:28:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:28:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:28:26 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.527 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Preparing to wait for external event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.529 253665 DEBUG nova.virt.libvirt.vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='38789998-3bd3-4ad6-a223-58e845dd36f2',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:28:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.529 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.530 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.531 253665 DEBUG os_vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.532 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.534 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:28:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2634363782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:28:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/448644035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.539 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1fc96be-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.540 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1fc96be-00, col_values=(('external_ids', {'iface-id': 'b1fc96be-009e-46a8-829c-b7a0bc42af60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:67:ca', 'vm-uuid': 'e4f9440c-7476-4022-8d08-1b3151a9db79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:26 compute-0 NetworkManager[48920]: <info>  [1763803706.5430] manager: (tapb1fc96be-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.549 253665 INFO os_vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.603 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:38:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Using config drive
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.626 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.645 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:26 compute-0 nova_compute[253661]: 2025-11-22 09:28:26.681 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'keypairs' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.192 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating config drive at /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.205 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_p0v863 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.361 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_p0v863" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.391 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.395 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:27 compute-0 ceph-mon[75021]: pgmap v1956: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 8.4 KiB/s wr, 13 op/s
Nov 22 09:28:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 284 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.718 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.720 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting local config drive /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config because it was imported into RBD.
Nov 22 09:28:27 compute-0 kernel: tapb1fc96be-00: entered promiscuous mode
Nov 22 09:28:27 compute-0 NetworkManager[48920]: <info>  [1763803707.7854] manager: (tapb1fc96be-00): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Nov 22 09:28:27 compute-0 ovn_controller[152872]: 2025-11-22T09:28:27Z|00924|binding|INFO|Claiming lport b1fc96be-009e-46a8-829c-b7a0bc42af60 for this chassis.
Nov 22 09:28:27 compute-0 ovn_controller[152872]: 2025-11-22T09:28:27Z|00925|binding|INFO|b1fc96be-009e-46a8-829c-b7a0bc42af60: Claiming fa:16:3e:38:67:ca 10.100.0.10
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.793 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '7', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.794 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.795 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:28:27 compute-0 ovn_controller[152872]: 2025-11-22T09:28:27Z|00926|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 ovn-installed in OVS
Nov 22 09:28:27 compute-0 ovn_controller[152872]: 2025-11-22T09:28:27Z|00927|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 up in Southbound
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.809 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.819 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f805b50c-6a7e-43c5-89ed-6da1da7ba73c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 systemd-udevd[346208]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:28:27 compute-0 systemd-machined[215941]: New machine qemu-111-instance-00000056.
Nov 22 09:28:27 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-00000056.
Nov 22 09:28:27 compute-0 NetworkManager[48920]: <info>  [1763803707.8486] device (tapb1fc96be-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:28:27 compute-0 NetworkManager[48920]: <info>  [1763803707.8497] device (tapb1fc96be-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.867 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd7ec9b-f293-4fc6-b03c-0de3b5396433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.872 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6523d3c3-8e17-4468-a1f0-dab7fc87cc6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.909 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8086cd46-8e15-47d0-8dfd-87a224d2b2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bd9a0e-a764-41c8-93f2-d38a6bc50fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346221, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfd690c-8aab-4c4b-b9da-4915a73fa3da]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346223, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346223, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.967 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:27 compute-0 nova_compute[253661]: 2025-11-22 09:28:27.970 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.971 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.973 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:28:28 compute-0 sudo[346224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:28 compute-0 sudo[346224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:28 compute-0 sudo[346224]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:28 compute-0 sudo[346249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:28:28 compute-0 sudo[346249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:28 compute-0 sudo[346249]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:28 compute-0 sudo[346274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:28 compute-0 sudo[346274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:28 compute-0 sudo[346274]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:28 compute-0 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG nova.compute.manager [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:28 compute-0 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:28 compute-0 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:28 compute-0 nova_compute[253661]: 2025-11-22 09:28:28.204 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:28 compute-0 nova_compute[253661]: 2025-11-22 09:28:28.204 253665 DEBUG nova.compute.manager [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Processing event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:28:28 compute-0 sudo[346299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:28:28 compute-0 sudo[346299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:28 compute-0 podman[346398]: 2025-11-22 09:28:28.80961002 +0000 UTC m=+0.094340922 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:28:28 compute-0 podman[346398]: 2025-11-22 09:28:28.911463852 +0000 UTC m=+0.196194754 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.101 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1002975, e4f9440c-7476-4022-8d08-1b3151a9db79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.102 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Started (Lifecycle Event)
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.105 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.110 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.115 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance spawned successfully.
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.174 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.180 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.209 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.210 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1008117, e4f9440c-7476-4022-8d08-1b3151a9db79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.210 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Paused (Lifecycle Event)
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1090562, e4f9440c-7476-4022-8d08-1b3151a9db79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.244 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Resumed (Lifecycle Event)
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.272 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.277 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:28:29 compute-0 nova_compute[253661]: 2025-11-22 09:28:29.298 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:28:29 compute-0 ceph-mon[75021]: pgmap v1957: 305 pgs: 305 active+clean; 284 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 22 09:28:29 compute-0 sudo[346299]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:28:29 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:28:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 80 op/s
Nov 22 09:28:29 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:29 compute-0 sudo[346595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:29 compute-0 sudo[346595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:29 compute-0 sudo[346595]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:29 compute-0 sudo[346620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:28:29 compute-0 sudo[346620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:29 compute-0 sudo[346620]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:29 compute-0 sudo[346645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:29 compute-0 sudo[346645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:29 compute-0 sudo[346645]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:29 compute-0 sudo[346670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:28:29 compute-0 sudo[346670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.419 253665 DEBUG nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.420 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.422 253665 WARNING nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state shelved_offloaded and task_state spawning.
Nov 22 09:28:30 compute-0 sudo[346670]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:30 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7c21884a-5fc1-47d9-bd76-d454269cb95a does not exist
Nov 22 09:28:30 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 62d8594d-9cfe-455a-948f-b4522ca4e0af does not exist
Nov 22 09:28:30 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e313a013-0ba7-48b6-8dba-52cc174f87bf does not exist
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:28:30 compute-0 nova_compute[253661]: 2025-11-22 09:28:30.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:30 compute-0 sudo[346725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:30 compute-0 sudo[346725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:30 compute-0 sudo[346725]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:30 compute-0 ceph-mon[75021]: pgmap v1958: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 80 op/s
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 22 09:28:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 22 09:28:30 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 22 09:28:30 compute-0 sudo[346750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:28:30 compute-0 sudo[346750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:30 compute-0 sudo[346750]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:30 compute-0 sudo[346775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:30 compute-0 sudo[346775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:30 compute-0 sudo[346775]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:30 compute-0 sudo[346800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:28:30 compute-0 sudo[346800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:31 compute-0 nova_compute[253661]: 2025-11-22 09:28:31.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.230604185 +0000 UTC m=+0.055897476 container create a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:28:31 compute-0 systemd[1]: Started libpod-conmon-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope.
Nov 22 09:28:31 compute-0 nova_compute[253661]: 2025-11-22 09:28:31.279 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.205549125 +0000 UTC m=+0.030842456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.339775981 +0000 UTC m=+0.165069372 container init a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.350717546 +0000 UTC m=+0.176010847 container start a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.359551508 +0000 UTC m=+0.184844849 container attach a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:28:31 compute-0 zen_snyder[346882]: 167 167
Nov 22 09:28:31 compute-0 systemd[1]: libpod-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope: Deactivated successfully.
Nov 22 09:28:31 compute-0 conmon[346882]: conmon a842ccc35616e1d47b41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope/container/memory.events
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.364855811 +0000 UTC m=+0.190149152 container died a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:28:31 compute-0 nova_compute[253661]: 2025-11-22 09:28:31.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f653bbcf99d54241a8209df25db0f0980441b8e58bcad379723b682657b7a1-merged.mount: Deactivated successfully.
Nov 22 09:28:31 compute-0 podman[346866]: 2025-11-22 09:28:31.441325114 +0000 UTC m=+0.266618415 container remove a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:28:31 compute-0 systemd[1]: libpod-conmon-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope: Deactivated successfully.
Nov 22 09:28:31 compute-0 nova_compute[253661]: 2025-11-22 09:28:31.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:31 compute-0 podman[346905]: 2025-11-22 09:28:31.638605005 +0000 UTC m=+0.052875441 container create 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:28:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 94 op/s
Nov 22 09:28:31 compute-0 ceph-mon[75021]: osdmap e270: 3 total, 3 up, 3 in
Nov 22 09:28:31 compute-0 systemd[1]: Started libpod-conmon-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope.
Nov 22 09:28:31 compute-0 podman[346905]: 2025-11-22 09:28:31.612413856 +0000 UTC m=+0.026684332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:31 compute-0 podman[346905]: 2025-11-22 09:28:31.728633019 +0000 UTC m=+0.142903495 container init 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:28:31 compute-0 podman[346905]: 2025-11-22 09:28:31.739807809 +0000 UTC m=+0.154078255 container start 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:28:31 compute-0 podman[346905]: 2025-11-22 09:28:31.744648491 +0000 UTC m=+0.158918997 container attach 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.540 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.540 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.541 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:28:32 compute-0 nova_compute[253661]: 2025-11-22 09:28:32.541 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:32 compute-0 ceph-mon[75021]: pgmap v1960: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 94 op/s
Nov 22 09:28:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:32 compute-0 focused_villani[346922]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:28:32 compute-0 focused_villani[346922]: --> relative data size: 1.0
Nov 22 09:28:32 compute-0 focused_villani[346922]: --> All data devices are unavailable
Nov 22 09:28:32 compute-0 systemd[1]: libpod-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Deactivated successfully.
Nov 22 09:28:32 compute-0 podman[346905]: 2025-11-22 09:28:32.993060762 +0000 UTC m=+1.407331198 container died 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:28:32 compute-0 systemd[1]: libpod-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Consumed 1.173s CPU time.
Nov 22 09:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409-merged.mount: Deactivated successfully.
Nov 22 09:28:33 compute-0 podman[346905]: 2025-11-22 09:28:33.169414937 +0000 UTC m=+1.583685383 container remove 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:28:33 compute-0 systemd[1]: libpod-conmon-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Deactivated successfully.
Nov 22 09:28:33 compute-0 sudo[346800]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:33 compute-0 sudo[346964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:33 compute-0 sudo[346964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:33 compute-0 sudo[346964]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:33 compute-0 sudo[346989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:28:33 compute-0 sudo[346989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:33 compute-0 sudo[346989]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:33 compute-0 sudo[347014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:33 compute-0 sudo[347014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:33 compute-0 sudo[347014]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:33 compute-0 sudo[347039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:28:33 compute-0 sudo[347039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 253 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 22 09:28:33 compute-0 podman[347105]: 2025-11-22 09:28:33.919480337 +0000 UTC m=+0.057617711 container create 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:28:33 compute-0 podman[347105]: 2025-11-22 09:28:33.887601695 +0000 UTC m=+0.025739089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:33 compute-0 systemd[1]: Started libpod-conmon-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope.
Nov 22 09:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:34 compute-0 podman[347105]: 2025-11-22 09:28:34.04169992 +0000 UTC m=+0.179837314 container init 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:28:34 compute-0 podman[347105]: 2025-11-22 09:28:34.050295586 +0000 UTC m=+0.188432960 container start 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:28:34 compute-0 vigorous_swanson[347122]: 167 167
Nov 22 09:28:34 compute-0 systemd[1]: libpod-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope: Deactivated successfully.
Nov 22 09:28:34 compute-0 podman[347105]: 2025-11-22 09:28:34.064620146 +0000 UTC m=+0.202757660 container attach 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:28:34 compute-0 podman[347105]: 2025-11-22 09:28:34.065403416 +0000 UTC m=+0.203540790 container died 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4f315c7a28e1a9dffee950f8cd78c59f0c55c9a1aaf6dc256db6e38cb145e0e-merged.mount: Deactivated successfully.
Nov 22 09:28:34 compute-0 podman[347105]: 2025-11-22 09:28:34.186102691 +0000 UTC m=+0.324240065 container remove 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:28:34 compute-0 systemd[1]: libpod-conmon-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope: Deactivated successfully.
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.446 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:34 compute-0 podman[347147]: 2025-11-22 09:28:34.356868915 +0000 UTC m=+0.030370455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.473 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.477 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:34 compute-0 nova_compute[253661]: 2025-11-22 09:28:34.477 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:34 compute-0 podman[347147]: 2025-11-22 09:28:34.527628949 +0000 UTC m=+0.201130499 container create a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:28:34 compute-0 systemd[1]: Started libpod-conmon-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope.
Nov 22 09:28:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 22 09:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 22 09:28:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 22 09:28:34 compute-0 podman[347147]: 2025-11-22 09:28:34.941288749 +0000 UTC m=+0.614790279 container init a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:28:34 compute-0 podman[347147]: 2025-11-22 09:28:34.949862956 +0000 UTC m=+0.623364466 container start a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:28:35 compute-0 podman[347147]: 2025-11-22 09:28:35.099347404 +0000 UTC m=+0.772848924 container attach a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:28:35 compute-0 nova_compute[253661]: 2025-11-22 09:28:35.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:35 compute-0 ceph-mon[75021]: pgmap v1961: 305 pgs: 305 active+clean; 253 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 22 09:28:35 compute-0 nova_compute[253661]: 2025-11-22 09:28:35.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]: {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     "0": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "devices": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "/dev/loop3"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             ],
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_name": "ceph_lv0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_size": "21470642176",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "name": "ceph_lv0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "tags": {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_name": "ceph",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.crush_device_class": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.encrypted": "0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_id": "0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.vdo": "0"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             },
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "vg_name": "ceph_vg0"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         }
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     ],
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     "1": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "devices": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "/dev/loop4"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             ],
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_name": "ceph_lv1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_size": "21470642176",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "name": "ceph_lv1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "tags": {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_name": "ceph",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.crush_device_class": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.encrypted": "0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_id": "1",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.vdo": "0"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             },
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "vg_name": "ceph_vg1"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         }
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     ],
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     "2": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "devices": [
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "/dev/loop5"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             ],
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_name": "ceph_lv2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_size": "21470642176",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "name": "ceph_lv2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "tags": {
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.cluster_name": "ceph",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.crush_device_class": "",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.encrypted": "0",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osd_id": "2",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:                 "ceph.vdo": "0"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             },
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "type": "block",
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:             "vg_name": "ceph_vg2"
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:         }
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]:     ]
Nov 22 09:28:35 compute-0 compassionate_goldstine[347164]: }
Nov 22 09:28:35 compute-0 systemd[1]: libpod-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope: Deactivated successfully.
Nov 22 09:28:35 compute-0 podman[347147]: 2025-11-22 09:28:35.874468984 +0000 UTC m=+1.547970484 container died a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517-merged.mount: Deactivated successfully.
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:36 compute-0 ceph-mon[75021]: osdmap e271: 3 total, 3 up, 3 in
Nov 22 09:28:36 compute-0 podman[347147]: 2025-11-22 09:28:36.548824921 +0000 UTC m=+2.222326431 container remove a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.582 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:36 compute-0 systemd[1]: libpod-conmon-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope: Deactivated successfully.
Nov 22 09:28:36 compute-0 sudo[347039]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:36 compute-0 sudo[347205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:36 compute-0 sudo[347205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:36 compute-0 sudo[347205]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.719624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716719674, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1937, "num_deletes": 260, "total_data_size": 2820162, "memory_usage": 2869552, "flush_reason": "Manual Compaction"}
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 22 09:28:36 compute-0 sudo[347230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:28:36 compute-0 sudo[347230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:36 compute-0 sudo[347230]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:36 compute-0 sudo[347255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:36 compute-0 sudo[347255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:36 compute-0 sudo[347255]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859859650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716893109, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2753371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38313, "largest_seqno": 40249, "table_properties": {"data_size": 2744485, "index_size": 5508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18930, "raw_average_key_size": 20, "raw_value_size": 2726500, "raw_average_value_size": 2989, "num_data_blocks": 242, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803563, "oldest_key_time": 1763803563, "file_creation_time": 1763803716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 173532 microseconds, and 6995 cpu microseconds.
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.912 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:36 compute-0 sudo[347281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:28:36 compute-0 sudo[347281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.893151) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2753371 bytes OK
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.893177) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.963943) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.964014) EVENT_LOG_v1 {"time_micros": 1763803716964000, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.964053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2811803, prev total WAL file size 2811803, number of live WAL files 2.
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.965816) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2688KB)], [86(8667KB)]
Nov 22 09:28:36 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716965856, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11629304, "oldest_snapshot_seqno": -1}
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.994 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.994 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.998 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:28:36 compute-0 nova_compute[253661]: 2025-11-22 09:28:36.998 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6490 keys, 9955749 bytes, temperature: kUnknown
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717159980, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9955749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9910551, "index_size": 27872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 164920, "raw_average_key_size": 25, "raw_value_size": 9792492, "raw_average_value_size": 1508, "num_data_blocks": 1122, "num_entries": 6490, "num_filter_entries": 6490, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.207 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.208 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3505MB free_disk=59.89714050292969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.209 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.209 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.160643) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9955749 bytes
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.427697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.9 rd, 51.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 7022, records dropped: 532 output_compression: NoCompression
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.427750) EVENT_LOG_v1 {"time_micros": 1763803717427733, "job": 50, "event": "compaction_finished", "compaction_time_micros": 194229, "compaction_time_cpu_micros": 30750, "output_level": 6, "num_output_files": 1, "total_output_size": 9955749, "num_input_records": 7022, "num_output_records": 6490, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717429495, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717431535, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.965735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:28:37 compute-0 ceph-mon[75021]: pgmap v1963: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Nov 22 09:28:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2859859650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.553677067 +0000 UTC m=+0.053905656 container create 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:28:37 compute-0 systemd[1]: Started libpod-conmon-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope.
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.611 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:28:37 compute-0 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.525477638 +0000 UTC m=+0.025706237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 144 op/s
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.682944278 +0000 UTC m=+0.183172887 container init 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.693986745 +0000 UTC m=+0.194215334 container start 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:28:37 compute-0 festive_chatelet[347365]: 167 167
Nov 22 09:28:37 compute-0 systemd[1]: libpod-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope: Deactivated successfully.
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.711072766 +0000 UTC m=+0.211301365 container attach 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.712873931 +0000 UTC m=+0.213102510 container died 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c979b2e44a5e2c85948b3708f2698823f9990823a89253edb8c5476736129d-merged.mount: Deactivated successfully.
Nov 22 09:28:37 compute-0 podman[347349]: 2025-11-22 09:28:37.890867086 +0000 UTC m=+0.391095675 container remove 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:28:37 compute-0 systemd[1]: libpod-conmon-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope: Deactivated successfully.
Nov 22 09:28:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 22 09:28:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 22 09:28:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 22 09:28:38 compute-0 podman[347390]: 2025-11-22 09:28:38.109826891 +0000 UTC m=+0.072450742 container create 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.137 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:38 compute-0 podman[347390]: 2025-11-22 09:28:38.065808255 +0000 UTC m=+0.028432136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:28:38 compute-0 systemd[1]: Started libpod-conmon-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope.
Nov 22 09:28:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:28:38 compute-0 podman[347390]: 2025-11-22 09:28:38.293168762 +0000 UTC m=+0.255792633 container init 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:28:38 compute-0 podman[347390]: 2025-11-22 09:28:38.303155663 +0000 UTC m=+0.265779514 container start 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:28:38 compute-0 podman[347390]: 2025-11-22 09:28:38.394761797 +0000 UTC m=+0.357385678 container attach 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:28:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334154375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.665 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.675 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.697 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.725 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:28:38 compute-0 nova_compute[253661]: 2025-11-22 09:28:38.726 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:38 compute-0 ceph-mon[75021]: pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 144 op/s
Nov 22 09:28:38 compute-0 ceph-mon[75021]: osdmap e272: 3 total, 3 up, 3 in
Nov 22 09:28:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2334154375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:39 compute-0 podman[347457]: 2025-11-22 09:28:39.393675273 +0000 UTC m=+0.072254047 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 09:28:39 compute-0 charming_merkle[347408]: {
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_id": 1,
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "type": "bluestore"
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     },
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_id": 0,
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "type": "bluestore"
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     },
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_id": 2,
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:28:39 compute-0 charming_merkle[347408]:         "type": "bluestore"
Nov 22 09:28:39 compute-0 charming_merkle[347408]:     }
Nov 22 09:28:39 compute-0 charming_merkle[347408]: }
Nov 22 09:28:39 compute-0 podman[347456]: 2025-11-22 09:28:39.416728744 +0000 UTC m=+0.095140644 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:28:39 compute-0 systemd[1]: libpod-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Deactivated successfully.
Nov 22 09:28:39 compute-0 systemd[1]: libpod-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Consumed 1.113s CPU time.
Nov 22 09:28:39 compute-0 podman[347390]: 2025-11-22 09:28:39.445325522 +0000 UTC m=+1.407949373 container died 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999-merged.mount: Deactivated successfully.
Nov 22 09:28:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 170 op/s
Nov 22 09:28:39 compute-0 podman[347390]: 2025-11-22 09:28:39.825537363 +0000 UTC m=+1.788161214 container remove 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:28:39 compute-0 systemd[1]: libpod-conmon-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Deactivated successfully.
Nov 22 09:28:39 compute-0 sudo[347281]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:28:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:28:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b429e693-67ef-4be8-97d7-61f805a1f34d does not exist
Nov 22 09:28:39 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4bc0da75-0191-41dd-aed0-3053c0233ab4 does not exist
Nov 22 09:28:40 compute-0 sudo[347515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:28:40 compute-0 sudo[347515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:40 compute-0 sudo[347515]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:40 compute-0 sudo[347540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:28:40 compute-0 sudo[347540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:28:40 compute-0 sudo[347540]: pam_unix(sudo:session): session closed for user root
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.107 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.110 253665 INFO nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Terminating instance
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.111 253665 DEBUG nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:28:40 compute-0 kernel: tap6eb31688-c2 (unregistering): left promiscuous mode
Nov 22 09:28:40 compute-0 NetworkManager[48920]: <info>  [1763803720.3294] device (tap6eb31688-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:28:40 compute-0 ovn_controller[152872]: 2025-11-22T09:28:40Z|00928|binding|INFO|Releasing lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 from this chassis (sb_readonly=0)
Nov 22 09:28:40 compute-0 ovn_controller[152872]: 2025-11-22T09:28:40Z|00929|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 down in Southbound
Nov 22 09:28:40 compute-0 ovn_controller[152872]: 2025-11-22T09:28:40Z|00930|binding|INFO|Removing iface tap6eb31688-c2 ovn-installed in OVS
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.349 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:cd:01 10.100.0.13'], port_security=['fa:16:3e:26:cd:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=6eb31688-c2e8-4f7b-a3df-3008c2065663) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.351 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb31688-c2e8-4f7b-a3df-3008c2065663 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.352 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 22 09:28:40 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Consumed 17.304s CPU time.
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc5243e-bd92-47dd-8529-ce34cc9cb7f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 systemd-machined[215941]: Machine qemu-107-instance-00000059 terminated.
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[770b8f38-b100-434b-acfb-9e10be6f817a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.434 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ffed57-f0d4-4bda-8ae6-89256fa5bca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.469 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b693dd-f543-4864-9a9f-4c5c12efa751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.492 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ace00daf-a2c2-4c50-a0e2-6e5329609e94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347576, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00f66fb6-0155-446a-ac7b-e0f6e29410f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347577, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347577, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.515 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.524 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.524 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.557 253665 INFO nova.virt.libvirt.driver [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance destroyed successfully.
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.558 253665 DEBUG nova.objects.instance [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.571 253665 DEBUG nova.virt.libvirt.vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:27:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:16Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.572 253665 DEBUG nova.network.os_vif_util [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.573 253665 DEBUG nova.network.os_vif_util [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.573 253665 DEBUG os_vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.575 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb31688-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.581 253665 INFO os_vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2')
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.632 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:40 compute-0 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:28:40 compute-0 ceph-mon[75021]: pgmap v1966: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 170 op/s
Nov 22 09:28:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:28:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.443 253665 INFO nova.virt.libvirt.driver [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deleting instance files /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_del
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.445 253665 INFO nova.virt.libvirt.driver [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deletion of /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_del complete
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.504 253665 INFO nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 2.39 seconds to destroy the instance on the hypervisor.
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG oslo.service.loopingcall [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:28:42 compute-0 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG nova.network.neutron [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:28:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 22 09:28:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 22 09:28:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 22 09:28:43 compute-0 ceph-mon[75021]: pgmap v1967: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 22 09:28:43 compute-0 ceph-mon[75021]: osdmap e273: 3 total, 3 up, 3 in
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.035 253665 DEBUG nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.036 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.036 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 DEBUG nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 WARNING nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received unexpected event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with vm_state active and task_state deleting.
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.301 253665 DEBUG nova.network.neutron [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.320 253665 INFO nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 0.81 seconds to deallocate network for instance.
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.379 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.379 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.459 253665 DEBUG nova.compute.manager [req-978e2a2d-c541-48c9-a762-6b2f19d0a872 req-6a79f31d-401d-4a4f-a2ee-07627187d617 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-deleted-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.461 253665 DEBUG oslo_concurrency.processutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 151 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 18 KiB/s wr, 76 op/s
Nov 22 09:28:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656220135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.945 253665 DEBUG oslo_concurrency.processutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.954 253665 DEBUG nova.compute.provider_tree [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.974 253665 DEBUG nova.scheduler.client.report [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:28:43 compute-0 nova_compute[253661]: 2025-11-22 09:28:43.993 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:44 compute-0 nova_compute[253661]: 2025-11-22 09:28:44.021 253665 INFO nova.scheduler.client.report [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208
Nov 22 09:28:44 compute-0 ovn_controller[152872]: 2025-11-22T09:28:44Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:67:ca 10.100.0.10
Nov 22 09:28:44 compute-0 nova_compute[253661]: 2025-11-22 09:28:44.090 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1656220135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.185 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.186 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.186 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.187 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.187 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.188 253665 INFO nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Terminating instance
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.189 253665 DEBUG nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.308 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.309 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:28:45 compute-0 ceph-mon[75021]: pgmap v1969: 305 pgs: 305 active+clean; 151 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 18 KiB/s wr, 76 op/s
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 22 KiB/s wr, 108 op/s
Nov 22 09:28:45 compute-0 podman[347630]: 2025-11-22 09:28:45.712630592 +0000 UTC m=+0.132130663 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 09:28:45 compute-0 kernel: tapb1fc96be-00 (unregistering): left promiscuous mode
Nov 22 09:28:45 compute-0 NetworkManager[48920]: <info>  [1763803725.7914] device (tapb1fc96be-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:28:45 compute-0 ovn_controller[152872]: 2025-11-22T09:28:45Z|00931|binding|INFO|Releasing lport b1fc96be-009e-46a8-829c-b7a0bc42af60 from this chassis (sb_readonly=0)
Nov 22 09:28:45 compute-0 ovn_controller[152872]: 2025-11-22T09:28:45Z|00932|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 down in Southbound
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 ovn_controller[152872]: 2025-11-22T09:28:45Z|00933|binding|INFO|Removing iface tapb1fc96be-00 ovn-installed in OVS
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.810 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '9', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.811 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.812 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e37df2c8-4dc4-418d-92f1-b394537a30da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee8a1e1-df1f-4445-a7b0-dcbc751e79a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.814 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da namespace which is not needed anymore
Nov 22 09:28:45 compute-0 nova_compute[253661]: 2025-11-22 09:28:45.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:45 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 22 09:28:45 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Consumed 15.148s CPU time.
Nov 22 09:28:45 compute-0 systemd-machined[215941]: Machine qemu-111-instance-00000056 terminated.
Nov 22 09:28:45 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : haproxy version is 2.8.14-c23fe91
Nov 22 09:28:45 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : path to executable is /usr/sbin/haproxy
Nov 22 09:28:45 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [WARNING]  (340087) : Exiting Master process...
Nov 22 09:28:45 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [ALERT]    (340087) : Current worker (340089) exited with code 143 (Terminated)
Nov 22 09:28:45 compute-0 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [WARNING]  (340087) : All workers exited. Exiting... (0)
Nov 22 09:28:45 compute-0 systemd[1]: libpod-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope: Deactivated successfully.
Nov 22 09:28:45 compute-0 podman[347680]: 2025-11-22 09:28:45.980970719 +0000 UTC m=+0.067952999 container died ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.021 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.042 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.043 253665 DEBUG nova.objects.instance [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.049 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.049 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.056 253665 DEBUG nova.virt.libvirt.vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:28:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:28:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.057 253665 DEBUG nova.network.os_vif_util [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.057 253665 DEBUG nova.network.os_vif_util [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.058 253665 DEBUG os_vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f-userdata-shm.mount: Deactivated successfully.
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.061 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1fc96be-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1bcf14e6b9291c965d546b0551f59e40ba033f0d7af433bf80952dca416338-merged.mount: Deactivated successfully.
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.065 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.068 253665 INFO os_vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')
Nov 22 09:28:46 compute-0 podman[347680]: 2025-11-22 09:28:46.107502701 +0000 UTC m=+0.194484981 container cleanup ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 09:28:46 compute-0 systemd[1]: libpod-conmon-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope: Deactivated successfully.
Nov 22 09:28:46 compute-0 podman[347736]: 2025-11-22 09:28:46.222168364 +0000 UTC m=+0.092261661 container remove ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.229 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2325dc0-2a2b-4077-9a1d-fdd0f4a5bdad]: (4, ('Sat Nov 22 09:28:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da (ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f)\ned7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f\nSat Nov 22 09:28:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da (ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f)\ned7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a5fa17-d6c7-4f43-8b10-4b27304498f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.234 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 kernel: tape37df2c8-40: left promiscuous mode
Nov 22 09:28:46 compute-0 nova_compute[253661]: 2025-11-22 09:28:46.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.255 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a244c078-8a2d-494b-8627-e6a358d7eb74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.273 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[feb7f373-9ddc-46b5-970e-e0e2a98231a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.275 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab9ddb9-1805-422b-b1c6-e2a8db2a82b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.297 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22b42623-3b55-4b31-be65-67cc86ce525c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640459, 'reachable_time': 35848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347750, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.301 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:28:46 compute-0 systemd[1]: run-netns-ovnmeta\x2de37df2c8\x2d4dc4\x2d418d\x2d92f1\x2db394537a30da.mount: Deactivated successfully.
Nov 22 09:28:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.301 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[830283f2-731b-4d07-b4e3-20453e623dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:28:46 compute-0 ceph-mon[75021]: pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 22 KiB/s wr, 108 op/s
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.510 253665 INFO nova.virt.libvirt.driver [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting instance files /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.511 253665 INFO nova.virt.libvirt.driver [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deletion of /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del complete
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.568 253665 INFO nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 2.38 seconds to destroy the instance on the hypervisor.
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.568 253665 DEBUG oslo.service.loopingcall [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.569 253665 DEBUG nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:28:47 compute-0 nova_compute[253661]: 2025-11-22 09:28:47.569 253665 DEBUG nova.network.neutron [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:28:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 83 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 19 KiB/s wr, 127 op/s
Nov 22 09:28:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 DEBUG nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 WARNING nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state deleting.
Nov 22 09:28:48 compute-0 ceph-mon[75021]: pgmap v1971: 305 pgs: 305 active+clean; 83 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 19 KiB/s wr, 127 op/s
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.835 253665 DEBUG nova.network.neutron [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.859 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 1.29 seconds to deallocate network for instance.
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.896 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.897 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:28:48 compute-0 nova_compute[253661]: 2025-11-22 09:28:48.962 253665 DEBUG oslo_concurrency.processutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.028 253665 DEBUG nova.compute.manager [req-519d92e7-a480-4ea1-a98a-d96da91354b9 req-a56b6706-3326-4b14-9116-1c22eae2617e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-deleted-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:28:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:28:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962236674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.469 253665 DEBUG oslo_concurrency.processutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.476 253665 DEBUG nova.compute.provider_tree [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.490 253665 DEBUG nova.scheduler.client.report [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.512 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.548 253665 INFO nova.scheduler.client.report [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance e4f9440c-7476-4022-8d08-1b3151a9db79
Nov 22 09:28:49 compute-0 nova_compute[253661]: 2025-11-22 09:28:49.613 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:28:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 09:28:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3962236674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:28:50 compute-0 nova_compute[253661]: 2025-11-22 09:28:50.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:50 compute-0 ceph-mon[75021]: pgmap v1972: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 09:28:51 compute-0 nova_compute[253661]: 2025-11-22 09:28:51.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:28:52
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:28:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:28:52 compute-0 ceph-mon[75021]: pgmap v1973: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 09:28:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 15 KiB/s wr, 103 op/s
Nov 22 09:28:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:28:54.312 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:28:54 compute-0 nova_compute[253661]: 2025-11-22 09:28:54.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:54 compute-0 nova_compute[253661]: 2025-11-22 09:28:54.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:54 compute-0 ceph-mon[75021]: pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 15 KiB/s wr, 103 op/s
Nov 22 09:28:55 compute-0 nova_compute[253661]: 2025-11-22 09:28:55.556 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803720.5546837, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:28:55 compute-0 nova_compute[253661]: 2025-11-22 09:28:55.556 253665 INFO nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Stopped (Lifecycle Event)
Nov 22 09:28:55 compute-0 nova_compute[253661]: 2025-11-22 09:28:55.577 253665 DEBUG nova.compute.manager [None req-cdba6d72-8ee4-4124-b401-10783f330d34 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:28:55 compute-0 nova_compute[253661]: 2025-11-22 09:28:55.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:28:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Nov 22 09:28:56 compute-0 nova_compute[253661]: 2025-11-22 09:28:56.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:28:57 compute-0 ceph-mon[75021]: pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Nov 22 09:28:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Nov 22 09:28:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:28:59 compute-0 ceph-mon[75021]: pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Nov 22 09:28:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 16 op/s
Nov 22 09:29:00 compute-0 nova_compute[253661]: 2025-11-22 09:29:00.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:01 compute-0 nova_compute[253661]: 2025-11-22 09:29:01.039 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803726.03845, e4f9440c-7476-4022-8d08-1b3151a9db79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:29:01 compute-0 nova_compute[253661]: 2025-11-22 09:29:01.040 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Stopped (Lifecycle Event)
Nov 22 09:29:01 compute-0 nova_compute[253661]: 2025-11-22 09:29:01.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:01 compute-0 nova_compute[253661]: 2025-11-22 09:29:01.086 253665 DEBUG nova.compute.manager [None req-7b5b0b7e-7ed4-4277-aa91-5d08589d4d6c - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:01 compute-0 ceph-mon[75021]: pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 16 op/s
Nov 22 09:29:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:29:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:29:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:03 compute-0 ceph-mon[75021]: pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 09:29:03 compute-0 nova_compute[253661]: 2025-11-22 09:29:03.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:03 compute-0 nova_compute[253661]: 2025-11-22 09:29:03.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:29:03 compute-0 nova_compute[253661]: 2025-11-22 09:29:03.262 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:29:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 09:29:05 compute-0 ceph-mon[75021]: pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 09:29:05 compute-0 nova_compute[253661]: 2025-11-22 09:29:05.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:06 compute-0 nova_compute[253661]: 2025-11-22 09:29:06.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:07 compute-0 ceph-mon[75021]: pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:09 compute-0 ceph-mon[75021]: pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:10 compute-0 podman[347776]: 2025-11-22 09:29:10.382650754 +0000 UTC m=+0.060759279 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 09:29:10 compute-0 podman[347777]: 2025-11-22 09:29:10.383518726 +0000 UTC m=+0.059889817 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:29:10 compute-0 nova_compute[253661]: 2025-11-22 09:29:10.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.412 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.413 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.428 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:29:11 compute-0 ceph-mon[75021]: pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.514 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.514 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.523 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.524 253665 INFO nova.compute.claims [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:29:11 compute-0 nova_compute[253661]: 2025-11-22 09:29:11.660 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:29:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3116236503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.152 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.159 253665 DEBUG nova.compute.provider_tree [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.178 253665 DEBUG nova.scheduler.client.report [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.205 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.206 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.268 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.269 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.300 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.316 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:29:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:29:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:29:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:29:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.431 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.433 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.433 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating image(s)
Nov 22 09:29:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3116236503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:29:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.467 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.495 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.517 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.521 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.606 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.608 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.609 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.609 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.638 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.643 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 db811577-e691-40e3-9e31-1a0a5929133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:12 compute-0 nova_compute[253661]: 2025-11-22 09:29:12.812 253665 DEBUG nova.policy [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a91006ab4394e10a534a0887a0d170a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:29:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:13 compute-0 ceph-mon[75021]: pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:29:13 compute-0 nova_compute[253661]: 2025-11-22 09:29:13.646 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Successfully created port: f5951cb8-b8f6-4d52-850f-d4bdb04390ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:29:13 compute-0 nova_compute[253661]: 2025-11-22 09:29:13.661 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 db811577-e691-40e3-9e31-1a0a5929133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 45 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 178 KiB/s wr, 2 op/s
Nov 22 09:29:13 compute-0 nova_compute[253661]: 2025-11-22 09:29:13.729 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] resizing rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.505 253665 DEBUG nova.objects.instance [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'migration_context' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.521 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Ensure instance console log exists: /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:14 compute-0 nova_compute[253661]: 2025-11-22 09:29:14.523 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:14 compute-0 ceph-mon[75021]: pgmap v1984: 305 pgs: 305 active+clean; 45 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 178 KiB/s wr, 2 op/s
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 63 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 392 KiB/s wr, 14 op/s
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.712 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Successfully updated port: f5951cb8-b8f6-4d52-850f-d4bdb04390ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.729 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.729 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquired lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.730 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.963 253665 DEBUG nova.compute.manager [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-changed-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.964 253665 DEBUG nova.compute.manager [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Refreshing instance network info cache due to event network-changed-f5951cb8-b8f6-4d52-850f-d4bdb04390ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:29:15 compute-0 nova_compute[253661]: 2025-11-22 09:29:15.964 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:29:16 compute-0 nova_compute[253661]: 2025-11-22 09:29:16.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:16 compute-0 nova_compute[253661]: 2025-11-22 09:29:16.141 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:29:16 compute-0 podman[348002]: 2025-11-22 09:29:16.414722868 +0000 UTC m=+0.103586766 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:29:16 compute-0 ceph-mon[75021]: pgmap v1985: 305 pgs: 305 active+clean; 63 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 392 KiB/s wr, 14 op/s
Nov 22 09:29:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 80 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 22 09:29:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.004 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.043 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Releasing lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.044 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance network_info: |[{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.046 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.046 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Refreshing network info cache for port f5951cb8-b8f6-4d52-850f-d4bdb04390ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.049 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start _get_guest_xml network_info=[{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.054 253665 WARNING nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.063 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.063 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.067 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.072 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:29:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338773131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.561 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.590 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:18 compute-0 nova_compute[253661]: 2025-11-22 09:29:18.595 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:18 compute-0 ceph-mon[75021]: pgmap v1986: 305 pgs: 305 active+clean; 80 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 22 09:29:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2338773131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:29:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:29:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613923299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.060 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.062 253665 DEBUG nova.virt.libvirt.vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1369579025',owner_user
_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:12Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.063 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.064 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.065 253665 DEBUG nova.objects.instance [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'pci_devices' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.110 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <uuid>db811577-e691-40e3-9e31-1a0a5929133d</uuid>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <name>instance-0000005c</name>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-724968197</nova:name>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:29:18</nova:creationTime>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:user uuid="0a91006ab4394e10a534a0887a0d170a">tempest-ServerAddressesNegativeTestJSON-1369579025-project-member</nova:user>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:project uuid="aaac6ab98bec41f7ac2ec49229374dc0">tempest-ServerAddressesNegativeTestJSON-1369579025</nova:project>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <nova:port uuid="f5951cb8-b8f6-4d52-850f-d4bdb04390ad">
Nov 22 09:29:19 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <system>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="serial">db811577-e691-40e3-9e31-1a0a5929133d</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="uuid">db811577-e691-40e3-9e31-1a0a5929133d</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </system>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <os>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </os>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <features>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </features>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/db811577-e691-40e3-9e31-1a0a5929133d_disk">
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/db811577-e691-40e3-9e31-1a0a5929133d_disk.config">
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </source>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:29:19 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:33:74:7d"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <target dev="tapf5951cb8-b8"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/console.log" append="off"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <video>
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </video>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:29:19 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:29:19 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:29:19 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:29:19 compute-0 nova_compute[253661]: </domain>
Nov 22 09:29:19 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Preparing to wait for external event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.114 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.114 253665 DEBUG nova.virt.libvirt.vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1369579025',
owner_user_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:12Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.115 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.116 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.116 253665 DEBUG os_vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.117 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.118 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5951cb8-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5951cb8-b8, col_values=(('external_ids', {'iface-id': 'f5951cb8-b8f6-4d52-850f-d4bdb04390ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:74:7d', 'vm-uuid': 'db811577-e691-40e3-9e31-1a0a5929133d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:19 compute-0 NetworkManager[48920]: <info>  [1763803759.1264] manager: (tapf5951cb8-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.134 253665 INFO os_vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8')
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No VIF found with MAC fa:16:3e:33:74:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.209 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Using config drive
Nov 22 09:29:19 compute-0 nova_compute[253661]: 2025-11-22 09:29:19.238 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:29:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3613923299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.068 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating config drive at /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.074 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpttf2w44z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.226 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpttf2w44z" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.255 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.259 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config db811577-e691-40e3-9e31-1a0a5929133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.431 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config db811577-e691-40e3-9e31-1a0a5929133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.433 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deleting local config drive /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config because it was imported into RBD.
Nov 22 09:29:20 compute-0 kernel: tapf5951cb8-b8: entered promiscuous mode
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.4968] manager: (tapf5951cb8-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Nov 22 09:29:20 compute-0 ovn_controller[152872]: 2025-11-22T09:29:20Z|00934|binding|INFO|Claiming lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad for this chassis.
Nov 22 09:29:20 compute-0 ovn_controller[152872]: 2025-11-22T09:29:20Z|00935|binding|INFO|f5951cb8-b8f6-4d52-850f-d4bdb04390ad: Claiming fa:16:3e:33:74:7d 10.100.0.4
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.542 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:74:7d 10.100.0.4'], port_security=['fa:16:3e:33:74:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db811577-e691-40e3-9e31-1a0a5929133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fdff023-52d5-4ffb-998e-d0b2022a0bdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afa164a2-2d18-485c-9cf3-c424c8564fdf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5951cb8-b8f6-4d52-850f-d4bdb04390ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:29:20 compute-0 systemd-machined[215941]: New machine qemu-112-instance-0000005c.
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.544 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5951cb8-b8f6-4d52-850f-d4bdb04390ad in datapath 769b5006-dcff-42dc-96bc-e9baa5d2ce51 bound to our chassis
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.545 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 769b5006-dcff-42dc-96bc-e9baa5d2ce51
Nov 22 09:29:20 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005c.
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.563 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[72f7e6b0-293e-4afb-a535-a7706cf6a3de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.565 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap769b5006-d1 in ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.568 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap769b5006-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6255843-e986-4de3-aa99-dfe3f7cf2c6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f55d29af-5c18-4de1-881e-4984383519ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 systemd-udevd[348167]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.585 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5b81ff-76e5-4354-8b8a-5193ec265013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.5972] device (tapf5951cb8-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.5982] device (tapf5951cb8-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 ovn_controller[152872]: 2025-11-22T09:29:20Z|00936|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad ovn-installed in OVS
Nov 22 09:29:20 compute-0 ovn_controller[152872]: 2025-11-22T09:29:20Z|00937|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad up in Southbound
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6770d98-49d6-409e-84fb-50f561df00a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.663 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75fcc5ea-d96e-4ac6-a311-9b5cd3ab6fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.6722] manager: (tap769b5006-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.670 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[faa8687e-e2ff-4513-83e0-9af19ed8cb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.714 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e99040c3-f272-44e1-ae75-46388d7d0132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.718 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[014fbb27-4fac-4a17-bb21-8cbe1853c55e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.7515] device (tap769b5006-d0): carrier: link connected
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.757 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d379ac-d240-4095-bba3-b95a1c5909d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3974dfa-dda9-48b8-9f52-8c0cbf3b0dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769b5006-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:2c:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661148, 'reachable_time': 37605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348199, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.804 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f89e8d1-9ca9-4b8c-9e19-efcf6617ea23]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe86:2cd1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661148, 'tstamp': 661148}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348200, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ceph-mon[75021]: pgmap v1987: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c317284f-64d3-4fc8-bc3e-68dc9512e52d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769b5006-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:2c:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661148, 'reachable_time': 37605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348201, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.833 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updated VIF entry in instance network info cache for port f5951cb8-b8f6-4d52-850f-d4bdb04390ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.834 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.846 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f42470e7-d659-4ab4-b8cb-c4edbcd76b35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.970 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78e3de3b-f24a-40b8-a539-93a9f4a9e97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769b5006-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap769b5006-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:20 compute-0 kernel: tap769b5006-d0: entered promiscuous mode
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.979 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap769b5006-d0, col_values=(('external_ids', {'iface-id': '3a326757-71e3-4dd7-8bdc-9d3406640135'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:20 compute-0 NetworkManager[48920]: <info>  [1763803760.9801] manager: (tap769b5006-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 22 09:29:20 compute-0 ovn_controller[152872]: 2025-11-22T09:29:20Z|00938|binding|INFO|Releasing lport 3a326757-71e3-4dd7-8bdc-9d3406640135 from this chassis (sb_readonly=0)
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.982 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1e5d8c-982c-420c-82eb-a4cce598a92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.988 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-769b5006-dcff-42dc-96bc-e9baa5d2ce51
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 769b5006-dcff-42dc-96bc-e9baa5d2ce51
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:29:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.989 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'env', 'PROCESS_TAG=haproxy-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/769b5006-dcff-42dc-96bc-e9baa5d2ce51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.990 253665 DEBUG nova.compute.manager [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.990 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG nova.compute.manager [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Processing event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:29:20 compute-0 nova_compute[253661]: 2025-11-22 09:29:20.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.056 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.058 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0557299, db811577-e691-40e3-9e31-1a0a5929133d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.058 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Started (Lifecycle Event)
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.068 253665 INFO nova.virt.libvirt.driver [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance spawned successfully.
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.069 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.085 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.090 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.092 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.092 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.132 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0560472, db811577-e691-40e3-9e31-1a0a5929133d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Paused (Lifecycle Event)
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.167 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0650063, db811577-e691-40e3-9e31-1a0a5929133d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.167 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Resumed (Lifecycle Event)
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.176 253665 INFO nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 8.74 seconds to spawn the instance on the hypervisor.
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.176 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.185 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.188 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.215 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.234 253665 INFO nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 9.75 seconds to build instance.
Nov 22 09:29:21 compute-0 nova_compute[253661]: 2025-11-22 09:29:21.249 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:21 compute-0 podman[348275]: 2025-11-22 09:29:21.379642669 +0000 UTC m=+0.056275786 container create 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:29:21 compute-0 systemd[1]: Started libpod-conmon-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope.
Nov 22 09:29:21 compute-0 podman[348275]: 2025-11-22 09:29:21.34665847 +0000 UTC m=+0.023291597 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:29:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb22e7e7c58e8491e09d953a6e5f1185f7a55b65c910568c494974d374016cd0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:21 compute-0 podman[348275]: 2025-11-22 09:29:21.467784785 +0000 UTC m=+0.144417922 container init 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:29:21 compute-0 podman[348275]: 2025-11-22 09:29:21.477253114 +0000 UTC m=+0.153886221 container start 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:29:21 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : New worker (348297) forked
Nov 22 09:29:21 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : Loading success.
Nov 22 09:29:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:22 compute-0 ceph-mon[75021]: pgmap v1988: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:29:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.199 253665 DEBUG nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.200 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.200 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.201 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.201 253665 DEBUG nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:29:23 compute-0 nova_compute[253661]: 2025-11-22 09:29:23.202 253665 WARNING nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received unexpected event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with vm_state active and task_state None.
Nov 22 09:29:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.361 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.363 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.364 253665 INFO nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Terminating instance
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.366 253665 DEBUG nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:29:24 compute-0 kernel: tapf5951cb8-b8 (unregistering): left promiscuous mode
Nov 22 09:29:24 compute-0 NetworkManager[48920]: <info>  [1763803764.4171] device (tapf5951cb8-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 ovn_controller[152872]: 2025-11-22T09:29:24Z|00939|binding|INFO|Releasing lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad from this chassis (sb_readonly=0)
Nov 22 09:29:24 compute-0 ovn_controller[152872]: 2025-11-22T09:29:24Z|00940|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad down in Southbound
Nov 22 09:29:24 compute-0 ovn_controller[152872]: 2025-11-22T09:29:24Z|00941|binding|INFO|Removing iface tapf5951cb8-b8 ovn-installed in OVS
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:74:7d 10.100.0.4'], port_security=['fa:16:3e:33:74:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db811577-e691-40e3-9e31-1a0a5929133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fdff023-52d5-4ffb-998e-d0b2022a0bdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afa164a2-2d18-485c-9cf3-c424c8564fdf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5951cb8-b8f6-4d52-850f-d4bdb04390ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.442 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5951cb8-b8f6-4d52-850f-d4bdb04390ad in datapath 769b5006-dcff-42dc-96bc-e9baa5d2ce51 unbound from our chassis
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.443 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769b5006-dcff-42dc-96bc-e9baa5d2ce51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.445 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3c67b9-4bc4-4059-877f-fa9dabdad386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.446 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 namespace which is not needed anymore
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 22 09:29:24 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Consumed 3.854s CPU time.
Nov 22 09:29:24 compute-0 systemd-machined[215941]: Machine qemu-112-instance-0000005c terminated.
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.589 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.594 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.607 253665 INFO nova.virt.libvirt.driver [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance destroyed successfully.
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.607 253665 DEBUG nova.objects.instance [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'resources' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : haproxy version is 2.8.14-c23fe91
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : path to executable is /usr/sbin/haproxy
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : Exiting Master process...
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : Exiting Master process...
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [ALERT]    (348295) : Current worker (348297) exited with code 143 (Terminated)
Nov 22 09:29:24 compute-0 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : All workers exited. Exiting... (0)
Nov 22 09:29:24 compute-0 systemd[1]: libpod-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope: Deactivated successfully.
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.637 253665 DEBUG nova.virt.libvirt.vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:29:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1369579025',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:29:21Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.638 253665 DEBUG nova.network.os_vif_util [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.639 253665 DEBUG nova.network.os_vif_util [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.639 253665 DEBUG os_vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5951cb8-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:24 compute-0 podman[348331]: 2025-11-22 09:29:24.642290828 +0000 UTC m=+0.091999704 container died 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.650 253665 INFO os_vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8')
Nov 22 09:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5-userdata-shm.mount: Deactivated successfully.
Nov 22 09:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb22e7e7c58e8491e09d953a6e5f1185f7a55b65c910568c494974d374016cd0-merged.mount: Deactivated successfully.
Nov 22 09:29:24 compute-0 podman[348331]: 2025-11-22 09:29:24.735803169 +0000 UTC m=+0.185512045 container cleanup 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:29:24 compute-0 systemd[1]: libpod-conmon-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope: Deactivated successfully.
Nov 22 09:29:24 compute-0 podman[348390]: 2025-11-22 09:29:24.820801486 +0000 UTC m=+0.058608454 container remove 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.827 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8037c515-5207-46b6-aa35-286f2517c9f3]: (4, ('Sat Nov 22 09:29:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 (540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5)\n540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5\nSat Nov 22 09:29:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 (540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5)\n540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[612d635d-56a5-4c7c-a819-a87e8a0ca19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.832 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769b5006-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 kernel: tap769b5006-d0: left promiscuous mode
Nov 22 09:29:24 compute-0 ceph-mon[75021]: pgmap v1989: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 22 09:29:24 compute-0 nova_compute[253661]: 2025-11-22 09:29:24.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.855 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd1cc3b-33d0-499d-9afb-8ab9c65ade8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[723de831-a32c-463b-8be2-480f39eb1fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b468a5f-d0bc-4904-8855-eda45cf0bae3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf17c8a-1252-4e72-a867-1623febff00b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661139, 'reachable_time': 17089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348405, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.900 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:29:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.900 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f426ee1e-cdc9-40d8-97bd-adb4c1cf1902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:29:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d769b5006\x2ddcff\x2d42dc\x2d96bc\x2de9baa5d2ce51.mount: Deactivated successfully.
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.176 253665 INFO nova.virt.libvirt.driver [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deleting instance files /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d_del
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.177 253665 INFO nova.virt.libvirt.driver [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deletion of /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d_del complete
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.324 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.327 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.327 253665 WARNING nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received unexpected event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with vm_state active and task_state deleting.
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.808 253665 INFO nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 1.44 seconds to destroy the instance on the hypervisor.
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.809 253665 DEBUG oslo.service.loopingcall [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.809 253665 DEBUG nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:29:25 compute-0 nova_compute[253661]: 2025-11-22 09:29:25.810 253665 DEBUG nova.network.neutron [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:29:27 compute-0 ceph-mon[75021]: pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.202 253665 DEBUG nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-deleted-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.202 253665 INFO nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Neutron deleted interface f5951cb8-b8f6-4d52-850f-d4bdb04390ad; detaching it from the instance and deleting it from the info cache
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.203 253665 DEBUG nova.network.neutron [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.275 253665 DEBUG nova.network.neutron [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.290 253665 INFO nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 1.48 seconds to deallocate network for instance.
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.298 253665 DEBUG nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Detach interface failed, port_id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad, reason: Instance db811577-e691-40e3-9e31-1a0a5929133d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.335 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.336 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.413 253665 DEBUG oslo_concurrency.processutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 71 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Nov 22 09:29:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:29:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123438005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.900 253665 DEBUG oslo_concurrency.processutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.907 253665 DEBUG nova.compute.provider_tree [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.921 253665 DEBUG nova.scheduler.client.report [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.948 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:27 compute-0 nova_compute[253661]: 2025-11-22 09:29:27.979 253665 INFO nova.scheduler.client.report [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Deleted allocations for instance db811577-e691-40e3-9e31-1a0a5929133d
Nov 22 09:29:28 compute-0 nova_compute[253661]: 2025-11-22 09:29:28.038 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:28.198 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:29:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:28.198 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:29:28 compute-0 nova_compute[253661]: 2025-11-22 09:29:28.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3123438005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:29 compute-0 ceph-mon[75021]: pgmap v1991: 305 pgs: 305 active+clean; 71 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Nov 22 09:29:29 compute-0 nova_compute[253661]: 2025-11-22 09:29:29.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 611 KiB/s wr, 109 op/s
Nov 22 09:29:30 compute-0 nova_compute[253661]: 2025-11-22 09:29:30.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:31 compute-0 nova_compute[253661]: 2025-11-22 09:29:31.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 22 09:29:31 compute-0 ceph-mon[75021]: pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 611 KiB/s wr, 109 op/s
Nov 22 09:29:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 22 09:29:31 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 22 09:29:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Nov 22 09:29:32 compute-0 nova_compute[253661]: 2025-11-22 09:29:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:32 compute-0 nova_compute[253661]: 2025-11-22 09:29:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:29:32 compute-0 nova_compute[253661]: 2025-11-22 09:29:32.268 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:29:32 compute-0 ceph-mon[75021]: osdmap e274: 3 total, 3 up, 3 in
Nov 22 09:29:32 compute-0 nova_compute[253661]: 2025-11-22 09:29:32.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:33 compute-0 ceph-mon[75021]: pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Nov 22 09:29:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 22 09:29:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:29:34.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:29:34 compute-0 nova_compute[253661]: 2025-11-22 09:29:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:34 compute-0 nova_compute[253661]: 2025-11-22 09:29:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 22 09:29:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 22 09:29:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 22 09:29:34 compute-0 nova_compute[253661]: 2025-11-22 09:29:34.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:35 compute-0 nova_compute[253661]: 2025-11-22 09:29:35.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:35 compute-0 nova_compute[253661]: 2025-11-22 09:29:35.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:35 compute-0 ceph-mon[75021]: pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 22 09:29:35 compute-0 ceph-mon[75021]: osdmap e275: 3 total, 3 up, 3 in
Nov 22 09:29:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.7 KiB/s wr, 66 op/s
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 22 09:29:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 22 09:29:36 compute-0 ceph-mon[75021]: pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.7 KiB/s wr, 66 op/s
Nov 22 09:29:36 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 22 09:29:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:29:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500487013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:36 compute-0 nova_compute[253661]: 2025-11-22 09:29:36.786 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.014 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3916MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:29:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011873185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.582 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.590 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.607 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.648 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:29:37 compute-0 nova_compute[253661]: 2025-11-22 09:29:37.649 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.5 KiB/s wr, 87 op/s
Nov 22 09:29:37 compute-0 ceph-mon[75021]: osdmap e276: 3 total, 3 up, 3 in
Nov 22 09:29:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2500487013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3011873185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 22 09:29:38 compute-0 ceph-mon[75021]: pgmap v1999: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.5 KiB/s wr, 87 op/s
Nov 22 09:29:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 22 09:29:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 22 09:29:39 compute-0 nova_compute[253661]: 2025-11-22 09:29:39.607 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803764.6051538, db811577-e691-40e3-9e31-1a0a5929133d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:29:39 compute-0 nova_compute[253661]: 2025-11-22 09:29:39.608 253665 INFO nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Stopped (Lifecycle Event)
Nov 22 09:29:39 compute-0 nova_compute[253661]: 2025-11-22 09:29:39.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 5.2 KiB/s wr, 77 op/s
Nov 22 09:29:39 compute-0 nova_compute[253661]: 2025-11-22 09:29:39.833 253665 DEBUG nova.compute.manager [None req-7cd6ef78-8ba4-43ca-9cac-89b0fb77aae3 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:29:40 compute-0 ceph-mon[75021]: osdmap e277: 3 total, 3 up, 3 in
Nov 22 09:29:40 compute-0 sudo[348475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:40 compute-0 sudo[348475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:40 compute-0 sudo[348475]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:40 compute-0 sudo[348500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:29:40 compute-0 sudo[348500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:40 compute-0 sudo[348500]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:40 compute-0 sudo[348525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:40 compute-0 sudo[348525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:40 compute-0 sudo[348525]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:40 compute-0 sudo[348550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:29:40 compute-0 sudo[348550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:40 compute-0 nova_compute[253661]: 2025-11-22 09:29:40.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:40 compute-0 sudo[348550]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 03cd26b4-32bf-470c-a3b3-e30aa2f2f6c6 does not exist
Nov 22 09:29:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 86e15cd9-bb53-457c-8c9e-474a0a37cfde does not exist
Nov 22 09:29:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c17d16a4-40ed-4880-bcf5-98ab9b990068 does not exist
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:29:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:29:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:29:40 compute-0 sudo[348605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:40 compute-0 sudo[348605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:40 compute-0 sudo[348605]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:41 compute-0 podman[348629]: 2025-11-22 09:29:41.059395152 +0000 UTC m=+0.060282607 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 09:29:41 compute-0 sudo[348642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:29:41 compute-0 podman[348630]: 2025-11-22 09:29:41.068551892 +0000 UTC m=+0.068959105 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:29:41 compute-0 sudo[348642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:41 compute-0 sudo[348642]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:41 compute-0 sudo[348692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:41 compute-0 sudo[348692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:41 compute-0 sudo[348692]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:41 compute-0 sudo[348718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:29:41 compute-0 sudo[348718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:41 compute-0 ceph-mon[75021]: pgmap v2001: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 5.2 KiB/s wr, 77 op/s
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:29:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:29:41 compute-0 podman[348782]: 2025-11-22 09:29:41.573355525 +0000 UTC m=+0.027609055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Nov 22 09:29:41 compute-0 podman[348782]: 2025-11-22 09:29:41.94783087 +0000 UTC m=+0.402084380 container create 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:29:42 compute-0 systemd[1]: Started libpod-conmon-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope.
Nov 22 09:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:42 compute-0 podman[348782]: 2025-11-22 09:29:42.09612031 +0000 UTC m=+0.550373870 container init 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:29:42 compute-0 podman[348782]: 2025-11-22 09:29:42.110657636 +0000 UTC m=+0.564911146 container start 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:29:42 compute-0 zealous_poitras[348798]: 167 167
Nov 22 09:29:42 compute-0 systemd[1]: libpod-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope: Deactivated successfully.
Nov 22 09:29:42 compute-0 conmon[348798]: conmon 8fd321adf2da83f617cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope/container/memory.events
Nov 22 09:29:42 compute-0 podman[348782]: 2025-11-22 09:29:42.196387472 +0000 UTC m=+0.650641002 container attach 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:29:42 compute-0 podman[348782]: 2025-11-22 09:29:42.197885509 +0000 UTC m=+0.652139049 container died 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-679e8721c899451c05957f707b30798075def4850104fac1ec6b02a7cf16701e-merged.mount: Deactivated successfully.
Nov 22 09:29:42 compute-0 nova_compute[253661]: 2025-11-22 09:29:42.651 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:42 compute-0 podman[348782]: 2025-11-22 09:29:42.690045744 +0000 UTC m=+1.144299264 container remove 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:42 compute-0 systemd[1]: libpod-conmon-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope: Deactivated successfully.
Nov 22 09:29:42 compute-0 podman[348823]: 2025-11-22 09:29:42.87879943 +0000 UTC m=+0.055387983 container create 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:29:42 compute-0 systemd[1]: Started libpod-conmon-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope.
Nov 22 09:29:42 compute-0 podman[348823]: 2025-11-22 09:29:42.851617687 +0000 UTC m=+0.028206340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:42 compute-0 podman[348823]: 2025-11-22 09:29:42.972947008 +0000 UTC m=+0.149535571 container init 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:42 compute-0 podman[348823]: 2025-11-22 09:29:42.979944264 +0000 UTC m=+0.156532807 container start 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:42 compute-0 podman[348823]: 2025-11-22 09:29:42.985533914 +0000 UTC m=+0.162122457 container attach 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:29:43 compute-0 ceph-mon[75021]: pgmap v2002: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Nov 22 09:29:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.4 KiB/s wr, 87 op/s
Nov 22 09:29:44 compute-0 crazy_banach[348839]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:29:44 compute-0 crazy_banach[348839]: --> relative data size: 1.0
Nov 22 09:29:44 compute-0 crazy_banach[348839]: --> All data devices are unavailable
Nov 22 09:29:44 compute-0 systemd[1]: libpod-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Deactivated successfully.
Nov 22 09:29:44 compute-0 systemd[1]: libpod-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Consumed 1.083s CPU time.
Nov 22 09:29:44 compute-0 podman[348823]: 2025-11-22 09:29:44.103018073 +0000 UTC m=+1.279606656 container died 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837-merged.mount: Deactivated successfully.
Nov 22 09:29:44 compute-0 podman[348823]: 2025-11-22 09:29:44.195214121 +0000 UTC m=+1.371802674 container remove 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:29:44 compute-0 systemd[1]: libpod-conmon-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Deactivated successfully.
Nov 22 09:29:44 compute-0 nova_compute[253661]: 2025-11-22 09:29:44.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:29:44 compute-0 sudo[348718]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:44 compute-0 sudo[348881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:44 compute-0 sudo[348881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:44 compute-0 sudo[348881]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:44 compute-0 sudo[348906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:29:44 compute-0 sudo[348906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:44 compute-0 sudo[348906]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:44 compute-0 sudo[348931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:44 compute-0 sudo[348931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:44 compute-0 sudo[348931]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:44 compute-0 sudo[348956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:29:44 compute-0 sudo[348956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:44 compute-0 nova_compute[253661]: 2025-11-22 09:29:44.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:44 compute-0 podman[349020]: 2025-11-22 09:29:44.814085002 +0000 UTC m=+0.023575033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:44 compute-0 podman[349020]: 2025-11-22 09:29:44.910373383 +0000 UTC m=+0.119863394 container create e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:45 compute-0 systemd[1]: Started libpod-conmon-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope.
Nov 22 09:29:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:45 compute-0 podman[349020]: 2025-11-22 09:29:45.508132244 +0000 UTC m=+0.717622285 container init e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:29:45 compute-0 podman[349020]: 2025-11-22 09:29:45.518036153 +0000 UTC m=+0.727526164 container start e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:29:45 compute-0 ceph-mon[75021]: pgmap v2003: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.4 KiB/s wr, 87 op/s
Nov 22 09:29:45 compute-0 relaxed_bell[349036]: 167 167
Nov 22 09:29:45 compute-0 systemd[1]: libpod-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope: Deactivated successfully.
Nov 22 09:29:45 compute-0 podman[349020]: 2025-11-22 09:29:45.564347807 +0000 UTC m=+0.773837828 container attach e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:45 compute-0 podman[349020]: 2025-11-22 09:29:45.565184909 +0000 UTC m=+0.774674950 container died e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:29:45 compute-0 nova_compute[253661]: 2025-11-22 09:29:45.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.2 KiB/s wr, 79 op/s
Nov 22 09:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce6d33bfd7537f9406b69f80e01314817ca25645ce6ed4dd35d4eff4ea1997f-merged.mount: Deactivated successfully.
Nov 22 09:29:46 compute-0 podman[349020]: 2025-11-22 09:29:46.933017133 +0000 UTC m=+2.142507184 container remove e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:29:47 compute-0 systemd[1]: libpod-conmon-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope: Deactivated successfully.
Nov 22 09:29:47 compute-0 podman[349066]: 2025-11-22 09:29:47.097426376 +0000 UTC m=+0.023079710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:47 compute-0 ceph-mon[75021]: pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.2 KiB/s wr, 79 op/s
Nov 22 09:29:47 compute-0 podman[349066]: 2025-11-22 09:29:47.405967015 +0000 UTC m=+0.331620329 container create 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:29:47 compute-0 systemd[1]: Started libpod-conmon-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope.
Nov 22 09:29:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:47 compute-0 podman[349055]: 2025-11-22 09:29:47.571710673 +0000 UTC m=+0.530556832 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:29:47 compute-0 podman[349066]: 2025-11-22 09:29:47.593617004 +0000 UTC m=+0.519270338 container init 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:29:47 compute-0 podman[349066]: 2025-11-22 09:29:47.601742397 +0000 UTC m=+0.527395711 container start 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:29:47 compute-0 podman[349066]: 2025-11-22 09:29:47.612372245 +0000 UTC m=+0.538025589 container attach 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Nov 22 09:29:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 22 09:29:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 22 09:29:48 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]: {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     "0": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "devices": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "/dev/loop3"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             ],
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_name": "ceph_lv0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_size": "21470642176",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "name": "ceph_lv0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "tags": {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_name": "ceph",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.crush_device_class": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.encrypted": "0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_id": "0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.vdo": "0"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             },
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "vg_name": "ceph_vg0"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         }
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     ],
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     "1": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "devices": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "/dev/loop4"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             ],
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_name": "ceph_lv1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_size": "21470642176",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "name": "ceph_lv1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "tags": {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_name": "ceph",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.crush_device_class": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.encrypted": "0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_id": "1",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.vdo": "0"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             },
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "vg_name": "ceph_vg1"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         }
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     ],
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     "2": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "devices": [
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "/dev/loop5"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             ],
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_name": "ceph_lv2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_size": "21470642176",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "name": "ceph_lv2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "tags": {
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.cluster_name": "ceph",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.crush_device_class": "",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.encrypted": "0",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osd_id": "2",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:                 "ceph.vdo": "0"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             },
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "type": "block",
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:             "vg_name": "ceph_vg2"
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:         }
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]:     ]
Nov 22 09:29:48 compute-0 nostalgic_stonebraker[349100]: }
Nov 22 09:29:48 compute-0 systemd[1]: libpod-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope: Deactivated successfully.
Nov 22 09:29:48 compute-0 podman[349066]: 2025-11-22 09:29:48.471088397 +0000 UTC m=+1.396741711 container died 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4-merged.mount: Deactivated successfully.
Nov 22 09:29:48 compute-0 podman[349066]: 2025-11-22 09:29:48.556113315 +0000 UTC m=+1.481766629 container remove 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:29:48 compute-0 systemd[1]: libpod-conmon-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope: Deactivated successfully.
Nov 22 09:29:48 compute-0 sudo[348956]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:48 compute-0 sudo[349123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:48 compute-0 sudo[349123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:48 compute-0 sudo[349123]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:48 compute-0 sudo[349148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:29:48 compute-0 sudo[349148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:48 compute-0 sudo[349148]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:48 compute-0 sudo[349173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:48 compute-0 sudo[349173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:48 compute-0 sudo[349173]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:48 compute-0 sudo[349198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:29:48 compute-0 sudo[349198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:48 compute-0 ceph-mon[75021]: pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Nov 22 09:29:48 compute-0 ceph-mon[75021]: osdmap e278: 3 total, 3 up, 3 in
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.213967727 +0000 UTC m=+0.044694225 container create 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:29:49 compute-0 systemd[1]: Started libpod-conmon-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope.
Nov 22 09:29:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.287400273 +0000 UTC m=+0.118126791 container init 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.191886871 +0000 UTC m=+0.022613389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.294364508 +0000 UTC m=+0.125091016 container start 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:29:49 compute-0 crazy_carson[349280]: 167 167
Nov 22 09:29:49 compute-0 systemd[1]: libpod-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope: Deactivated successfully.
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.303443507 +0000 UTC m=+0.134170015 container attach 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.304014401 +0000 UTC m=+0.134740889 container died 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:29:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 22 09:29:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 22 09:29:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 22 09:29:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-74cce09cd0ce7529b759ddf345369b6f5d06397cf770b1325e922c3619d5b5cc-merged.mount: Deactivated successfully.
Nov 22 09:29:49 compute-0 podman[349264]: 2025-11-22 09:29:49.365415095 +0000 UTC m=+0.196141593 container remove 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:29:49 compute-0 systemd[1]: libpod-conmon-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope: Deactivated successfully.
Nov 22 09:29:49 compute-0 podman[349304]: 2025-11-22 09:29:49.529724436 +0000 UTC m=+0.044263864 container create 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:29:49 compute-0 systemd[1]: Started libpod-conmon-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope.
Nov 22 09:29:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:29:49 compute-0 podman[349304]: 2025-11-22 09:29:49.511457057 +0000 UTC m=+0.025996305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:29:49 compute-0 podman[349304]: 2025-11-22 09:29:49.615740729 +0000 UTC m=+0.130279977 container init 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:29:49 compute-0 podman[349304]: 2025-11-22 09:29:49.625778841 +0000 UTC m=+0.140318069 container start 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:29:49 compute-0 podman[349304]: 2025-11-22 09:29:49.634945112 +0000 UTC m=+0.149484340 container attach 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:29:49 compute-0 nova_compute[253661]: 2025-11-22 09:29:49.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 61 op/s
Nov 22 09:29:50 compute-0 ceph-mon[75021]: osdmap e279: 3 total, 3 up, 3 in
Nov 22 09:29:50 compute-0 nova_compute[253661]: 2025-11-22 09:29:50.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]: {
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_id": 1,
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "type": "bluestore"
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     },
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_id": 0,
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "type": "bluestore"
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     },
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_id": 2,
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:         "type": "bluestore"
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]:     }
Nov 22 09:29:50 compute-0 upbeat_perlman[349321]: }
Nov 22 09:29:50 compute-0 systemd[1]: libpod-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Deactivated successfully.
Nov 22 09:29:50 compute-0 systemd[1]: libpod-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Consumed 1.080s CPU time.
Nov 22 09:29:50 compute-0 podman[349304]: 2025-11-22 09:29:50.704944127 +0000 UTC m=+1.219483365 container died 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021-merged.mount: Deactivated successfully.
Nov 22 09:29:50 compute-0 podman[349304]: 2025-11-22 09:29:50.855901463 +0000 UTC m=+1.370440721 container remove 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 09:29:50 compute-0 systemd[1]: libpod-conmon-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Deactivated successfully.
Nov 22 09:29:50 compute-0 sudo[349198]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:29:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:29:50 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:50 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2bf1b0ba-ca67-4ccd-8435-ce83fd2e1188 does not exist
Nov 22 09:29:50 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c8c1a3c-08a8-4e7d-b085-102aec6d44a6 does not exist
Nov 22 09:29:51 compute-0 sudo[349368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:29:51 compute-0 sudo[349368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:51 compute-0 sudo[349368]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:51 compute-0 sudo[349393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:29:51 compute-0 sudo[349393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:29:51 compute-0 sudo[349393]: pam_unix(sudo:session): session closed for user root
Nov 22 09:29:51 compute-0 ceph-mon[75021]: pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 61 op/s
Nov 22 09:29:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:29:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 22 09:29:51 compute-0 nova_compute[253661]: 2025-11-22 09:29:51.989 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:51 compute-0 nova_compute[253661]: 2025-11-22 09:29:51.990 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.006 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.167 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.168 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.179 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.180 253665 INFO nova.compute.claims [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:29:52
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr']
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.277 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:29:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:29:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:29:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728926225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:52 compute-0 ceph-mon[75021]: pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.796 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.806 253665 DEBUG nova.compute.provider_tree [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.830 253665 DEBUG nova.scheduler.client.report [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.855 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.921 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.921 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.928 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.929 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.968 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.969 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:29:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:52 compute-0 nova_compute[253661]: 2025-11-22 09:29:52.990 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.007 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.104 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.106 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.107 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating image(s)
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.134 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.162 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.186 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.191 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.281 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.282 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.283 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.283 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.308 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.315 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:53 compute-0 nova_compute[253661]: 2025-11-22 09:29:53.646 253665 DEBUG nova.policy [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a42c7b8d01c4f8e8dfbb1a0ce8d230d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '664bf3b26d414971a1d337e3eb9567e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:29:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.4 KiB/s wr, 35 op/s
Nov 22 09:29:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2728926225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:29:54 compute-0 nova_compute[253661]: 2025-11-22 09:29:54.334 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Successfully created port: 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:29:54 compute-0 nova_compute[253661]: 2025-11-22 09:29:54.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:55 compute-0 ceph-mon[75021]: pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.4 KiB/s wr, 35 op/s
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.375 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Successfully updated port: 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.391 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.392 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquired lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.392 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.586 253665 DEBUG nova.compute.manager [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-changed-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.587 253665 DEBUG nova.compute.manager [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Refreshing instance network info cache due to event network-changed-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.587 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 50 op/s
Nov 22 09:29:55 compute-0 nova_compute[253661]: 2025-11-22 09:29:55.709 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:29:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:29:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:29:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:29:56 compute-0 nova_compute[253661]: 2025-11-22 09:29:56.935 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:56 compute-0 nova_compute[253661]: 2025-11-22 09:29:56.959 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Releasing lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:29:56 compute-0 nova_compute[253661]: 2025-11-22 09:29:56.960 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance network_info: |[{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:29:56 compute-0 nova_compute[253661]: 2025-11-22 09:29:56.961 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:29:56 compute-0 nova_compute[253661]: 2025-11-22 09:29:56.961 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Refreshing network info cache for port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:29:57 compute-0 ceph-mon[75021]: pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 50 op/s
Nov 22 09:29:57 compute-0 nova_compute[253661]: 2025-11-22 09:29:57.573 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:29:57 compute-0 nova_compute[253661]: 2025-11-22 09:29:57.649 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] resizing rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:29:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 58 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 442 KiB/s wr, 42 op/s
Nov 22 09:29:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:29:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 22 09:29:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 22 09:29:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 22 09:29:58 compute-0 nova_compute[253661]: 2025-11-22 09:29:58.689 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updated VIF entry in instance network info cache for port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:29:58 compute-0 nova_compute[253661]: 2025-11-22 09:29:58.690 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:29:58 compute-0 nova_compute[253661]: 2025-11-22 09:29:58.704 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.693 253665 DEBUG nova.objects.instance [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:29:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.713 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.713 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Ensure instance console log exists: /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.717 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start _get_guest_xml network_info=[{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.724 253665 WARNING nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.730 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.731 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.735 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.735 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.736 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.736 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.740 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:29:59 compute-0 nova_compute[253661]: 2025-11-22 09:29:59.744 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:29:59 compute-0 ceph-mon[75021]: pgmap v2012: 305 pgs: 305 active+clean; 58 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 442 KiB/s wr, 42 op/s
Nov 22 09:29:59 compute-0 ceph-mon[75021]: osdmap e280: 3 total, 3 up, 3 in
Nov 22 09:30:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487422855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.246 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.279 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.286 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272232897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.735 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.737 253665 DEBUG nova.virt.libvirt.vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-1285465503-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:53Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.737 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.738 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:00 compute-0 nova_compute[253661]: 2025-11-22 09:30:00.740 253665 DEBUG nova.objects.instance [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 22 09:30:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 22 09:30:01 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 22 09:30:01 compute-0 ceph-mon[75021]: pgmap v2014: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Nov 22 09:30:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2487422855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/272232897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 55 op/s
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.846 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <uuid>4a826b3b-aa3a-40c4-a85d-930239bc78d6</uuid>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <name>instance-0000005d</name>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerGroupTestJSON-server-528470523</nova:name>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:29:59</nova:creationTime>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:user uuid="7a42c7b8d01c4f8e8dfbb1a0ce8d230d">tempest-ServerGroupTestJSON-1285465503-project-member</nova:user>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:project uuid="664bf3b26d414971a1d337e3eb9567e0">tempest-ServerGroupTestJSON-1285465503</nova:project>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <nova:port uuid="1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1">
Nov 22 09:30:01 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <system>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="serial">4a826b3b-aa3a-40c4-a85d-930239bc78d6</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="uuid">4a826b3b-aa3a-40c4-a85d-930239bc78d6</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </system>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <os>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </os>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <features>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </features>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk">
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config">
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:fc:8b:74"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <target dev="tap1a1647d5-7a"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/console.log" append="off"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <video>
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </video>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:30:01 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:30:01 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:30:01 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:30:01 compute-0 nova_compute[253661]: </domain>
Nov 22 09:30:01 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.847 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Preparing to wait for external event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.849 253665 DEBUG nova.virt.libvirt.vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-128
5465503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:53Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.849 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.850 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.850 253665 DEBUG os_vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a1647d5-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a1647d5-7a, col_values=(('external_ids', {'iface-id': '1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:8b:74', 'vm-uuid': '4a826b3b-aa3a-40c4-a85d-930239bc78d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:01 compute-0 NetworkManager[48920]: <info>  [1763803801.8593] manager: (tap1a1647d5-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.869 253665 INFO os_vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a')
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.957 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No VIF found with MAC fa:16:3e:fc:8b:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:30:01 compute-0 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Using config drive
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.073 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.634 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating config drive at /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.641 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7f9av7ac execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:02 compute-0 ceph-mon[75021]: osdmap e281: 3 total, 3 up, 3 in
Nov 22 09:30:02 compute-0 ceph-mon[75021]: pgmap v2016: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 55 op/s
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:30:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.793 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7f9av7ac" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.830 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:02 compute-0 nova_compute[253661]: 2025-11-22 09:30:02.836 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.623 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.624 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deleting local config drive /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config because it was imported into RBD.
Nov 22 09:30:04 compute-0 kernel: tap1a1647d5-7a: entered promiscuous mode
Nov 22 09:30:04 compute-0 NetworkManager[48920]: <info>  [1763803804.6890] manager: (tap1a1647d5-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Nov 22 09:30:04 compute-0 ovn_controller[152872]: 2025-11-22T09:30:04Z|00942|binding|INFO|Claiming lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 for this chassis.
Nov 22 09:30:04 compute-0 ovn_controller[152872]: 2025-11-22T09:30:04Z|00943|binding|INFO|1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1: Claiming fa:16:3e:fc:8b:74 10.100.0.12
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:04 compute-0 systemd-udevd[349741]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:30:04 compute-0 systemd-machined[215941]: New machine qemu-113-instance-0000005d.
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.726 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:74 10.100.0.12'], port_security=['fa:16:3e:fc:8b:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a826b3b-aa3a-40c4-a85d-930239bc78d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2815298-27cc-4036-b985-55e1f44ee473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '664bf3b26d414971a1d337e3eb9567e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c4ef4c8-f5db-475c-9470-c9efc0b15564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d861fb2-ab02-4cd4-8bac-25c3ad28ac31, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.727 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 in datapath b2815298-27cc-4036-b985-55e1f44ee473 bound to our chassis
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.728 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2815298-27cc-4036-b985-55e1f44ee473
Nov 22 09:30:04 compute-0 NetworkManager[48920]: <info>  [1763803804.7392] device (tap1a1647d5-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:30:04 compute-0 NetworkManager[48920]: <info>  [1763803804.7401] device (tap1a1647d5-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.747 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[257b0d01-c11a-4602-a242-06b829016150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.749 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2815298-21 in ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.751 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2815298-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[071e6c53-5308-4cf8-ae30-9ccbccb5be50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ba754-3a2a-41a7-bc4e-ee76c0acb75d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.757 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:04 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005d.
Nov 22 09:30:04 compute-0 ovn_controller[152872]: 2025-11-22T09:30:04Z|00944|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 ovn-installed in OVS
Nov 22 09:30:04 compute-0 ovn_controller[152872]: 2025-11-22T09:30:04Z|00945|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 up in Southbound
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.764 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[889c9469-8649-4113-9c9f-1b3b5bc92608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 nova_compute[253661]: 2025-11-22 09:30:04.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6931e8c-1345-4678-8243-865735389d6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.822 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9776d74-dfff-4abe-b1b4-459c4363196d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 NetworkManager[48920]: <info>  [1763803804.8298] manager: (tapb2815298-20): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e85e423-3a62-4070-813b-87986a272f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.870 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52c24ce4-8d70-46f9-a2f2-fc75ceb8be04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.873 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb7f3ef-f06f-436a-a575-50332f4c15bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 NetworkManager[48920]: <info>  [1763803804.9025] device (tapb2815298-20): carrier: link connected
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.911 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[791985cb-a554-4902-a533-33d8a5e0a754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7af82b1d-990b-43c6-8bae-ce7b804d9e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2815298-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665564, 'reachable_time': 34750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349774, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.960 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade052b-b848-4dc2-8cf1-438b12e1b641]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:b3b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665564, 'tstamp': 665564}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349775, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.981 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3aeb50ea-10f4-449d-9009-36a7a4eecf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2815298-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665564, 'reachable_time': 34750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349776, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34543836-88e8-423e-8699-0c4197acfc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4a238632-aff7-457a-8f70-3a96ab376d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.112 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2815298-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.112 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.113 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2815298-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:05 compute-0 NetworkManager[48920]: <info>  [1763803805.1154] manager: (tapb2815298-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 22 09:30:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 22 09:30:05 compute-0 kernel: tapb2815298-20: entered promiscuous mode
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.117 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2815298-20, col_values=(('external_ids', {'iface-id': '35c942a5-f18d-4a44-89ed-ba479c2b3ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:05 compute-0 ovn_controller[152872]: 2025-11-22T09:30:05Z|00946|binding|INFO|Releasing lport 35c942a5-f18d-4a44-89ed-ba479c2b3ff9 from this chassis (sb_readonly=0)
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.152 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.154 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec925b10-55df-43fb-a3e0-60b517dea5ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.156 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b2815298-27cc-4036-b985-55e1f44ee473
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b2815298-27cc-4036-b985-55e1f44ee473
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.157 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'env', 'PROCESS_TAG=haproxy-b2815298-27cc-4036-b985-55e1f44ee473', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2815298-27cc-4036-b985-55e1f44ee473.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:30:05 compute-0 ceph-mon[75021]: pgmap v2017: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Nov 22 09:30:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 22 09:30:05 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 22 09:30:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.576 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.597 253665 DEBUG nova.compute.manager [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.600 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG nova.compute.manager [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Processing event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:30:05 compute-0 nova_compute[253661]: 2025-11-22 09:30:05.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:05 compute-0 podman[349809]: 2025-11-22 09:30:05.589976888 +0000 UTC m=+0.029846542 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:30:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 22 09:30:06 compute-0 podman[349809]: 2025-11-22 09:30:06.02158503 +0000 UTC m=+0.461454664 container create ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:30:06 compute-0 systemd[1]: Started libpod-conmon-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope.
Nov 22 09:30:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc01e6e97cf42734016b9830101bf1f1f4bc57ae2055ce6357dae147575cc35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:06 compute-0 podman[349809]: 2025-11-22 09:30:06.252680561 +0000 UTC m=+0.692550225 container init ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:30:06 compute-0 podman[349809]: 2025-11-22 09:30:06.261902543 +0000 UTC m=+0.701772177 container start ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:30:06 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : New worker (349873) forked
Nov 22 09:30:06 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : Loading success.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.311 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.312 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.3101366, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.312 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Started (Lifecycle Event)
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.316 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.320 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance spawned successfully.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.320 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.336 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:06 compute-0 ceph-mon[75021]: osdmap e282: 3 total, 3 up, 3 in
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.348 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.349 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.349 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.350 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.350 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.351 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.379 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.380 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.3105385, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.380 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Paused (Lifecycle Event)
Nov 22 09:30:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:06.382 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.413 253665 INFO nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 13.31 seconds to spawn the instance on the hypervisor.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.414 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.414 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.425 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.314656, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Resumed (Lifecycle Event)
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.449 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.452 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.494 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.507 253665 INFO nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 14.39 seconds to build instance.
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.522 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:06 compute-0 nova_compute[253661]: 2025-11-22 09:30:06.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 22 09:30:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 22 09:30:07 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.672 253665 DEBUG nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.672 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] No waiting events found dispatching network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:30:07 compute-0 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 WARNING nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received unexpected event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 for instance with vm_state active and task_state None.
Nov 22 09:30:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Nov 22 09:30:07 compute-0 ceph-mon[75021]: pgmap v2019: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 22 09:30:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.171 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.172 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.173 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.173 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.174 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.176 253665 INFO nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Terminating instance
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.178 253665 DEBUG nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:30:08 compute-0 kernel: tap1a1647d5-7a (unregistering): left promiscuous mode
Nov 22 09:30:08 compute-0 NetworkManager[48920]: <info>  [1763803808.4034] device (tap1a1647d5-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 ovn_controller[152872]: 2025-11-22T09:30:08Z|00947|binding|INFO|Releasing lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 from this chassis (sb_readonly=0)
Nov 22 09:30:08 compute-0 ovn_controller[152872]: 2025-11-22T09:30:08Z|00948|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 down in Southbound
Nov 22 09:30:08 compute-0 ovn_controller[152872]: 2025-11-22T09:30:08Z|00949|binding|INFO|Removing iface tap1a1647d5-7a ovn-installed in OVS
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.425 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:74 10.100.0.12'], port_security=['fa:16:3e:fc:8b:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a826b3b-aa3a-40c4-a85d-930239bc78d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2815298-27cc-4036-b985-55e1f44ee473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '664bf3b26d414971a1d337e3eb9567e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c4ef4c8-f5db-475c-9470-c9efc0b15564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d861fb2-ab02-4cd4-8bac-25c3ad28ac31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.426 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 in datapath b2815298-27cc-4036-b985-55e1f44ee473 unbound from our chassis
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.428 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2815298-27cc-4036-b985-55e1f44ee473, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.429 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaed6f9-e765-44da-baf8-66997bc1648a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.429 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 namespace which is not needed anymore
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 22 09:30:08 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Consumed 3.251s CPU time.
Nov 22 09:30:08 compute-0 systemd-machined[215941]: Machine qemu-113-instance-0000005d terminated.
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.621 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance destroyed successfully.
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.622 253665 DEBUG nova.objects.instance [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'resources' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.635 253665 DEBUG nova.virt.libvirt.vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-1285465503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:30:06Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.635 253665 DEBUG nova.network.os_vif_util [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:08 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : haproxy version is 2.8.14-c23fe91
Nov 22 09:30:08 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : path to executable is /usr/sbin/haproxy
Nov 22 09:30:08 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [WARNING]  (349871) : Exiting Master process...
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.636 253665 DEBUG nova.network.os_vif_util [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.637 253665 DEBUG os_vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:30:08 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [ALERT]    (349871) : Current worker (349873) exited with code 143 (Terminated)
Nov 22 09:30:08 compute-0 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [WARNING]  (349871) : All workers exited. Exiting... (0)
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 systemd[1]: libpod-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope: Deactivated successfully.
Nov 22 09:30:08 compute-0 conmon[349864]: conmon ad931f19d49988cde499 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope/container/memory.events
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.644 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a1647d5-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 podman[349906]: 2025-11-22 09:30:08.647746095 +0000 UTC m=+0.097807851 container died ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.651 253665 INFO os_vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a')
Nov 22 09:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39-userdata-shm.mount: Deactivated successfully.
Nov 22 09:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc01e6e97cf42734016b9830101bf1f1f4bc57ae2055ce6357dae147575cc35-merged.mount: Deactivated successfully.
Nov 22 09:30:08 compute-0 ceph-mon[75021]: osdmap e283: 3 total, 3 up, 3 in
Nov 22 09:30:08 compute-0 ceph-mon[75021]: pgmap v2021: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Nov 22 09:30:08 compute-0 podman[349906]: 2025-11-22 09:30:08.755768581 +0000 UTC m=+0.205830337 container cleanup ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:30:08 compute-0 systemd[1]: libpod-conmon-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope: Deactivated successfully.
Nov 22 09:30:08 compute-0 podman[349964]: 2025-11-22 09:30:08.924497383 +0000 UTC m=+0.142289748 container remove ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.932 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18f679ad-426f-4e73-988d-ac0948402dc7]: (4, ('Sat Nov 22 09:30:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 (ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39)\nad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39\nSat Nov 22 09:30:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 (ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39)\nad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41fe07a8-6b21-430e-9fdf-dec3d453c128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.935 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2815298-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 kernel: tapb2815298-20: left promiscuous mode
Nov 22 09:30:08 compute-0 nova_compute[253661]: 2025-11-22 09:30:08.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.956 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49eb5f34-cfd1-4c20-a7ce-d0febe76d246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0385b8-82ff-4bed-9f21-2a18c1dc7f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.967 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dad181-db97-4c1c-a44a-54e94f97f2a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af0ed48a-d1fc-432e-86ff-7c2ada68f5ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665555, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349979, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:08 compute-0 systemd[1]: run-netns-ovnmeta\x2db2815298\x2d27cc\x2d4036\x2db985\x2d55e1f44ee473.mount: Deactivated successfully.
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.994 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:30:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.994 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f39530-d863-49ea-9035-357407226eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 154 op/s
Nov 22 09:30:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 22 09:30:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 22 09:30:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.862 253665 INFO nova.virt.libvirt.driver [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deleting instance files /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6_del
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.863 253665 INFO nova.virt.libvirt.driver [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deletion of /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6_del complete
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.922 253665 INFO nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 1.74 seconds to destroy the instance on the hypervisor.
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.923 253665 DEBUG oslo.service.loopingcall [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.923 253665 DEBUG nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:30:09 compute-0 nova_compute[253661]: 2025-11-22 09:30:09.924 253665 DEBUG nova.network.neutron [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:30:10 compute-0 nova_compute[253661]: 2025-11-22 09:30:10.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:10 compute-0 ceph-mon[75021]: pgmap v2022: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 154 op/s
Nov 22 09:30:10 compute-0 ceph-mon[75021]: osdmap e284: 3 total, 3 up, 3 in
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.098 253665 DEBUG nova.network.neutron [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.122 253665 INFO nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 1.20 seconds to deallocate network for instance.
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.178 253665 DEBUG nova.compute.manager [req-33335be1-8870-438a-90ac-c2eb2de937e7 req-cd049ddd-84f4-41be-8969-041dc38d4256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-deleted-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.180 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.180 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.234 253665 DEBUG oslo_concurrency.processutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:11 compute-0 podman[349982]: 2025-11-22 09:30:11.369256916 +0000 UTC m=+0.055126287 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:30:11 compute-0 podman[349983]: 2025-11-22 09:30:11.405472936 +0000 UTC m=+0.089085950 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:30:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 171 op/s
Nov 22 09:30:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087184616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.740 253665 DEBUG oslo_concurrency.processutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.747 253665 DEBUG nova.compute.provider_tree [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.762 253665 DEBUG nova.scheduler.client.report [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.800 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1087184616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.832 253665 INFO nova.scheduler.client.report [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Deleted allocations for instance 4a826b3b-aa3a-40c4-a85d-930239bc78d6
Nov 22 09:30:11 compute-0 nova_compute[253661]: 2025-11-22 09:30:11.902 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:30:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:30:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:30:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:30:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 22 09:30:12 compute-0 ceph-mon[75021]: pgmap v2024: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 171 op/s
Nov 22 09:30:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:30:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:30:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 22 09:30:12 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 22 09:30:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:13 compute-0 nova_compute[253661]: 2025-11-22 09:30:13.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 33 KiB/s wr, 240 op/s
Nov 22 09:30:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 22 09:30:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 22 09:30:13 compute-0 ceph-mon[75021]: osdmap e285: 3 total, 3 up, 3 in
Nov 22 09:30:13 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 22 09:30:14 compute-0 ceph-mon[75021]: pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 33 KiB/s wr, 240 op/s
Nov 22 09:30:14 compute-0 ceph-mon[75021]: osdmap e286: 3 total, 3 up, 3 in
Nov 22 09:30:15 compute-0 nova_compute[253661]: 2025-11-22 09:30:15.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 7.2 KiB/s wr, 112 op/s
Nov 22 09:30:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:16.385 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 22 09:30:16 compute-0 ceph-mon[75021]: pgmap v2028: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 7.2 KiB/s wr, 112 op/s
Nov 22 09:30:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 22 09:30:16 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 22 09:30:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 892 KiB/s rd, 9.0 KiB/s wr, 126 op/s
Nov 22 09:30:17 compute-0 nova_compute[253661]: 2025-11-22 09:30:17.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:17 compute-0 ceph-mon[75021]: osdmap e287: 3 total, 3 up, 3 in
Nov 22 09:30:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:18 compute-0 podman[350039]: 2025-11-22 09:30:18.441527987 +0000 UTC m=+0.114724796 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:30:18 compute-0 nova_compute[253661]: 2025-11-22 09:30:18.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 22 09:30:18 compute-0 ceph-mon[75021]: pgmap v2030: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 892 KiB/s rd, 9.0 KiB/s wr, 126 op/s
Nov 22 09:30:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 22 09:30:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 22 09:30:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 8.3 KiB/s wr, 91 op/s
Nov 22 09:30:20 compute-0 ceph-mon[75021]: osdmap e288: 3 total, 3 up, 3 in
Nov 22 09:30:20 compute-0 nova_compute[253661]: 2025-11-22 09:30:20.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:21 compute-0 ceph-mon[75021]: pgmap v2032: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 8.3 KiB/s wr, 91 op/s
Nov 22 09:30:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.3 KiB/s wr, 69 op/s
Nov 22 09:30:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 22 09:30:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 22 09:30:22 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:23 compute-0 nova_compute[253661]: 2025-11-22 09:30:23.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803808.618394, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:23 compute-0 nova_compute[253661]: 2025-11-22 09:30:23.625 253665 INFO nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Stopped (Lifecycle Event)
Nov 22 09:30:23 compute-0 nova_compute[253661]: 2025-11-22 09:30:23.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:23 compute-0 nova_compute[253661]: 2025-11-22 09:30:23.664 253665 DEBUG nova.compute.manager [None req-0e5905cc-126b-434c-887b-1fba10b11f0f - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:23 compute-0 ceph-mon[75021]: pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.3 KiB/s wr, 69 op/s
Nov 22 09:30:23 compute-0 ceph-mon[75021]: osdmap e289: 3 total, 3 up, 3 in
Nov 22 09:30:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 6.3 KiB/s wr, 95 op/s
Nov 22 09:30:25 compute-0 ceph-mon[75021]: pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 6.3 KiB/s wr, 95 op/s
Nov 22 09:30:25 compute-0 nova_compute[253661]: 2025-11-22 09:30:25.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.7 KiB/s wr, 92 op/s
Nov 22 09:30:26 compute-0 ceph-mon[75021]: pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.7 KiB/s wr, 92 op/s
Nov 22 09:30:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Nov 22 09:30:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 22 09:30:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 22 09:30:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 22 09:30:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.973 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.973 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.974 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 22 09:30:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 22 09:30:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 22 09:30:28 compute-0 nova_compute[253661]: 2025-11-22 09:30:28.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:28 compute-0 ceph-mon[75021]: pgmap v2037: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Nov 22 09:30:28 compute-0 ceph-mon[75021]: osdmap e290: 3 total, 3 up, 3 in
Nov 22 09:30:28 compute-0 ceph-mon[75021]: osdmap e291: 3 total, 3 up, 3 in
Nov 22 09:30:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.8 KiB/s wr, 58 op/s
Nov 22 09:30:30 compute-0 nova_compute[253661]: 2025-11-22 09:30:30.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:31 compute-0 ceph-mon[75021]: pgmap v2040: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.8 KiB/s wr, 58 op/s
Nov 22 09:30:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 50 op/s
Nov 22 09:30:32 compute-0 nova_compute[253661]: 2025-11-22 09:30:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:32 compute-0 nova_compute[253661]: 2025-11-22 09:30:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:32 compute-0 nova_compute[253661]: 2025-11-22 09:30:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:30:32 compute-0 nova_compute[253661]: 2025-11-22 09:30:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:30:32 compute-0 nova_compute[253661]: 2025-11-22 09:30:32.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:30:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 22 09:30:33 compute-0 ceph-mon[75021]: pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 50 op/s
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.374 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.375 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.393 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.459 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.459 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.468 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.468 253665 INFO nova.compute.claims [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.584 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 22 09:30:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 22 09:30:33 compute-0 nova_compute[253661]: 2025-11-22 09:30:33.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 22 09:30:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933540216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.093 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.100 253665 DEBUG nova.compute.provider_tree [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.121 253665 DEBUG nova.scheduler.client.report [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.158 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.159 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.244 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.245 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.273 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.292 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.367 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.369 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.369 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating image(s)
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.397 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.427 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.446 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.451 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.546 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.547 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.548 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.548 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.571 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.575 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:34 compute-0 ceph-mon[75021]: osdmap e292: 3 total, 3 up, 3 in
Nov 22 09:30:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3933540216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:34 compute-0 nova_compute[253661]: 2025-11-22 09:30:34.939 253665 DEBUG nova.policy [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2edeecfc11f347c0856dcf9fae9296ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.066 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.119 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] resizing rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.210 253665 DEBUG nova.objects.instance [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'migration_context' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.231 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.232 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Ensure instance console log exists: /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.233 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.233 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.234 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:35 compute-0 ceph-mon[75021]: pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 22 09:30:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 KiB/s wr, 56 op/s
Nov 22 09:30:35 compute-0 nova_compute[253661]: 2025-11-22 09:30:35.781 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Successfully created port: efe559aa-813e-4a03-9d8a-363ad40448c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:30:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 22 09:30:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 22 09:30:36 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 22 09:30:36 compute-0 ceph-mon[75021]: pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 KiB/s wr, 56 op/s
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.756 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Successfully updated port: efe559aa-813e-4a03-9d8a-363ad40448c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquired lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.959 253665 DEBUG nova.compute.manager [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-changed-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.960 253665 DEBUG nova.compute.manager [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Refreshing instance network info cache due to event network-changed-efe559aa-813e-4a03-9d8a-363ad40448c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:30:36 compute-0 nova_compute[253661]: 2025-11-22 09:30:36.961 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.106 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787036650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:37 compute-0 ceph-mon[75021]: osdmap e293: 3 total, 3 up, 3 in
Nov 22 09:30:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/787036650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 59 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 577 KiB/s wr, 68 op/s
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.889 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.890 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3910MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.891 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.959 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1f746354-73cc-421a-9cde-f5b8c2b597fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:30:37 compute-0 nova_compute[253661]: 2025-11-22 09:30:37.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.007 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 22 09:30:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 22 09:30:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.399 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.415 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Releasing lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance network_info: |[{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Refreshing network info cache for port efe559aa-813e-4a03-9d8a-363ad40448c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.420 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start _get_guest_xml network_info=[{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.425 253665 WARNING nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.432 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.433 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.436 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.436 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.445 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695170307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.488 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.495 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.507 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.525 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.525 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7201859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.931 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.958 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:38 compute-0 nova_compute[253661]: 2025-11-22 09:30:38.964 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:39 compute-0 ceph-mon[75021]: pgmap v2046: 305 pgs: 305 active+clean; 59 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 577 KiB/s wr, 68 op/s
Nov 22 09:30:39 compute-0 ceph-mon[75021]: osdmap e294: 3 total, 3 up, 3 in
Nov 22 09:30:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1695170307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/7201859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315525300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.452 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.454 253665 DEBUG nova.virt.libvirt.vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMetadataTestJSON-715260665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:34Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.454 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.455 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.456 253665 DEBUG nova.objects.instance [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <uuid>1f746354-73cc-421a-9cde-f5b8c2b597fe</uuid>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <name>instance-0000005e</name>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerMetadataTestJSON-server-813427165</nova:name>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:30:38</nova:creationTime>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:user uuid="2edeecfc11f347c0856dcf9fae9296ff">tempest-ServerMetadataTestJSON-715260665-project-member</nova:user>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:project uuid="6a2f39f99bbb4a6ab72b64ecca259a1d">tempest-ServerMetadataTestJSON-715260665</nova:project>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <nova:port uuid="efe559aa-813e-4a03-9d8a-363ad40448c7">
Nov 22 09:30:39 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <system>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="serial">1f746354-73cc-421a-9cde-f5b8c2b597fe</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="uuid">1f746354-73cc-421a-9cde-f5b8c2b597fe</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </system>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <os>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </os>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <features>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </features>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1f746354-73cc-421a-9cde-f5b8c2b597fe_disk">
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config">
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:39 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:00:4f:bc"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <target dev="tapefe559aa-81"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/console.log" append="off"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <video>
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </video>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:30:39 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:30:39 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:30:39 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:30:39 compute-0 nova_compute[253661]: </domain>
Nov 22 09:30:39 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Preparing to wait for external event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG nova.virt.libvirt.vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMetadataTestJSON-715260665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:34Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG os_vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.474 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.475 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.480 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefe559aa-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.481 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefe559aa-81, col_values=(('external_ids', {'iface-id': 'efe559aa-813e-4a03-9d8a-363ad40448c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:4f:bc', 'vm-uuid': '1f746354-73cc-421a-9cde-f5b8c2b597fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:39 compute-0 NetworkManager[48920]: <info>  [1763803839.4852] manager: (tapefe559aa-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.495 253665 INFO os_vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81')
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.541 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.541 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.542 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No VIF found with MAC fa:16:3e:00:4f:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.542 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Using config drive
Nov 22 09:30:39 compute-0 nova_compute[253661]: 2025-11-22 09:30:39.567 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 22 09:30:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/315525300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.678 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating config drive at /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.683 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqunba08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.837 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqunba08" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.866 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:40 compute-0 nova_compute[253661]: 2025-11-22 09:30:40.870 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:41 compute-0 ceph-mon[75021]: pgmap v2048: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.340 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updated VIF entry in instance network info cache for port efe559aa-813e-4a03-9d8a-363ad40448c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.342 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.361 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.362 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deleting local config drive /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config because it was imported into RBD.
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.364 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:30:41 compute-0 kernel: tapefe559aa-81: entered promiscuous mode
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.4327] manager: (tapefe559aa-81): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Nov 22 09:30:41 compute-0 ovn_controller[152872]: 2025-11-22T09:30:41Z|00950|binding|INFO|Claiming lport efe559aa-813e-4a03-9d8a-363ad40448c7 for this chassis.
Nov 22 09:30:41 compute-0 ovn_controller[152872]: 2025-11-22T09:30:41Z|00951|binding|INFO|efe559aa-813e-4a03-9d8a-363ad40448c7: Claiming fa:16:3e:00:4f:bc 10.100.0.3
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.481 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:4f:bc 10.100.0.3'], port_security=['fa:16:3e:00:4f:bc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f746354-73cc-421a-9cde-f5b8c2b597fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2246d64b-e77a-4784-9fa1-d08726a529e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1969534-2e7e-4db0-b633-4898adada66f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5a0f272-182f-42b5-a484-bc1a2cbff822, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efe559aa-813e-4a03-9d8a-363ad40448c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.482 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efe559aa-813e-4a03-9d8a-363ad40448c7 in datapath 2246d64b-e77a-4784-9fa1-d08726a529e2 bound to our chassis
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.484 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2246d64b-e77a-4784-9fa1-d08726a529e2
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.506 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f34db187-2f5e-4b85-a908-d4991d63846e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.508 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2246d64b-e1 in ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.510 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2246d64b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d279661-26ea-4fe9-9482-27da6ed3fb52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56eb4199-c2e5-4f13-a462-f3b154a76e41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.528 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[032066d8-f65f-4b00-9283-cc68eb0feeb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 systemd-machined[215941]: New machine qemu-114-instance-0000005e.
Nov 22 09:30:41 compute-0 systemd-udevd[350454]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:30:41 compute-0 ovn_controller[152872]: 2025-11-22T09:30:41Z|00952|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 ovn-installed in OVS
Nov 22 09:30:41 compute-0 ovn_controller[152872]: 2025-11-22T09:30:41Z|00953|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 up in Southbound
Nov 22 09:30:41 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005e.
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.554 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6efadf97-6a9f-4171-b3c8-27f7ccb3cb23]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.5607] device (tapefe559aa-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.5689] device (tapefe559aa-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.591 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e01998f4-f645-427b-8b94-8c403df06172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.6003] manager: (tap2246d64b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e161f321-ba31-469c-87a9-36709149fff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 podman[350432]: 2025-11-22 09:30:41.610754936 +0000 UTC m=+0.120152497 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 09:30:41 compute-0 podman[350433]: 2025-11-22 09:30:41.619467431 +0000 UTC m=+0.118245181 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c630e21d-5314-4425-89d7-0ac4625a24e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.646 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6bec24-2265-40e2-9847-2ecc5272be94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.6745] device (tap2246d64b-e0): carrier: link connected
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.678 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d75e75b1-add2-434c-9058-8c52c664f42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.700 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06829226-1222-4ac4-8464-ba48f8069260]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2246d64b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ab:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669241, 'reachable_time': 22089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350502, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.721 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85b084ac-ecf2-4ecf-b1a0-646aa6959292]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:ab05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669241, 'tstamp': 669241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350503, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[75e22437-0781-461a-9d24-3390ce696d2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2246d64b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ab:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669241, 'reachable_time': 22089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350504, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[581ffb34-0a9b-4b1e-8510-38b941e1eb9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[783d5afd-c5f0-428a-a885-2d6c09216be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2246d64b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2246d64b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:41 compute-0 NetworkManager[48920]: <info>  [1763803841.8775] manager: (tap2246d64b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 kernel: tap2246d64b-e0: entered promiscuous mode
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2246d64b-e0, col_values=(('external_ids', {'iface-id': '14e59265-4b5a-468d-8359-0d37f8713715'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:41 compute-0 ovn_controller[152872]: 2025-11-22T09:30:41Z|00954|binding|INFO|Releasing lport 14e59265-4b5a-468d-8359-0d37f8713715 from this chassis (sb_readonly=0)
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 nova_compute[253661]: 2025-11-22 09:30:41.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.901 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.903 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[038e8ae6-3c21-486a-9a6a-47bd37951d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.905 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-2246d64b-e77a-4784-9fa1-d08726a529e2
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 2246d64b-e77a-4784-9fa1-d08726a529e2
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:30:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.905 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'env', 'PROCESS_TAG=haproxy-2246d64b-e77a-4784-9fa1-d08726a529e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2246d64b-e77a-4784-9fa1-d08726a529e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.170 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803842.1700737, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.172 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Started (Lifecycle Event)
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.191 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.198 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803842.1702478, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.199 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Paused (Lifecycle Event)
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.212 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.216 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:42 compute-0 nova_compute[253661]: 2025-11-22 09:30:42.234 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 22 09:30:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 22 09:30:42 compute-0 podman[350576]: 2025-11-22 09:30:42.300082533 +0000 UTC m=+0.059492566 container create af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:30:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 22 09:30:42 compute-0 systemd[1]: Started libpod-conmon-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope.
Nov 22 09:30:42 compute-0 podman[350576]: 2025-11-22 09:30:42.269198443 +0000 UTC m=+0.028608486 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:30:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8f39852c3ce3291613e83f29352d363c0423547f21a29e1614f59dc448bb1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:42 compute-0 podman[350576]: 2025-11-22 09:30:42.396019034 +0000 UTC m=+0.155429147 container init af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:30:42 compute-0 podman[350576]: 2025-11-22 09:30:42.403409907 +0000 UTC m=+0.162819970 container start af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:30:42 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : New worker (350597) forked
Nov 22 09:30:42 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : Loading success.
Nov 22 09:30:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 22 09:30:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 22 09:30:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 22 09:30:43 compute-0 ceph-mon[75021]: pgmap v2049: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 09:30:43 compute-0 ceph-mon[75021]: osdmap e295: 3 total, 3 up, 3 in
Nov 22 09:30:43 compute-0 ceph-mon[75021]: osdmap e296: 3 total, 3 up, 3 in
Nov 22 09:30:43 compute-0 nova_compute[253661]: 2025-11-22 09:30:43.526 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:30:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.379 253665 DEBUG nova.compute.manager [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG nova.compute.manager [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Processing event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.381 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.386 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803844.3858597, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.386 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Resumed (Lifecycle Event)
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.390 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.396 253665 INFO nova.virt.libvirt.driver [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance spawned successfully.
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.397 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.411 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.423 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.423 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.425 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.510 253665 INFO nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 10.14 seconds to spawn the instance on the hypervisor.
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.510 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.569 253665 INFO nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 11.14 seconds to build instance.
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.586 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.909 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.910 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.922 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.992 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.993 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.999 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:30:44 compute-0 nova_compute[253661]: 2025-11-22 09:30:44.999 253665 INFO nova.compute.claims [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.114 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:45 compute-0 ceph-mon[75021]: pgmap v2052: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Nov 22 09:30:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14115640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.649 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.656 253665 DEBUG nova.compute.provider_tree [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.669 253665 DEBUG nova.scheduler.client.report [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.690 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.691 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:30:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.744 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.746 253665 DEBUG nova.network.neutron [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.768 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.784 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.854 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.855 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.855 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating image(s)
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.881 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.905 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.928 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.932 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:45 compute-0 nova_compute[253661]: 2025-11-22 09:30:45.933 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.098 253665 DEBUG nova.network.neutron [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.099 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.191 253665 DEBUG nova.virt.libvirt.imagebackend [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.248 253665 DEBUG nova.virt.libvirt.imagebackend [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.249 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] cloning images/38888df1-6493-484c-9550-c208e81fe437@snap to None/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:30:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/14115640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.405 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.494 253665 DEBUG nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.496 253665 WARNING nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received unexpected event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with vm_state active and task_state None.
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.552 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] resizing rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.640 253665 DEBUG nova.objects.instance [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'migration_context' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.655 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Ensure instance console log exists: /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.658 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='ba2f9ee1ff49a773355335bb33ba3dbd',container_format='bare',created_at=2025-11-22T09:30:40Z,direct_url=<?>,disk_format='raw',id=38888df1-6493-484c-9550-c208e81fe437,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2115234108',owner='7f68d63cdf1a45888e4dd7198f06fae4',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-22T09:30:42Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '38888df1-6493-484c-9550-c208e81fe437'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.661 253665 WARNING nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.665 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.666 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='ba2f9ee1ff49a773355335bb33ba3dbd',container_format='bare',created_at=2025-11-22T09:30:40Z,direct_url=<?>,disk_format='raw',id=38888df1-6493-484c-9550-c208e81fe437,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2115234108',owner='7f68d63cdf1a45888e4dd7198f06fae4',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-22T09:30:42Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:30:46 compute-0 nova_compute[253661]: 2025-11-22 09:30:46.674 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982895415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.080 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.113 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.117 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:47 compute-0 ceph-mon[75021]: pgmap v2053: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Nov 22 09:30:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1982895415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731028041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.546 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.549 253665 DEBUG nova.objects.instance [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.563 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <uuid>79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</uuid>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <name>instance-0000005f</name>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:name>instance-depend-image</nova:name>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:30:46</nova:creationTime>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:user uuid="18279c1b172740a1a20233efdadc6120">tempest-ImageDependencyTests-491625585-project-member</nova:user>
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <nova:project uuid="7f68d63cdf1a45888e4dd7198f06fae4">tempest-ImageDependencyTests-491625585</nova:project>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="38888df1-6493-484c-9550-c208e81fe437"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <system>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="serial">79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="uuid">79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </system>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <os>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </os>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <features>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </features>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk">
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config">
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:47 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/console.log" append="off"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <video>
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </video>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:30:47 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:30:47 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:30:47 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:30:47 compute-0 nova_compute[253661]: </domain>
Nov 22 09:30:47 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.631 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.632 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.632 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Using config drive
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.659 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 729 KiB/s rd, 22 KiB/s wr, 88 op/s
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.855 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating config drive at /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.860 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8nsg29 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:47 compute-0 nova_compute[253661]: 2025-11-22 09:30:47.998 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8nsg29" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:48 compute-0 nova_compute[253661]: 2025-11-22 09:30:48.026 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:48 compute-0 nova_compute[253661]: 2025-11-22 09:30:48.030 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:48 compute-0 nova_compute[253661]: 2025-11-22 09:30:48.191 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:48 compute-0 nova_compute[253661]: 2025-11-22 09:30:48.193 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deleting local config drive /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config because it was imported into RBD.
Nov 22 09:30:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:48 compute-0 systemd-machined[215941]: New machine qemu-115-instance-0000005f.
Nov 22 09:30:48 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005f.
Nov 22 09:30:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3731028041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.089 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.090 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.094 253665 INFO nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Terminating instance
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.095 253665 DEBUG nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:30:49 compute-0 kernel: tapefe559aa-81 (unregistering): left promiscuous mode
Nov 22 09:30:49 compute-0 NetworkManager[48920]: <info>  [1763803849.1448] device (tapefe559aa-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:30:49 compute-0 ovn_controller[152872]: 2025-11-22T09:30:49Z|00955|binding|INFO|Releasing lport efe559aa-813e-4a03-9d8a-363ad40448c7 from this chassis (sb_readonly=0)
Nov 22 09:30:49 compute-0 ovn_controller[152872]: 2025-11-22T09:30:49Z|00956|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 down in Southbound
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 ovn_controller[152872]: 2025-11-22T09:30:49Z|00957|binding|INFO|Removing iface tapefe559aa-81 ovn-installed in OVS
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.175 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:4f:bc 10.100.0.3'], port_security=['fa:16:3e:00:4f:bc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f746354-73cc-421a-9cde-f5b8c2b597fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2246d64b-e77a-4784-9fa1-d08726a529e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1969534-2e7e-4db0-b633-4898adada66f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5a0f272-182f-42b5-a484-bc1a2cbff822, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efe559aa-813e-4a03-9d8a-363ad40448c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.176 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efe559aa-813e-4a03-9d8a-363ad40448c7 in datapath 2246d64b-e77a-4784-9fa1-d08726a529e2 unbound from our chassis
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.178 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2246d64b-e77a-4784-9fa1-d08726a529e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e82d5fb-06d3-45fb-a721-7256a88c786c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.188 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 namespace which is not needed anymore
Nov 22 09:30:49 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 22 09:30:49 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005e.scope: Consumed 5.414s CPU time.
Nov 22 09:30:49 compute-0 systemd-machined[215941]: Machine qemu-114-instance-0000005e terminated.
Nov 22 09:30:49 compute-0 podman[350959]: 2025-11-22 09:30:49.279051944 +0000 UTC m=+0.103480658 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.342 253665 INFO nova.virt.libvirt.driver [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance destroyed successfully.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.342 253665 DEBUG nova.objects.instance [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'resources' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:49 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : haproxy version is 2.8.14-c23fe91
Nov 22 09:30:49 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : path to executable is /usr/sbin/haproxy
Nov 22 09:30:49 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [WARNING]  (350595) : Exiting Master process...
Nov 22 09:30:49 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [ALERT]    (350595) : Current worker (350597) exited with code 143 (Terminated)
Nov 22 09:30:49 compute-0 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [WARNING]  (350595) : All workers exited. Exiting... (0)
Nov 22 09:30:49 compute-0 systemd[1]: libpod-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope: Deactivated successfully.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.355 253665 DEBUG nova.virt.libvirt.vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMetadataTestJSON-715260665-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:30:48Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.355 253665 DEBUG nova.network.os_vif_util [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.356 253665 DEBUG nova.network.os_vif_util [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.356 253665 DEBUG os_vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.358 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefe559aa-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:30:49 compute-0 podman[351008]: 2025-11-22 09:30:49.365504572 +0000 UTC m=+0.076239767 container died af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.366 253665 INFO os_vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81')
Nov 22 09:30:49 compute-0 ceph-mon[75021]: pgmap v2054: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 729 KiB/s rd, 22 KiB/s wr, 88 op/s
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.405 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.406 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.427 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3-userdata-shm.mount: Deactivated successfully.
Nov 22 09:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e8f39852c3ce3291613e83f29352d363c0423547f21a29e1614f59dc448bb1f-merged.mount: Deactivated successfully.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.520 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.521 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.531 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.532 253665 INFO nova.compute.claims [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:30:49 compute-0 podman[351008]: 2025-11-22 09:30:49.640793838 +0000 UTC m=+0.351529033 container cleanup af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:30:49 compute-0 systemd[1]: libpod-conmon-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope: Deactivated successfully.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.678 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 183 op/s
Nov 22 09:30:49 compute-0 podman[351096]: 2025-11-22 09:30:49.823090804 +0000 UTC m=+0.155729363 container remove af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089fa18a-da26-401e-a577-b7a688d4f90c]: (4, ('Sat Nov 22 09:30:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 (af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3)\naf5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3\nSat Nov 22 09:30:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 (af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3)\naf5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fafe959-a1f0-4aa8-b706-3016c114c5f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2246d64b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 kernel: tap2246d64b-e0: left promiscuous mode
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803849.8554833, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Resumed (Lifecycle Event)
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.862 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.863 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd6ab8a-cafe-47b5-8167-76d59dc3273a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.878 253665 INFO nova.virt.libvirt.driver [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance spawned successfully.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.886 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a191bfe7-4bc5-460a-b773-e004de6efe75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73372b24-c987-4ef2-9dab-bd7059364280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.891 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.905 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.906 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803849.859133, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.906 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Started (Lifecycle Event)
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.909 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.909 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.911 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea86c66-477f-4abc-85c7-4b7c7cda12c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669232, 'reachable_time': 33448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351136, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d2246d64b\x2de77a\x2d4784\x2d9fa1\x2dd08726a529e2.mount: Deactivated successfully.
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.918 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:30:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.918 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bd97c9a6-d738-4d2f-b228-f2a66eb0d3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.929 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.934 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.955 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.972 253665 INFO nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 4.12 seconds to spawn the instance on the hypervisor.
Nov 22 09:30:49 compute-0 nova_compute[253661]: 2025-11-22 09:30:49.972 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.044 253665 INFO nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 5.08 seconds to build instance.
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.058 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739822084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.180 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.186 253665 DEBUG nova.compute.provider_tree [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.197 253665 DEBUG nova.scheduler.client.report [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.206 253665 INFO nova.virt.libvirt.driver [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deleting instance files /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe_del
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.206 253665 INFO nova.virt.libvirt.driver [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deletion of /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe_del complete
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.213 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.213 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.263 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.264 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.276 253665 INFO nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 1.18 seconds to destroy the instance on the hypervisor.
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.277 253665 DEBUG oslo.service.loopingcall [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.277 253665 DEBUG nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.278 253665 DEBUG nova.network.neutron [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.283 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.305 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:30:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3739822084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.648 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.649 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.650 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating image(s)
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.674 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.701 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.730 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.736 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.819 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.820 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.820 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.821 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.843 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.848 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:50 compute-0 nova_compute[253661]: 2025-11-22 09:30:50.907 253665 DEBUG nova.policy [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.125 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:30:51 compute-0 sudo[351234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:51 compute-0 sudo[351234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:51 compute-0 sudo[351234]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.225 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:51 compute-0 sudo[351259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:30:51 compute-0 sudo[351259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:51 compute-0 sudo[351259]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.303 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:30:51 compute-0 sudo[351320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:51 compute-0 sudo[351320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:51 compute-0 sudo[351320]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:51 compute-0 sudo[351363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:30:51 compute-0 sudo[351363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:51 compute-0 ceph-mon[75021]: pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 183 op/s
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.463 253665 DEBUG nova.network.neutron [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.485 253665 INFO nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 1.21 seconds to deallocate network for instance.
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.539 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.539 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.647 253665 DEBUG nova.objects.instance [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.657 253665 DEBUG oslo_concurrency.processutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.694 253665 DEBUG nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.696 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.696 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Ensure instance console log exists: /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.697 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.697 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.698 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 142 op/s
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.733 253665 INFO nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] instance snapshotting
Nov 22 09:30:51 compute-0 sudo[351363]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ada2bc9a-612d-493b-82a5-e4f0f902af3c does not exist
Nov 22 09:30:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8add5ee4-36a2-4ee8-afdb-552f95ecf8a2 does not exist
Nov 22 09:30:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a81507dd-aab9-4e28-b25c-6acab796a35c does not exist
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:30:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:30:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.942 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Successfully created port: a13cc417-edce-4c30-a5b0-f90095810bcc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:30:51 compute-0 sudo[351457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:51 compute-0 sudo[351457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:51 compute-0 nova_compute[253661]: 2025-11-22 09:30:51.965 253665 INFO nova.virt.libvirt.driver [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Beginning live snapshot process
Nov 22 09:30:51 compute-0 sudo[351457]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:52 compute-0 sudo[351482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:30:52 compute-0 sudo[351482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:52 compute-0 sudo[351482]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:52 compute-0 sudo[351507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:52 compute-0 sudo[351507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:52 compute-0 sudo[351507]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:30:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781483500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.118 253665 DEBUG oslo_concurrency.processutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.123 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] creating snapshot(850682c18479484b9b5434a2c2332bdc) on rbd image(79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:30:52 compute-0 sudo[351553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:30:52 compute-0 sudo[351553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.164 253665 DEBUG nova.compute.provider_tree [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.176 253665 DEBUG nova.scheduler.client.report [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.191 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.228 253665 INFO nova.scheduler.client.report [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Deleted allocations for instance 1f746354-73cc-421a-9cde-f5b8c2b597fe
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:30:52
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data']
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.282 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.430347776 +0000 UTC m=+0.034254985 container create 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:30:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3781483500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:30:52 compute-0 systemd[1]: Started libpod-conmon-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope.
Nov 22 09:30:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.415593712 +0000 UTC m=+0.019500951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.516758463 +0000 UTC m=+0.120665722 container init 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.523912269 +0000 UTC m=+0.127819488 container start 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.527926038 +0000 UTC m=+0.131833247 container attach 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:30:52 compute-0 focused_hofstadter[351666]: 167 167
Nov 22 09:30:52 compute-0 systemd[1]: libpod-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope: Deactivated successfully.
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.530155742 +0000 UTC m=+0.134062951 container died 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-12c8b59d88b3260689f594bb504ca865fe7160461837affe2aaaef64aae3606d-merged.mount: Deactivated successfully.
Nov 22 09:30:52 compute-0 podman[351649]: 2025-11-22 09:30:52.56828631 +0000 UTC m=+0.172193519 container remove 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:30:52 compute-0 systemd[1]: libpod-conmon-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope: Deactivated successfully.
Nov 22 09:30:52 compute-0 podman[351688]: 2025-11-22 09:30:52.727808997 +0000 UTC m=+0.045110901 container create 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:30:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:30:52 compute-0 systemd[1]: Started libpod-conmon-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope.
Nov 22 09:30:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:52 compute-0 podman[351688]: 2025-11-22 09:30:52.709370323 +0000 UTC m=+0.026672247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:52 compute-0 podman[351688]: 2025-11-22 09:30:52.816833578 +0000 UTC m=+0.134135502 container init 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:30:52 compute-0 podman[351688]: 2025-11-22 09:30:52.823686567 +0000 UTC m=+0.140988471 container start 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:30:52 compute-0 podman[351688]: 2025-11-22 09:30:52.827721577 +0000 UTC m=+0.145023501 container attach 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:30:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 22 09:30:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 22 09:30:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 22 09:30:52 compute-0 nova_compute[253661]: 2025-11-22 09:30:52.976 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] cloning vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk@850682c18479484b9b5434a2c2332bdc to images/1c164693-7d32-4441-9567-92f357c61148 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.097 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] flattening images/1c164693-7d32-4441-9567-92f357c61148 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:30:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 WARNING nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received unexpected event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with vm_state deleted and task_state None.
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-deleted-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.248 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] removing snapshot(850682c18479484b9b5434a2c2332bdc) on rbd image(79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.378 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Successfully updated port: a13cc417-edce-4c30-a5b0-f90095810bcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:30:53 compute-0 ceph-mon[75021]: pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 142 op/s
Nov 22 09:30:53 compute-0 ceph-mon[75021]: osdmap e297: 3 total, 3 up, 3 in
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.510 253665 DEBUG nova.compute.manager [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.511 253665 DEBUG nova.compute.manager [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.511 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:30:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 81 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 201 op/s
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.881 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:30:53 compute-0 youthful_benz[351704]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:30:53 compute-0 youthful_benz[351704]: --> relative data size: 1.0
Nov 22 09:30:53 compute-0 youthful_benz[351704]: --> All data devices are unavailable
Nov 22 09:30:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 22 09:30:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 22 09:30:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 22 09:30:53 compute-0 systemd[1]: libpod-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Deactivated successfully.
Nov 22 09:30:53 compute-0 podman[351688]: 2025-11-22 09:30:53.924520091 +0000 UTC m=+1.241821995 container died 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:30:53 compute-0 systemd[1]: libpod-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Consumed 1.030s CPU time.
Nov 22 09:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca-merged.mount: Deactivated successfully.
Nov 22 09:30:53 compute-0 nova_compute[253661]: 2025-11-22 09:30:53.979 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] creating snapshot(snap) on rbd image(1c164693-7d32-4441-9567-92f357c61148) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:30:53 compute-0 podman[351688]: 2025-11-22 09:30:53.997541279 +0000 UTC m=+1.314843183 container remove 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:30:54 compute-0 systemd[1]: libpod-conmon-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Deactivated successfully.
Nov 22 09:30:54 compute-0 sudo[351553]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:54 compute-0 sudo[351835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:54 compute-0 sudo[351835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:54 compute-0 sudo[351835]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:54 compute-0 sudo[351860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:30:54 compute-0 sudo[351860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:54 compute-0 sudo[351860]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:54 compute-0 sudo[351885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:54 compute-0 sudo[351885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:54 compute-0 sudo[351885]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:54 compute-0 sudo[351910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:30:54 compute-0 sudo[351910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:54 compute-0 nova_compute[253661]: 2025-11-22 09:30:54.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.642648817 +0000 UTC m=+0.047653934 container create 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:30:54 compute-0 systemd[1]: Started libpod-conmon-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope.
Nov 22 09:30:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.625007002 +0000 UTC m=+0.030012139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.723656981 +0000 UTC m=+0.128662118 container init 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.732258732 +0000 UTC m=+0.137263849 container start 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.735740677 +0000 UTC m=+0.140745874 container attach 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:30:54 compute-0 charming_cohen[351991]: 167 167
Nov 22 09:30:54 compute-0 systemd[1]: libpod-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope: Deactivated successfully.
Nov 22 09:30:54 compute-0 conmon[351991]: conmon 45a76bed2164ec80ea36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope/container/memory.events
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.739760076 +0000 UTC m=+0.144765193 container died 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8127c13ecd010438710c7fb20a011c5612db61cf912fbaf7436514cacab70403-merged.mount: Deactivated successfully.
Nov 22 09:30:54 compute-0 podman[351975]: 2025-11-22 09:30:54.779779452 +0000 UTC m=+0.184784589 container remove 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:30:54 compute-0 systemd[1]: libpod-conmon-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope: Deactivated successfully.
Nov 22 09:30:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 22 09:30:54 compute-0 ceph-mon[75021]: pgmap v2058: 305 pgs: 305 active+clean; 81 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 201 op/s
Nov 22 09:30:54 compute-0 ceph-mon[75021]: osdmap e298: 3 total, 3 up, 3 in
Nov 22 09:30:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 22 09:30:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 22 09:30:54 compute-0 podman[352014]: 2025-11-22 09:30:54.961182706 +0000 UTC m=+0.044281650 container create 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:30:55 compute-0 systemd[1]: Started libpod-conmon-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope.
Nov 22 09:30:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:55 compute-0 podman[352014]: 2025-11-22 09:30:54.944495256 +0000 UTC m=+0.027594220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:55 compute-0 podman[352014]: 2025-11-22 09:30:55.04339856 +0000 UTC m=+0.126497524 container init 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:30:55 compute-0 podman[352014]: 2025-11-22 09:30:55.052218607 +0000 UTC m=+0.135317551 container start 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:30:55 compute-0 podman[352014]: 2025-11-22 09:30:55.05641512 +0000 UTC m=+0.139514094 container attach 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.110 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.130 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance network_info: |[{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.131 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.131 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.135 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start _get_guest_xml network_info=[{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.141 253665 WARNING nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.148 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.150 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.157 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.161 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.163 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352264715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.618 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.654 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.660 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:55 compute-0 nova_compute[253661]: 2025-11-22 09:30:55.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 22 09:30:55 compute-0 clever_ride[352030]: {
Nov 22 09:30:55 compute-0 clever_ride[352030]:     "0": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:         {
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "devices": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "/dev/loop3"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             ],
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_name": "ceph_lv0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_size": "21470642176",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "name": "ceph_lv0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "tags": {
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_name": "ceph",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.crush_device_class": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.encrypted": "0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_id": "0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.vdo": "0"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             },
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "vg_name": "ceph_vg0"
Nov 22 09:30:55 compute-0 clever_ride[352030]:         }
Nov 22 09:30:55 compute-0 clever_ride[352030]:     ],
Nov 22 09:30:55 compute-0 clever_ride[352030]:     "1": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:         {
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "devices": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "/dev/loop4"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             ],
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_name": "ceph_lv1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_size": "21470642176",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "name": "ceph_lv1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "tags": {
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_name": "ceph",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.crush_device_class": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.encrypted": "0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_id": "1",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.vdo": "0"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             },
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "vg_name": "ceph_vg1"
Nov 22 09:30:55 compute-0 clever_ride[352030]:         }
Nov 22 09:30:55 compute-0 clever_ride[352030]:     ],
Nov 22 09:30:55 compute-0 clever_ride[352030]:     "2": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:         {
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "devices": [
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "/dev/loop5"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             ],
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_name": "ceph_lv2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_size": "21470642176",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "name": "ceph_lv2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "tags": {
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.cluster_name": "ceph",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.crush_device_class": "",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.encrypted": "0",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osd_id": "2",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:                 "ceph.vdo": "0"
Nov 22 09:30:55 compute-0 clever_ride[352030]:             },
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "type": "block",
Nov 22 09:30:55 compute-0 clever_ride[352030]:             "vg_name": "ceph_vg2"
Nov 22 09:30:55 compute-0 clever_ride[352030]:         }
Nov 22 09:30:55 compute-0 clever_ride[352030]:     ]
Nov 22 09:30:55 compute-0 clever_ride[352030]: }
Nov 22 09:30:55 compute-0 systemd[1]: libpod-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope: Deactivated successfully.
Nov 22 09:30:55 compute-0 ceph-mon[75021]: osdmap e299: 3 total, 3 up, 3 in
Nov 22 09:30:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1352264715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:55 compute-0 podman[352099]: 2025-11-22 09:30:55.960204005 +0000 UTC m=+0.031799144 container died 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b-merged.mount: Deactivated successfully.
Nov 22 09:30:56 compute-0 podman[352099]: 2025-11-22 09:30:56.012614785 +0000 UTC m=+0.084209864 container remove 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 09:30:56 compute-0 systemd[1]: libpod-conmon-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope: Deactivated successfully.
Nov 22 09:30:56 compute-0 sudo[351910]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:56 compute-0 sudo[352112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:56 compute-0 sudo[352112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:30:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279700360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:56 compute-0 sudo[352112]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.162 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.163 253665 DEBUG nova.virt.libvirt.vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:50Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.164 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.164 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.166 253665 DEBUG nova.objects.instance [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.182 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <uuid>84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</uuid>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <name>instance-00000060</name>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-458402395</nova:name>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:30:55</nova:creationTime>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <nova:port uuid="a13cc417-edce-4c30-a5b0-f90095810bcc">
Nov 22 09:30:56 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="serial">84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="uuid">84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk">
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config">
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:30:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:55:4e:e1"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <target dev="tapa13cc417-ed"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/console.log" append="off"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:30:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:30:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:30:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:30:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:30:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Preparing to wait for external event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG nova.virt.libvirt.vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:50Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.185 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.185 253665 DEBUG os_vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa13cc417-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa13cc417-ed, col_values=(('external_ids', {'iface-id': 'a13cc417-edce-4c30-a5b0-f90095810bcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:4e:e1', 'vm-uuid': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:56 compute-0 NetworkManager[48920]: <info>  [1763803856.1930] manager: (tapa13cc417-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:30:56 compute-0 sudo[352139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:30:56 compute-0 sudo[352139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:56 compute-0 sudo[352139]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.203 253665 INFO os_vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed')
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:55:4e:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.257 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Using config drive
Nov 22 09:30:56 compute-0 sudo[352166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:56 compute-0 sudo[352166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:56 compute-0 sudo[352166]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.283 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:56 compute-0 sudo[352207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:30:56 compute-0 sudo[352207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.405 253665 INFO nova.virt.libvirt.driver [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Snapshot image upload complete
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.405 253665 INFO nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 4.67 seconds to snapshot the instance on the hypervisor.
Nov 22 09:30:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:30:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:30:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:30:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:30:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.708136094 +0000 UTC m=+0.045859650 container create 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:30:56 compute-0 systemd[1]: Started libpod-conmon-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope.
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.685930497 +0000 UTC m=+0.023654063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.802205499 +0000 UTC m=+0.139929065 container init 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.810191176 +0000 UTC m=+0.147914722 container start 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.813710403 +0000 UTC m=+0.151433979 container attach 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:30:56 compute-0 upbeat_hellman[352290]: 167 167
Nov 22 09:30:56 compute-0 systemd[1]: libpod-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope: Deactivated successfully.
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.817404453 +0000 UTC m=+0.155127999 container died 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-77d672e37cc3640753e4e38d7a5bdd806e102d3855338c3202f7b0440bfe2b40-merged.mount: Deactivated successfully.
Nov 22 09:30:56 compute-0 podman[352274]: 2025-11-22 09:30:56.85424854 +0000 UTC m=+0.191972086 container remove 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:30:56 compute-0 systemd[1]: libpod-conmon-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope: Deactivated successfully.
Nov 22 09:30:56 compute-0 ceph-mon[75021]: pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 22 09:30:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2279700360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.953 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.956 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:30:56 compute-0 nova_compute[253661]: 2025-11-22 09:30:56.974 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:30:57 compute-0 podman[352314]: 2025-11-22 09:30:57.026710905 +0000 UTC m=+0.040143759 container create d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:30:57 compute-0 systemd[1]: Started libpod-conmon-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope.
Nov 22 09:30:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:57 compute-0 podman[352314]: 2025-11-22 09:30:57.009405789 +0000 UTC m=+0.022838663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.107 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating config drive at /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.117 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt4dvnon execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:57 compute-0 podman[352314]: 2025-11-22 09:30:57.214075137 +0000 UTC m=+0.227508021 container init d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:30:57 compute-0 podman[352314]: 2025-11-22 09:30:57.224843521 +0000 UTC m=+0.238276385 container start d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:30:57 compute-0 podman[352314]: 2025-11-22 09:30:57.228907361 +0000 UTC m=+0.242340215 container attach d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.287 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt4dvnon" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.330 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.336 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.559 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.561 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deleting local config drive /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config because it was imported into RBD.
Nov 22 09:30:57 compute-0 kernel: tapa13cc417-ed: entered promiscuous mode
Nov 22 09:30:57 compute-0 NetworkManager[48920]: <info>  [1763803857.6319] manager: (tapa13cc417-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Nov 22 09:30:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:30:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9208 writes, 41K keys, 9208 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 9208 writes, 9208 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1611 writes, 7208 keys, 1611 commit groups, 1.0 writes per commit group, ingest: 9.66 MB, 0.02 MB/s
                                           Interval WAL: 1611 writes, 1611 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     43.3      1.07              0.17        25    0.043       0      0       0.0       0.0
                                             L6      1/0    9.49 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0     88.1     73.4      2.53              0.56        24    0.105    129K    13K       0.0       0.0
                                            Sum      1/0    9.49 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     61.8     64.4      3.60              0.72        49    0.073    129K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     58.2     58.5      0.88              0.15        10    0.088     33K   2571       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     88.1     73.4      2.53              0.56        24    0.105    129K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     43.4      1.07              0.17        24    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.045, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 3.6 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 26.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000228 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1739,25.51 MB,8.39252%) FilterBlock(50,368.36 KB,0.118331%) IndexBlock(50,641.23 KB,0.205989%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.679 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:57 compute-0 systemd-udevd[352385]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:30:57 compute-0 ovn_controller[152872]: 2025-11-22T09:30:57Z|00958|binding|INFO|Claiming lport a13cc417-edce-4c30-a5b0-f90095810bcc for this chassis.
Nov 22 09:30:57 compute-0 ovn_controller[152872]: 2025-11-22T09:30:57Z|00959|binding|INFO|a13cc417-edce-4c30-a5b0-f90095810bcc: Claiming fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 09:30:57 compute-0 NetworkManager[48920]: <info>  [1763803857.7012] device (tapa13cc417-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:30:57 compute-0 NetworkManager[48920]: <info>  [1763803857.7032] device (tapa13cc417-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.705 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:4e:e1 10.100.0.10'], port_security=['fa:16:3e:55:4e:e1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcad6c6-374a-4697-ae00-916836e6498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0dea6476-4fee-41aa-8572-212b34cd06a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79997ef2-17e2-4f21-8229-7c6dd79ef3c8, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a13cc417-edce-4c30-a5b0-f90095810bcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.707 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a13cc417-edce-4c30-a5b0-f90095810bcc in datapath 7bcad6c6-374a-4697-ae00-916836e6498e bound to our chassis
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.708 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bcad6c6-374a-4697-ae00-916836e6498e
Nov 22 09:30:57 compute-0 systemd-machined[215941]: New machine qemu-116-instance-00000060.
Nov 22 09:30:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 232 op/s
Nov 22 09:30:57 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-00000060.
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3778f7f0-095a-440d-9c95-f0f7bf2d5760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.739 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bcad6c6-31 in ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.741 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bcad6c6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b18e826-b2cd-4bd8-9978-647670840261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cba63b-b62b-4539-b7ff-befba8650f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.761 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b7492497-8e14-484f-be89-6cd41895f5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_controller[152872]: 2025-11-22T09:30:57Z|00960|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc ovn-installed in OVS
Nov 22 09:30:57 compute-0 ovn_controller[152872]: 2025-11-22T09:30:57Z|00961|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc up in Southbound
Nov 22 09:30:57 compute-0 nova_compute[253661]: 2025-11-22 09:30:57.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bddcc82c-2669-4594-9792-25e1e82ca5d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.824 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4ad525-d7a5-402a-8a27-c9bf9f7c1709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4300a0-a9df-46e2-ae2a-2df469d9dc8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 NetworkManager[48920]: <info>  [1763803857.8308] manager: (tap7bcad6c6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Nov 22 09:30:57 compute-0 systemd-udevd[352389]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.874 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d58490f1-baa6-4ca6-9243-707ebfcec7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.878 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43915999-921c-491c-8744-50c81441dd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 NetworkManager[48920]: <info>  [1763803857.9038] device (tap7bcad6c6-30): carrier: link connected
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.910 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dab34b28-1687-41e4-b92a-054a61b53953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ada9dc-827d-4da9-9884-7a9c25a44210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bcad6c6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:b9:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670864, 'reachable_time': 42742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352428, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf92b627-d239-4874-bd13-22bc6782427f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:b9c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670864, 'tstamp': 670864}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352441, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.988 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1370b931-0731-44b4-9d2d-1bb3f797aaae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bcad6c6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:b9:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670864, 'reachable_time': 42742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352457, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.030 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[472bb6f7-dc51-4b97-ac7e-fd36ada6d4ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.047 253665 DEBUG nova.compute.manager [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.049 253665 DEBUG nova.compute.manager [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Processing event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.107 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1068766, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.107 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Started (Lifecycle Event)
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17514f47-c61e-4bbd-8e5b-13e99d2598b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bcad6c6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.110 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bcad6c6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:58 compute-0 NetworkManager[48920]: <info>  [1763803858.1129] manager: (tap7bcad6c6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 22 09:30:58 compute-0 kernel: tap7bcad6c6-30: entered promiscuous mode
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.116 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bcad6c6-30, col_values=(('external_ids', {'iface-id': 'fbfcd9c4-6397-42df-bb55-2a7d60f24e6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:30:58 compute-0 ovn_controller[152872]: 2025-11-22T09:30:58Z|00962|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.119 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.119 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.120 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc311b-7795-4be7-a54b-434183e80c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.121 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-7bcad6c6-374a-4697-ae00-916836e6498e
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 7bcad6c6-374a-4697-ae00-916836e6498e
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:30:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.122 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'env', 'PROCESS_TAG=haproxy-7bcad6c6-374a-4697-ae00-916836e6498e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bcad6c6-374a-4697-ae00-916836e6498e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.123 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.132 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.136 253665 INFO nova.virt.libvirt.driver [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance spawned successfully.
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.136 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.139 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.169 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.169 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1184814, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.170 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Paused (Lifecycle Event)
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.198 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.202 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1226156, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.203 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Resumed (Lifecycle Event)
Nov 22 09:30:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.231 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.235 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.245 253665 INFO nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 7.60 seconds to spawn the instance on the hypervisor.
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.245 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.256 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:30:58 compute-0 exciting_cannon[352330]: {
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_id": 1,
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "type": "bluestore"
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     },
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_id": 0,
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "type": "bluestore"
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     },
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_id": 2,
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:         "type": "bluestore"
Nov 22 09:30:58 compute-0 exciting_cannon[352330]:     }
Nov 22 09:30:58 compute-0 exciting_cannon[352330]: }
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.306 253665 INFO nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 8.82 seconds to build instance.
Nov 22 09:30:58 compute-0 nova_compute[253661]: 2025-11-22 09:30:58.319 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:58 compute-0 systemd[1]: libpod-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Deactivated successfully.
Nov 22 09:30:58 compute-0 systemd[1]: libpod-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Consumed 1.073s CPU time.
Nov 22 09:30:58 compute-0 podman[352314]: 2025-11-22 09:30:58.329834578 +0000 UTC m=+1.343267442 container died d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae-merged.mount: Deactivated successfully.
Nov 22 09:30:58 compute-0 podman[352314]: 2025-11-22 09:30:58.402959498 +0000 UTC m=+1.416392352 container remove d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:30:58 compute-0 systemd[1]: libpod-conmon-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Deactivated successfully.
Nov 22 09:30:58 compute-0 sudo[352207]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:30:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:30:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a9483270-dc8b-4b13-8780-3406264d4fea does not exist
Nov 22 09:30:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b43c6996-37f7-4954-a814-c2440bd2432a does not exist
Nov 22 09:30:58 compute-0 sudo[352535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:30:58 compute-0 sudo[352535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:58 compute-0 sudo[352535]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:58 compute-0 podman[352536]: 2025-11-22 09:30:58.526620952 +0000 UTC m=+0.055478807 container create eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:30:58 compute-0 systemd[1]: Started libpod-conmon-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope.
Nov 22 09:30:58 compute-0 sudo[352570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:30:58 compute-0 sudo[352570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:30:58 compute-0 sudo[352570]: pam_unix(sudo:session): session closed for user root
Nov 22 09:30:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:30:58 compute-0 podman[352536]: 2025-11-22 09:30:58.497826212 +0000 UTC m=+0.026684087 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b23d174e80fe906250cd93b44fdfabff1fe6ae03ff6df4bfad6521ddf56c356/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:30:58 compute-0 podman[352536]: 2025-11-22 09:30:58.613639573 +0000 UTC m=+0.142497448 container init eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:30:58 compute-0 podman[352536]: 2025-11-22 09:30:58.618850882 +0000 UTC m=+0.147708727 container start eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:30:58 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : New worker (352603) forked
Nov 22 09:30:58 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : Loading success.
Nov 22 09:30:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 22 09:30:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 22 09:30:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 22 09:30:59 compute-0 ceph-mon[75021]: pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 232 op/s
Nov 22 09:30:59 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:59 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:30:59 compute-0 ceph-mon[75021]: osdmap e300: 3 total, 3 up, 3 in
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.719 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.719 253665 INFO nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Terminating instance
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquired lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:30:59 compute-0 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:30:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 231 op/s
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.411 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.572 253665 DEBUG nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.572 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.574 253665 WARNING nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received unexpected event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with vm_state active and task_state None.
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.787 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.797 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Releasing lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:00 compute-0 nova_compute[253661]: 2025-11-22 09:31:00.798 253665 DEBUG nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:31:00 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 22 09:31:00 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005f.scope: Consumed 1.883s CPU time.
Nov 22 09:31:00 compute-0 systemd-machined[215941]: Machine qemu-115-instance-0000005f terminated.
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.035 253665 INFO nova.virt.libvirt.driver [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance destroyed successfully.
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.036 253665 DEBUG nova.objects.instance [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'resources' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 22 09:31:01 compute-0 ceph-mon[75021]: pgmap v2064: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 231 op/s
Nov 22 09:31:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 22 09:31:01 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 22 09:31:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 KiB/s wr, 167 op/s
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.789 253665 INFO nova.virt.libvirt.driver [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deleting instance files /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_del
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.790 253665 INFO nova.virt.libvirt.driver [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deletion of /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_del complete
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.854 253665 INFO nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 1.06 seconds to destroy the instance on the hypervisor.
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.854 253665 DEBUG oslo.service.loopingcall [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.855 253665 DEBUG nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:31:01 compute-0 nova_compute[253661]: 2025-11-22 09:31:01.855 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.102 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.113 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.127 253665 INFO nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 0.27 seconds to deallocate network for instance.
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.183 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.183 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.280 253665 DEBUG oslo_concurrency.processutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:02 compute-0 ovn_controller[152872]: 2025-11-22T09:31:02Z|00963|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:02 compute-0 NetworkManager[48920]: <info>  [1763803862.3443] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 22 09:31:02 compute-0 NetworkManager[48920]: <info>  [1763803862.3468] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 22 09:31:02 compute-0 ovn_controller[152872]: 2025-11-22T09:31:02Z|00964|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:02 compute-0 ceph-mon[75021]: osdmap e301: 3 total, 3 up, 3 in
Nov 22 09:31:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457195015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.722 253665 DEBUG oslo_concurrency.processutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.730 253665 DEBUG nova.compute.provider_tree [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.745 253665 DEBUG nova.scheduler.client.report [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.765 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.790 253665 INFO nova.scheduler.client.report [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Deleted allocations for instance 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003486042602721886 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665577993748779 of space, bias 1.0, pg target 0.1999673398124634 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:31:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:31:02 compute-0 nova_compute[253661]: 2025-11-22 09:31:02.858 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:03 compute-0 nova_compute[253661]: 2025-11-22 09:31:03.129 253665 DEBUG nova.compute.manager [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:03 compute-0 nova_compute[253661]: 2025-11-22 09:31:03.130 253665 DEBUG nova.compute.manager [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:31:03 compute-0 nova_compute[253661]: 2025-11-22 09:31:03.131 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:03 compute-0 nova_compute[253661]: 2025-11-22 09:31:03.131 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:03 compute-0 nova_compute[253661]: 2025-11-22 09:31:03.132 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:31:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 22 09:31:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 22 09:31:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 22 09:31:03 compute-0 ceph-mon[75021]: pgmap v2066: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 KiB/s wr, 167 op/s
Nov 22 09:31:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2457195015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:03 compute-0 ceph-mon[75021]: osdmap e302: 3 total, 3 up, 3 in
Nov 22 09:31:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 33 KiB/s wr, 300 op/s
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.339 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803849.3387444, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.340 253665 INFO nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Stopped (Lifecycle Event)
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.360 253665 DEBUG nova.compute.manager [None req-9ae03645-ac1d-40cc-bcd5-83bb35193e82 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.908 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.908 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:04 compute-0 nova_compute[253661]: 2025-11-22 09:31:04.924 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:05 compute-0 ceph-mon[75021]: pgmap v2068: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 33 KiB/s wr, 300 op/s
Nov 22 09:31:05 compute-0 nova_compute[253661]: 2025-11-22 09:31:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 247 op/s
Nov 22 09:31:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:05.744 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:05 compute-0 nova_compute[253661]: 2025-11-22 09:31:05.745 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:05.746 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:31:06 compute-0 nova_compute[253661]: 2025-11-22 09:31:06.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:06 compute-0 nova_compute[253661]: 2025-11-22 09:31:06.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:07 compute-0 ceph-mon[75021]: pgmap v2069: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 247 op/s
Nov 22 09:31:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 09:31:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 22 09:31:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 22 09:31:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 22 09:31:08 compute-0 nova_compute[253661]: 2025-11-22 09:31:08.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:08 compute-0 ceph-mon[75021]: pgmap v2070: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 09:31:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 09:31:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:09.749 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:10 compute-0 ceph-mon[75021]: osdmap e303: 3 total, 3 up, 3 in
Nov 22 09:31:10 compute-0 nova_compute[253661]: 2025-11-22 09:31:10.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:11 compute-0 ceph-mon[75021]: pgmap v2072: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 09:31:11 compute-0 nova_compute[253661]: 2025-11-22 09:31:11.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 120 B/s wr, 12 op/s
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.241 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.242 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.261 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.342 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.342 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.353 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.354 253665 INFO nova.compute.claims [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:31:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:31:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:31:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:31:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:31:12 compute-0 podman[352658]: 2025-11-22 09:31:12.412285488 +0000 UTC m=+0.090753295 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:31:12 compute-0 ovn_controller[152872]: 2025-11-22T09:31:12Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 09:31:12 compute-0 ovn_controller[152872]: 2025-11-22T09:31:12Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 09:31:12 compute-0 podman[352659]: 2025-11-22 09:31:12.424190551 +0000 UTC m=+0.088342845 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:31:12 compute-0 nova_compute[253661]: 2025-11-22 09:31:12.488 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726041466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.021 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:13 compute-0 ceph-mon[75021]: pgmap v2073: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 120 B/s wr, 12 op/s
Nov 22 09:31:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:31:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:31:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/726041466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.030 253665 DEBUG nova.compute.provider_tree [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.055 253665 DEBUG nova.scheduler.client.report [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.080 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.081 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.126 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.127 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.146 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.160 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.230 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.232 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.233 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating image(s)
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.270 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.306 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.334 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.340 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.375 253665 DEBUG nova.policy [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ab3f37df3674a13a02926c1e3d79bbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a426656c0559412895fd288e6aaaf579', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.411 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.412 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.412 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.413 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.440 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.444 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 104 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.867 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.940 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] resizing rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:31:13 compute-0 nova_compute[253661]: 2025-11-22 09:31:13.972 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Successfully created port: 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.043 253665 DEBUG nova.objects.instance [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.057 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Ensure instance console log exists: /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:14 compute-0 nova_compute[253661]: 2025-11-22 09:31:14.059 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:15 compute-0 ceph-mon[75021]: pgmap v2074: 305 pgs: 305 active+clean; 104 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.507 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Successfully updated port: 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.524 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.524 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquired lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.525 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.584 253665 DEBUG nova.compute.manager [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-changed-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.584 253665 DEBUG nova.compute.manager [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Refreshing instance network info cache due to event network-changed-4509527d-ccca-4d6f-96b5-cce2f7e28b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.585 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.657 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:31:15 compute-0 nova_compute[253661]: 2025-11-22 09:31:15.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 142 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.033 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803861.030466, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.034 253665 INFO nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Stopped (Lifecycle Event)
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.057 253665 DEBUG nova.compute.manager [None req-8c29c1fd-74be-4a79-8c8f-6dc6ccbaa785 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.620 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.762 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Releasing lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.762 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance network_info: |[{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.764 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.765 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Refreshing network info cache for port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.769 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start _get_guest_xml network_info=[{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.776 253665 WARNING nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.785 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.787 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.793 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.793 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.794 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.794 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.795 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:31:16 compute-0 nova_compute[253661]: 2025-11-22 09:31:16.801 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:17 compute-0 ceph-mon[75021]: pgmap v2075: 305 pgs: 305 active+clean; 142 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Nov 22 09:31:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495932053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.299 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.326 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.332 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 150 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 4.1 MiB/s wr, 80 op/s
Nov 22 09:31:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913590761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.804 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.808 253665 DEBUG nova.virt.libvirt.vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTestJSON-1791552166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:13Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.809 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.811 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.814 253665 DEBUG nova.objects.instance [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.832 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <uuid>0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</uuid>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <name>instance-00000061</name>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerPasswordTestJSON-server-105000375</nova:name>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:31:16</nova:creationTime>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:user uuid="2ab3f37df3674a13a02926c1e3d79bbf">tempest-ServerPasswordTestJSON-1791552166-project-member</nova:user>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:project uuid="a426656c0559412895fd288e6aaaf579">tempest-ServerPasswordTestJSON-1791552166</nova:project>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <nova:port uuid="4509527d-ccca-4d6f-96b5-cce2f7e28b54">
Nov 22 09:31:17 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <system>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="serial">0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="uuid">0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </system>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <os>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </os>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <features>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </features>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk">
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config">
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:17 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:a0:86:b8"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <target dev="tap4509527d-cc"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/console.log" append="off"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <video>
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </video>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:31:17 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:31:17 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:31:17 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:31:17 compute-0 nova_compute[253661]: </domain>
Nov 22 09:31:17 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Preparing to wait for external event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.834 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.834 253665 DEBUG nova.virt.libvirt.vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTestJSON-1791552166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:13Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.835 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.835 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.836 253665 DEBUG os_vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.843 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4509527d-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.843 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4509527d-cc, col_values=(('external_ids', {'iface-id': '4509527d-ccca-4d6f-96b5-cce2f7e28b54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:86:b8', 'vm-uuid': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:17 compute-0 NetworkManager[48920]: <info>  [1763803877.8760] manager: (tap4509527d-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.889 253665 INFO os_vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc')
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.944 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.945 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.945 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No VIF found with MAC fa:16:3e:a0:86:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.946 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Using config drive
Nov 22 09:31:17 compute-0 nova_compute[253661]: 2025-11-22 09:31:17.970 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1495932053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1913590761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:18 compute-0 nova_compute[253661]: 2025-11-22 09:31:18.864 253665 INFO nova.compute.manager [None req-527e9c15-ed44-49eb-8eed-403288cc1b74 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output
Nov 22 09:31:18 compute-0 nova_compute[253661]: 2025-11-22 09:31:18.872 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.038 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating config drive at /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.046 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjr3d7dx0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:19 compute-0 ceph-mon[75021]: pgmap v2076: 305 pgs: 305 active+clean; 150 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 4.1 MiB/s wr, 80 op/s
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.140 253665 INFO nova.compute.manager [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Pausing
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.142 253665 DEBUG nova.objects.instance [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.178 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803879.1781137, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.179 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Paused (Lifecycle Event)
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.184 253665 DEBUG nova.compute.manager [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.206 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjr3d7dx0" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.241 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.247 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.312 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.451 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.453 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deleting local config drive /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config because it was imported into RBD.
Nov 22 09:31:19 compute-0 kernel: tap4509527d-cc: entered promiscuous mode
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.5203] manager: (tap4509527d-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 22 09:31:19 compute-0 ovn_controller[152872]: 2025-11-22T09:31:19Z|00965|binding|INFO|Claiming lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 for this chassis.
Nov 22 09:31:19 compute-0 ovn_controller[152872]: 2025-11-22T09:31:19Z|00966|binding|INFO|4509527d-ccca-4d6f-96b5-cce2f7e28b54: Claiming fa:16:3e:a0:86:b8 10.100.0.7
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.534 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:86:b8 10.100.0.7'], port_security=['fa:16:3e:a0:86:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a426656c0559412895fd288e6aaaf579', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99ba02ce-3868-4329-a8bd-a035b539a697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac22de64-af49-4fe8-9780-139ebea6ab18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4509527d-ccca-4d6f-96b5-cce2f7e28b54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.535 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 in datapath 6fefd6ab-3d4a-489e-983f-f6640c22be71 bound to our chassis
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.537 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 09:31:19 compute-0 ovn_controller[152872]: 2025-11-22T09:31:19Z|00967|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 ovn-installed in OVS
Nov 22 09:31:19 compute-0 ovn_controller[152872]: 2025-11-22T09:31:19Z|00968|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 up in Southbound
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[72eed5f7-fdda-4ba1-971b-29f48cddff4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.557 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fefd6ab-31 in ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.559 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fefd6ab-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.559 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5737082d-8510-4f4d-9152-2a3803a833c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14497148-052f-4841-a864-11e7a3b552f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 systemd-udevd[353033]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.575 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[efcb763b-8e62-4e83-ae84-e6d558e35cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 systemd-machined[215941]: New machine qemu-117-instance-00000061.
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.5960] device (tap4509527d-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.5970] device (tap4509527d-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.595 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af268434-03ef-4d7c-a8aa-48655b74e6af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-00000061.
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.639 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c772d9-eb5c-426d-864c-494f131045b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b56244e6-73d4-48d6-adfb-ef770866c439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.6510] manager: (tap6fefd6ab-30): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 22 09:31:19 compute-0 podman[353021]: 2025-11-22 09:31:19.693717474 +0000 UTC m=+0.136130621 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.697 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updated VIF entry in instance network info cache for port 4509527d-ccca-4d6f-96b5-cce2f7e28b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.698 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.708 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a56fc6-19cc-4c4d-818f-24ccbe5228c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.713 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b88fe78-2540-4a8f-9581-e0ebf54aea70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.720 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.7456] device (tap6fefd6ab-30): carrier: link connected
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.752 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[367f14d0-cf11-41fb-833f-a6db6197e757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4938be7-a6df-4a67-b758-b27ca93124f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fefd6ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:04:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673048, 'reachable_time': 27824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353085, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.798 253665 DEBUG nova.compute.manager [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG nova.compute.manager [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Processing event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f40c8423-089b-4671-a532-351e92d67e42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:43f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673048, 'tstamp': 673048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353086, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3c26ed-e3fd-4392-8933-ad95baa2ef8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fefd6ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:04:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673048, 'reachable_time': 27824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353087, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91ce825a-674e-4ba9-902e-82037fb95298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.940 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc91f9b1-7303-4307-8a36-643f5d56301c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.943 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fefd6ab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.943 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.944 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fefd6ab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 NetworkManager[48920]: <info>  [1763803879.9475] manager: (tap6fefd6ab-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 22 09:31:19 compute-0 kernel: tap6fefd6ab-30: entered promiscuous mode
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.951 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.953 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fefd6ab-30, col_values=(('external_ids', {'iface-id': 'e1825837-b7b8-471c-82f1-66c1fd37dbe3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 ovn_controller[152872]: 2025-11-22T09:31:19Z|00969|binding|INFO|Releasing lport e1825837-b7b8-471c-82f1-66c1fd37dbe3 from this chassis (sb_readonly=0)
Nov 22 09:31:19 compute-0 nova_compute[253661]: 2025-11-22 09:31:19.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.985 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[343d47cb-b466-4dee-aa69-5caefe12858b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.987 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:31:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.988 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'env', 'PROCESS_TAG=haproxy-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fefd6ab-3d4a-489e-983f-f6640c22be71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.298 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.299 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.298701, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.299 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Started (Lifecycle Event)
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.303 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.307 253665 INFO nova.virt.libvirt.driver [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance spawned successfully.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.307 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.321 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.327 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.332 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.332 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.334 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.355 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.355 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.2987957, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.356 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Paused (Lifecycle Event)
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.378 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.3027217, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.384 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Resumed (Lifecycle Event)
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.393 253665 INFO nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 7.16 seconds to spawn the instance on the hypervisor.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.393 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:20 compute-0 podman[353161]: 2025-11-22 09:31:20.396687216 +0000 UTC m=+0.062113411 container create 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.414 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.420 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:31:20 compute-0 systemd[1]: Started libpod-conmon-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.452 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:31:20 compute-0 podman[353161]: 2025-11-22 09:31:20.361228233 +0000 UTC m=+0.026654458 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.464 253665 INFO nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 8.16 seconds to build instance.
Nov 22 09:31:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.478 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f7e4e3891923fc209f6674be94c109149ed464a00eaa181a40f266a7eb43fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:31:20 compute-0 podman[353161]: 2025-11-22 09:31:20.50005074 +0000 UTC m=+0.165477025 container init 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:31:20 compute-0 podman[353161]: 2025-11-22 09:31:20.508131658 +0000 UTC m=+0.173557883 container start 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:31:20 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : New worker (353182) forked
Nov 22 09:31:20 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : Loading success.
Nov 22 09:31:20 compute-0 nova_compute[253661]: 2025-11-22 09:31:20.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:21 compute-0 ceph-mon[75021]: pgmap v2077: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Nov 22 09:31:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.010 253665 DEBUG nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.013 253665 WARNING nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received unexpected event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with vm_state active and task_state None.
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.494 253665 INFO nova.compute.manager [None req-9fd2039b-aeef-420d-95c3-64b8b8de0832 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.502 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.731 253665 INFO nova.compute.manager [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Unpausing
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.732 253665 DEBUG nova.objects.instance [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.733 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.733 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.736 253665 INFO nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Terminating instance
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.739 253665 DEBUG nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.764 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803882.7639353, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.764 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Resumed (Lifecycle Event)
Nov 22 09:31:22 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.770 253665 DEBUG nova.virt.libvirt.guest [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.770 253665 DEBUG nova.compute.manager [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.781 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.784 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:31:22 compute-0 kernel: tap4509527d-cc (unregistering): left promiscuous mode
Nov 22 09:31:22 compute-0 NetworkManager[48920]: <info>  [1763803882.7909] device (tap4509527d-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.805 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:22 compute-0 ovn_controller[152872]: 2025-11-22T09:31:22Z|00970|binding|INFO|Releasing lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 from this chassis (sb_readonly=0)
Nov 22 09:31:22 compute-0 ovn_controller[152872]: 2025-11-22T09:31:22Z|00971|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 down in Southbound
Nov 22 09:31:22 compute-0 ovn_controller[152872]: 2025-11-22T09:31:22Z|00972|binding|INFO|Removing iface tap4509527d-cc ovn-installed in OVS
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.808 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.814 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:86:b8 10.100.0.7'], port_security=['fa:16:3e:a0:86:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a426656c0559412895fd288e6aaaf579', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99ba02ce-3868-4329-a8bd-a035b539a697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac22de64-af49-4fe8-9780-139ebea6ab18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4509527d-ccca-4d6f-96b5-cce2f7e28b54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.815 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 in datapath 6fefd6ab-3d4a-489e-983f-f6640c22be71 unbound from our chassis
Nov 22 09:31:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.817 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fefd6ab-3d4a-489e-983f-f6640c22be71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:31:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.818 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8463fd98-43e9-49fd-8f16-e605f0474694]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.819 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 namespace which is not needed anymore
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:22 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 22 09:31:22 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000061.scope: Consumed 3.257s CPU time.
Nov 22 09:31:22 compute-0 systemd-machined[215941]: Machine qemu-117-instance-00000061 terminated.
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:22 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : haproxy version is 2.8.14-c23fe91
Nov 22 09:31:22 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : path to executable is /usr/sbin/haproxy
Nov 22 09:31:22 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [WARNING]  (353180) : Exiting Master process...
Nov 22 09:31:22 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [ALERT]    (353180) : Current worker (353182) exited with code 143 (Terminated)
Nov 22 09:31:22 compute-0 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [WARNING]  (353180) : All workers exited. Exiting... (0)
Nov 22 09:31:22 compute-0 systemd[1]: libpod-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope: Deactivated successfully.
Nov 22 09:31:22 compute-0 podman[353214]: 2025-11-22 09:31:22.970657578 +0000 UTC m=+0.052203616 container died 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance destroyed successfully.
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.987 253665 DEBUG nova.objects.instance [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'resources' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.997 253665 DEBUG nova.virt.libvirt.vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:31:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTestJSON-1791552166-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:31:22Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:31:22 compute-0 nova_compute[253661]: 2025-11-22 09:31:22.998 253665 DEBUG nova.network.os_vif_util [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.000 253665 DEBUG nova.network.os_vif_util [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.001 253665 DEBUG os_vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4-userdata-shm.mount: Deactivated successfully.
Nov 22 09:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f7e4e3891923fc209f6674be94c109149ed464a00eaa181a40f266a7eb43fa-merged.mount: Deactivated successfully.
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4509527d-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.019 253665 INFO os_vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc')
Nov 22 09:31:23 compute-0 podman[353214]: 2025-11-22 09:31:23.028172593 +0000 UTC m=+0.109718631 container cleanup 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:31:23 compute-0 systemd[1]: libpod-conmon-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope: Deactivated successfully.
Nov 22 09:31:23 compute-0 podman[353269]: 2025-11-22 09:31:23.100932744 +0000 UTC m=+0.051803756 container remove 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf914db-e836-43f9-b2ae-d513bf09ca1e]: (4, ('Sat Nov 22 09:31:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 (3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4)\n3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4\nSat Nov 22 09:31:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 (3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4)\n3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4459dd-67fa-4447-bc9d-11442973e404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.114 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fefd6ab-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:23 compute-0 kernel: tap6fefd6ab-30: left promiscuous mode
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.135 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fff1ddaa-6238-4c5b-9cf4-08f1d518cd7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa522aa-0f9e-475c-b513-8a79478da663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75bbd83-a294-4146-9e3d-684cbb444e10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c05b2ad-a5e2-4cb2-9ed8-beecfbd8e282]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673037, 'reachable_time': 39814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353287, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d6fefd6ab\x2d3d4a\x2d489e\x2d983f\x2df6640c22be71.mount: Deactivated successfully.
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.174 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:31:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.175 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd20e7e-29f7-48c0-b23a-d420c670fd11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:23 compute-0 ceph-mon[75021]: pgmap v2078: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.481 253665 INFO nova.virt.libvirt.driver [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deleting instance files /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_del
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.482 253665 INFO nova.virt.libvirt.driver [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deletion of /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_del complete
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.586 253665 INFO nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.586 253665 DEBUG oslo.service.loopingcall [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.587 253665 DEBUG nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:31:23 compute-0 nova_compute[253661]: 2025-11-22 09:31:23.587 253665 DEBUG nova.network.neutron [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:31:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.074 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.075 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.075 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:31:25 compute-0 ceph-mon[75021]: pgmap v2079: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.283 253665 DEBUG nova.network.neutron [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.300 253665 INFO nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 1.71 seconds to deallocate network for instance.
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.347 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.348 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.469 253665 DEBUG oslo_concurrency.processutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.589 253665 INFO nova.compute.manager [None req-b22f5b8f-6e06-4c5b-9d3d-02fa0a0d0d9f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.600 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 145 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 149 op/s
Nov 22 09:31:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184846593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.966 253665 DEBUG oslo_concurrency.processutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.975 253665 DEBUG nova.compute.provider_tree [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:25 compute-0 nova_compute[253661]: 2025-11-22 09:31:25.988 253665 DEBUG nova.scheduler.client.report [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.009 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.042 253665 INFO nova.scheduler.client.report [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Deleted allocations for instance 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.135 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4184846593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.784 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.785 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.785 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.786 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.786 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.787 253665 INFO nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Terminating instance
Nov 22 09:31:26 compute-0 nova_compute[253661]: 2025-11-22 09:31:26.788 253665 DEBUG nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:31:27 compute-0 kernel: tapa13cc417-ed (unregistering): left promiscuous mode
Nov 22 09:31:27 compute-0 NetworkManager[48920]: <info>  [1763803887.0519] device (tapa13cc417-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:31:27 compute-0 ovn_controller[152872]: 2025-11-22T09:31:27Z|00973|binding|INFO|Releasing lport a13cc417-edce-4c30-a5b0-f90095810bcc from this chassis (sb_readonly=0)
Nov 22 09:31:27 compute-0 ovn_controller[152872]: 2025-11-22T09:31:27Z|00974|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc down in Southbound
Nov 22 09:31:27 compute-0 ovn_controller[152872]: 2025-11-22T09:31:27Z|00975|binding|INFO|Removing iface tapa13cc417-ed ovn-installed in OVS
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.075 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:4e:e1 10.100.0.10'], port_security=['fa:16:3e:55:4e:e1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcad6c6-374a-4697-ae00-916836e6498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0dea6476-4fee-41aa-8572-212b34cd06a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79997ef2-17e2-4f21-8229-7c6dd79ef3c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a13cc417-edce-4c30-a5b0-f90095810bcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.078 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a13cc417-edce-4c30-a5b0-f90095810bcc in datapath 7bcad6c6-374a-4697-ae00-916836e6498e unbound from our chassis
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.081 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bcad6c6-374a-4697-ae00-916836e6498e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cebd317e-224f-42c4-ae5f-e3698c19b021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.083 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e namespace which is not needed anymore
Nov 22 09:31:27 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 22 09:31:27 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000060.scope: Consumed 14.003s CPU time.
Nov 22 09:31:27 compute-0 systemd-machined[215941]: Machine qemu-116-instance-00000060 terminated.
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.174 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.174 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 WARNING nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received unexpected event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with vm_state deleted and task_state None.
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-deleted-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.178 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.229 253665 INFO nova.virt.libvirt.driver [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance destroyed successfully.
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.231 253665 DEBUG nova.objects.instance [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:27 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : haproxy version is 2.8.14-c23fe91
Nov 22 09:31:27 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : path to executable is /usr/sbin/haproxy
Nov 22 09:31:27 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [WARNING]  (352601) : Exiting Master process...
Nov 22 09:31:27 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [ALERT]    (352601) : Current worker (352603) exited with code 143 (Terminated)
Nov 22 09:31:27 compute-0 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [WARNING]  (352601) : All workers exited. Exiting... (0)
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.247 253665 DEBUG nova.virt.libvirt.vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:31:22Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.247 253665 DEBUG nova.network.os_vif_util [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.248 253665 DEBUG nova.network.os_vif_util [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.248 253665 DEBUG os_vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:31:27 compute-0 systemd[1]: libpod-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope: Deactivated successfully.
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa13cc417-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.255 253665 INFO os_vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed')
Nov 22 09:31:27 compute-0 podman[353336]: 2025-11-22 09:31:27.256724699 +0000 UTC m=+0.059643729 container died eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:31:27 compute-0 ceph-mon[75021]: pgmap v2080: 305 pgs: 305 active+clean; 145 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 149 op/s
Nov 22 09:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc-userdata-shm.mount: Deactivated successfully.
Nov 22 09:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b23d174e80fe906250cd93b44fdfabff1fe6ae03ff6df4bfad6521ddf56c356-merged.mount: Deactivated successfully.
Nov 22 09:31:27 compute-0 podman[353336]: 2025-11-22 09:31:27.295258238 +0000 UTC m=+0.098177288 container cleanup eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:31:27 compute-0 systemd[1]: libpod-conmon-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope: Deactivated successfully.
Nov 22 09:31:27 compute-0 podman[353393]: 2025-11-22 09:31:27.391125777 +0000 UTC m=+0.062191861 container remove eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84632274-557c-4200-aa5c-734f8ec82c75]: (4, ('Sat Nov 22 09:31:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e (eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc)\neab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc\nSat Nov 22 09:31:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e (eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc)\neab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.400 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc42f522-9aec-460d-9536-b724c6d132fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.401 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bcad6c6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 kernel: tap7bcad6c6-30: left promiscuous mode
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e44ac1ff-65ad-4efd-90f4-4a0512958dff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88447bb4-bdad-486d-85b2-eddb5eee2adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33b15d44-a9fa-4355-8235-0ad850189dc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.460 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[919b9e4c-3583-42f4-b525-157a43f2e0ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670855, 'reachable_time': 15238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353408, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d7bcad6c6\x2d374a\x2d4697\x2dae00\x2d916836e6498e.mount: Deactivated successfully.
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.466 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.466 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[39c294a6-0e07-4b60-8b09-c71a6eeb70e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.715 253665 INFO nova.virt.libvirt.driver [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deleting instance files /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_del
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.717 253665 INFO nova.virt.libvirt.driver [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deletion of /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_del complete
Nov 22 09:31:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 137 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 905 KiB/s wr, 114 op/s
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.770 253665 INFO nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.771 253665 DEBUG oslo.service.loopingcall [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.772 253665 DEBUG nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:31:27 compute-0 nova_compute[253661]: 2025-11-22 09:31:27.772 253665 DEBUG nova.network.neutron [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.974 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.114 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.115 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.139 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.266 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.266 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.269 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.269 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.270 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.270 253665 WARNING nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received unexpected event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with vm_state active and task_state deleting.
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.287 253665 DEBUG nova.network.neutron [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:29 compute-0 ceph-mon[75021]: pgmap v2081: 305 pgs: 305 active+clean; 137 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 905 KiB/s wr, 114 op/s
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.308 253665 INFO nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 1.54 seconds to deallocate network for instance.
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.358 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.358 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.371 253665 DEBUG nova.compute.manager [req-8045d8b0-3b00-4b8d-ab63-10ce9eeb320f req-19aa8cc1-dc35-48b0-ada0-dc6482e9c332 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-deleted-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.415 253665 DEBUG oslo_concurrency.processutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 540 KiB/s wr, 149 op/s
Nov 22 09:31:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567001461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.861 253665 DEBUG oslo_concurrency.processutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.870 253665 DEBUG nova.compute.provider_tree [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.890 253665 DEBUG nova.scheduler.client.report [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.938 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:29 compute-0 nova_compute[253661]: 2025-11-22 09:31:29.988 253665 INFO nova.scheduler.client.report [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2
Nov 22 09:31:30 compute-0 nova_compute[253661]: 2025-11-22 09:31:30.071 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1567001461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:30 compute-0 nova_compute[253661]: 2025-11-22 09:31:30.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:30 compute-0 nova_compute[253661]: 2025-11-22 09:31:30.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:30 compute-0 nova_compute[253661]: 2025-11-22 09:31:30.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:31 compute-0 ceph-mon[75021]: pgmap v2082: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 540 KiB/s wr, 149 op/s
Nov 22 09:31:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 125 op/s
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:31:32 compute-0 nova_compute[253661]: 2025-11-22 09:31:32.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:33 compute-0 ceph-mon[75021]: pgmap v2083: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 125 op/s
Nov 22 09:31:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.627477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893627531, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2161, "num_deletes": 263, "total_data_size": 3183771, "memory_usage": 3238168, "flush_reason": "Manual Compaction"}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893644242, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2043766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40250, "largest_seqno": 42410, "table_properties": {"data_size": 2035989, "index_size": 4403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19520, "raw_average_key_size": 21, "raw_value_size": 2019082, "raw_average_value_size": 2226, "num_data_blocks": 196, "num_entries": 907, "num_filter_entries": 907, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803717, "oldest_key_time": 1763803717, "file_creation_time": 1763803893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 16852 microseconds, and 5534 cpu microseconds.
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.644293) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2043766 bytes OK
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.644353) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646654) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646668) EVENT_LOG_v1 {"time_micros": 1763803893646664, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3174597, prev total WAL file size 3174597, number of live WAL files 2.
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.647643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1995KB)], [89(9722KB)]
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893647676, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11999515, "oldest_snapshot_seqno": -1}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6947 keys, 9771432 bytes, temperature: kUnknown
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893732702, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9771432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9724582, "index_size": 28370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 174827, "raw_average_key_size": 25, "raw_value_size": 9599915, "raw_average_value_size": 1381, "num_data_blocks": 1148, "num_entries": 6947, "num_filter_entries": 6947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.732951) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9771432 bytes
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.734593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.0 rd, 114.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(10.7) write-amplify(4.8) OK, records in: 7397, records dropped: 450 output_compression: NoCompression
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.734641) EVENT_LOG_v1 {"time_micros": 1763803893734624, "job": 52, "event": "compaction_finished", "compaction_time_micros": 85107, "compaction_time_cpu_micros": 23212, "output_level": 6, "num_output_files": 1, "total_output_size": 9771432, "num_input_records": 7397, "num_output_records": 6947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893735400, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893737832, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.647547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 127 op/s
Nov 22 09:31:34 compute-0 nova_compute[253661]: 2025-11-22 09:31:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:35 compute-0 ceph-mon[75021]: pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 127 op/s
Nov 22 09:31:35 compute-0 nova_compute[253661]: 2025-11-22 09:31:35.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 KiB/s wr, 94 op/s
Nov 22 09:31:36 compute-0 nova_compute[253661]: 2025-11-22 09:31:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:36 compute-0 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:36 compute-0 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:36 compute-0 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:37 compute-0 ceph-mon[75021]: pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 KiB/s wr, 94 op/s
Nov 22 09:31:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055461790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.756 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.950 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3887MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.952 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.952 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.983 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803882.9819217, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:37 compute-0 nova_compute[253661]: 2025-11-22 09:31:37.984 253665 INFO nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Stopped (Lifecycle Event)
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.011 253665 DEBUG nova.compute.manager [None req-80f961e2-0ede-48e9-b998-97b1fbd8138c - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.038 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.056 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.079 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.079 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.093 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.121 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.135 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394604051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.637 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.643 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3055461790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2394604051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.661 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.692 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:31:38 compute-0 nova_compute[253661]: 2025-11-22 09:31:38.692 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:39 compute-0 ceph-mon[75021]: pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.664446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899664506, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 251, "total_data_size": 99376, "memory_usage": 106488, "flush_reason": "Manual Compaction"}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899667023, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 98525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42411, "largest_seqno": 42716, "table_properties": {"data_size": 96573, "index_size": 180, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5114, "raw_average_key_size": 18, "raw_value_size": 92692, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803894, "oldest_key_time": 1763803894, "file_creation_time": 1763803899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 2611 microseconds, and 1037 cpu microseconds.
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.667059) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 98525 bytes OK
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.667081) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668309) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668343) EVENT_LOG_v1 {"time_micros": 1763803899668337, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 97175, prev total WAL file size 97175, number of live WAL files 2.
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(96KB)], [92(9542KB)]
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899668884, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 9869957, "oldest_snapshot_seqno": -1}
Nov 22 09:31:39 compute-0 nova_compute[253661]: 2025-11-22 09:31:39.694 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6716 keys, 8217195 bytes, temperature: kUnknown
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899741703, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 8217195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8173305, "index_size": 25986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 170722, "raw_average_key_size": 25, "raw_value_size": 8054147, "raw_average_value_size": 1199, "num_data_blocks": 1038, "num_entries": 6716, "num_filter_entries": 6716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.742021) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 8217195 bytes
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.743833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.4 rd, 112.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.3 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(183.6) write-amplify(83.4) OK, records in: 7225, records dropped: 509 output_compression: NoCompression
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.743863) EVENT_LOG_v1 {"time_micros": 1763803899743851, "job": 54, "event": "compaction_finished", "compaction_time_micros": 72908, "compaction_time_cpu_micros": 38373, "output_level": 6, "num_output_files": 1, "total_output_size": 8217195, "num_input_records": 7225, "num_output_records": 6716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899744085, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899745678, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:31:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Nov 22 09:31:40 compute-0 ceph-mon[75021]: pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Nov 22 09:31:40 compute-0 nova_compute[253661]: 2025-11-22 09:31:40.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 09:31:42 compute-0 nova_compute[253661]: 2025-11-22 09:31:42.225 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803887.2243953, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:42 compute-0 nova_compute[253661]: 2025-11-22 09:31:42.225 253665 INFO nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Stopped (Lifecycle Event)
Nov 22 09:31:42 compute-0 nova_compute[253661]: 2025-11-22 09:31:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:42 compute-0 nova_compute[253661]: 2025-11-22 09:31:42.240 253665 DEBUG nova.compute.manager [None req-c07db297-99cc-4a60-8594-8b8350052adf - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:42 compute-0 nova_compute[253661]: 2025-11-22 09:31:42.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:42 compute-0 ceph-mon[75021]: pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 09:31:43 compute-0 podman[353480]: 2025-11-22 09:31:43.367174122 +0000 UTC m=+0.058194644 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:31:43 compute-0 podman[353481]: 2025-11-22 09:31:43.379796492 +0000 UTC m=+0.064811956 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:31:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 09:31:44 compute-0 ceph-mon[75021]: pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 09:31:45 compute-0 nova_compute[253661]: 2025-11-22 09:31:45.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:46 compute-0 ceph-mon[75021]: pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:47 compute-0 nova_compute[253661]: 2025-11-22 09:31:47.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:31:47 compute-0 nova_compute[253661]: 2025-11-22 09:31:47.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:49 compute-0 ceph-mon[75021]: pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.346 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.347 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.361 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.449 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:50 compute-0 podman[353519]: 2025-11-22 09:31:50.450356897 +0000 UTC m=+0.137735402 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.459 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.459 253665 INFO nova.compute.claims [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.567 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:50 compute-0 nova_compute[253661]: 2025-11-22 09:31:50.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998360242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.038 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.044 253665 DEBUG nova.compute.provider_tree [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:51 compute-0 ceph-mon[75021]: pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/998360242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.059 253665 DEBUG nova.scheduler.client.report [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.088 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.089 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.132 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.133 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.158 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.185 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.291 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.293 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.293 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating image(s)
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.319 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.345 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.372 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.376 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.417 253665 DEBUG nova.policy [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04e47309bea74c04b0750912db283ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93c8020137e04db486facc42cfe30f23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.468 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.469 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.469 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.494 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.498 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.883 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:51 compute-0 nova_compute[253661]: 2025-11-22 09:31:51.962 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] resizing rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.085 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.085 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.091 253665 DEBUG nova.objects.instance [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.115 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.116 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Ensure instance console log exists: /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.116 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.117 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.117 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.118 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.213 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.214 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.225 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.226 253665 INFO nova.compute.claims [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:31:52
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.307 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.372 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:31:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.799 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Successfully created port: 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:31:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:31:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457592920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.858 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.864 253665 DEBUG nova.compute.provider_tree [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.885 253665 DEBUG nova.scheduler.client.report [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.935 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.937 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.988 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:31:52 compute-0 nova_compute[253661]: 2025-11-22 09:31:52.988 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.003 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.017 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:31:53 compute-0 ceph-mon[75021]: pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:31:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/457592920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.105 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.106 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.106 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating image(s)
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.134 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.159 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.185 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.189 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.229 253665 DEBUG nova.policy [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.277 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.279 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.280 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.281 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.320 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.329 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d60d8746-9288-4829-8073-bed8cf04d748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 61 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 618 KiB/s wr, 22 op/s
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.831 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d60d8746-9288-4829-8073-bed8cf04d748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:53 compute-0 nova_compute[253661]: 2025-11-22 09:31:53.905 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.036 253665 DEBUG nova.objects.instance [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Ensure instance console log exists: /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.058 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.058 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.059 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.060 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Successfully updated port: 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.071 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.072 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.072 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.227 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.299 253665 DEBUG nova.compute.manager [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-changed-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.299 253665 DEBUG nova.compute.manager [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Refreshing instance network info cache due to event network-changed-5b1477f9-c3cf-4bac-95a5-109e7ae8d852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.300 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:54 compute-0 nova_compute[253661]: 2025-11-22 09:31:54.929 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Successfully created port: f0934c58-4d53-43e5-8132-eb2195819f1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:31:55 compute-0 ceph-mon[75021]: pgmap v2094: 305 pgs: 305 active+clean; 61 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 618 KiB/s wr, 22 op/s
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.366 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance network_info: |[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.413 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Refreshing network info cache for port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.415 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start _get_guest_xml network_info=[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.420 253665 WARNING nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.425 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.425 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.429 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.429 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.435 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 103 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 39 op/s
Nov 22 09:31:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054565765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.930 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.954 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:55 compute-0 nova_compute[253661]: 2025-11-22 09:31:55.959 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1054565765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.314 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Successfully updated port: f0934c58-4d53-43e5-8132-eb2195819f1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:31:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460515769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.425 253665 DEBUG nova.compute.manager [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.426 253665 DEBUG nova.compute.manager [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.427 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.447 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.450 253665 DEBUG nova.virt.libvirt.vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:51Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.451 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.452 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.454 253665 DEBUG nova.objects.instance [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.467 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <uuid>7da16450-9ec5-472a-99df-81f56ee341fc</uuid>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <name>instance-00000062</name>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSON-server-1786356758</nova:name>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:31:55</nova:creationTime>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <nova:port uuid="5b1477f9-c3cf-4bac-95a5-109e7ae8d852">
Nov 22 09:31:56 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="serial">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="uuid">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk">
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.config">
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:60:b5:90"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <target dev="tap5b1477f9-c3"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log" append="off"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:31:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:31:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:31:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:31:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:31:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.469 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Preparing to wait for external event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.471 253665 DEBUG nova.virt.libvirt.vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:51Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.472 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.473 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.473 253665 DEBUG os_vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.474 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.475 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.478 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.482 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b1477f9-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b1477f9-c3, col_values=(('external_ids', {'iface-id': '5b1477f9-c3cf-4bac-95a5-109e7ae8d852', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:b5:90', 'vm-uuid': '7da16450-9ec5-472a-99df-81f56ee341fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:56 compute-0 NetworkManager[48920]: <info>  [1763803916.5149] manager: (tap5b1477f9-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.524 253665 INFO os_vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3')
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.572 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.572 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.573 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:60:b5:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.573 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Using config drive
Nov 22 09:31:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:31:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:31:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:31:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:31:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:31:56 compute-0 nova_compute[253661]: 2025-11-22 09:31:56.602 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:57 compute-0 ceph-mon[75021]: pgmap v2095: 305 pgs: 305 active+clean; 103 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 39 op/s
Nov 22 09:31:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1460515769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.232 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating config drive at /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.242 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbafze39 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.423 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbafze39" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.470 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.476 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.540 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updated VIF entry in instance network info cache for port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.542 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.560 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.686 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.687 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting local config drive /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config because it was imported into RBD.
Nov 22 09:31:57 compute-0 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 09:31:57 compute-0 ovn_controller[152872]: 2025-11-22T09:31:57Z|00976|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 09:31:57 compute-0 NetworkManager[48920]: <info>  [1763803917.7570] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 22 09:31:57 compute-0 ovn_controller[152872]: 2025-11-22T09:31:57Z|00977|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.757 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 120 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 52 op/s
Nov 22 09:31:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.776 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.777 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:31:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.781 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:31:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ed82dc-1dbd-4e9b-ab3f-933998a402c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:31:57 compute-0 systemd-udevd[354055]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:31:57 compute-0 NetworkManager[48920]: <info>  [1763803917.8011] device (tap5b1477f9-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:31:57 compute-0 NetworkManager[48920]: <info>  [1763803917.8024] device (tap5b1477f9-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:31:57 compute-0 systemd-machined[215941]: New machine qemu-118-instance-00000062.
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:57 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000062.
Nov 22 09:31:57 compute-0 ovn_controller[152872]: 2025-11-22T09:31:57Z|00978|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 ovn-installed in OVS
Nov 22 09:31:57 compute-0 ovn_controller[152872]: 2025-11-22T09:31:57Z|00979|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 up in Southbound
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.874 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.893 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.893 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance network_info: |[{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.894 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.894 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.897 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start _get_guest_xml network_info=[{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.913 253665 WARNING nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.920 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.921 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.926 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.930 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.930 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:31:57 compute-0 nova_compute[253661]: 2025-11-22 09:31:57.933 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.265 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803918.2641804, 7da16450-9ec5-472a-99df-81f56ee341fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Started (Lifecycle Event)
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.289 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.293 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803918.2660067, 7da16450-9ec5-472a-99df-81f56ee341fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.293 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Paused (Lifecycle Event)
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.312 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.316 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.338 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:31:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230568166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.399 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.421 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.425 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:31:58 compute-0 sudo[354167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:31:58 compute-0 sudo[354167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:58 compute-0 sudo[354167]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:58 compute-0 sudo[354192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:31:58 compute-0 sudo[354192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:58 compute-0 sudo[354192]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:58 compute-0 sudo[354217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:31:58 compute-0 sudo[354217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:58 compute-0 sudo[354217]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:58 compute-0 sudo[354242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:31:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:31:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647526940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:58 compute-0 sudo[354242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.946 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.948 253665 DEBUG nova.virt.libvirt.vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:53Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.948 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.949 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.950 253665 DEBUG nova.objects.instance [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.965 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <uuid>d60d8746-9288-4829-8073-bed8cf04d748</uuid>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <name>instance-00000063</name>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-714198839</nova:name>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:31:57</nova:creationTime>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <nova:port uuid="f0934c58-4d53-43e5-8132-eb2195819f1f">
Nov 22 09:31:58 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <system>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="serial">d60d8746-9288-4829-8073-bed8cf04d748</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="uuid">d60d8746-9288-4829-8073-bed8cf04d748</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </system>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <os>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </os>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <features>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </features>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d60d8746-9288-4829-8073-bed8cf04d748_disk">
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d60d8746-9288-4829-8073-bed8cf04d748_disk.config">
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:31:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:00:28:3e"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <target dev="tapf0934c58-4d"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/console.log" append="off"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <video>
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </video>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:31:58 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:31:58 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:31:58 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:31:58 compute-0 nova_compute[253661]: </domain>
Nov 22 09:31:58 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.966 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Preparing to wait for external event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.968 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.968 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG nova.virt.libvirt.vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:53Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.970 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.970 253665 DEBUG os_vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.971 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.972 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0934c58-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.975 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0934c58-4d, col_values=(('external_ids', {'iface-id': 'f0934c58-4d53-43e5-8132-eb2195819f1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:28:3e', 'vm-uuid': 'd60d8746-9288-4829-8073-bed8cf04d748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:58 compute-0 NetworkManager[48920]: <info>  [1763803918.9780] manager: (tapf0934c58-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:58 compute-0 nova_compute[253661]: 2025-11-22 09:31:58.985 253665 INFO os_vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d')
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.046 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:00:28:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Using config drive
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.073 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:59 compute-0 ceph-mon[75021]: pgmap v2096: 305 pgs: 305 active+clean; 120 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 52 op/s
Nov 22 09:31:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/230568166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1647526940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:31:59 compute-0 sudo[354242]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:31:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a4ede35d-ef15-4c9e-9bc5-adecdf0b0756 does not exist
Nov 22 09:31:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ac13eaa8-734f-49af-803f-74595ad681ff does not exist
Nov 22 09:31:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fac4903c-c737-49cd-a2ab-fc6b7ad30e94 does not exist
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:31:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:31:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:31:59 compute-0 sudo[354322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:31:59 compute-0 sudo[354322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:59 compute-0 sudo[354322]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.584 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating config drive at /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.589 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplw864cib execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:59 compute-0 sudo[354347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:31:59 compute-0 sudo[354347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:59 compute-0 sudo[354347]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:59 compute-0 sudo[354373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:31:59 compute-0 sudo[354373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:59 compute-0 sudo[354373]: pam_unix(sudo:session): session closed for user root
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.692 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.693 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.707 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:31:59 compute-0 sudo[354400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:31:59 compute-0 sudo[354400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.739 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplw864cib" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.764 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:31:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.768 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config d60d8746-9288-4829-8073-bed8cf04d748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.938 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config d60d8746-9288-4829-8073-bed8cf04d748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.939 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deleting local config drive /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config because it was imported into RBD.
Nov 22 09:31:59 compute-0 kernel: tapf0934c58-4d: entered promiscuous mode
Nov 22 09:31:59 compute-0 NetworkManager[48920]: <info>  [1763803919.9815] manager: (tapf0934c58-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:59 compute-0 nova_compute[253661]: 2025-11-22 09:31:59.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:31:59 compute-0 ovn_controller[152872]: 2025-11-22T09:31:59Z|00980|binding|INFO|Claiming lport f0934c58-4d53-43e5-8132-eb2195819f1f for this chassis.
Nov 22 09:31:59 compute-0 ovn_controller[152872]: 2025-11-22T09:31:59Z|00981|binding|INFO|f0934c58-4d53-43e5-8132-eb2195819f1f: Claiming fa:16:3e:00:28:3e 10.100.0.5
Nov 22 09:31:59 compute-0 NetworkManager[48920]: <info>  [1763803919.9968] device (tapf0934c58-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:31:59 compute-0 NetworkManager[48920]: <info>  [1763803919.9977] device (tapf0934c58-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:31:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.996 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:31:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.997 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 bound to our chassis
Nov 22 09:31:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.999 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[990298f3-bade-44a0-8c4b-c831f70f6940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.016 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0263cd25-d1 in ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.018 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0263cd25-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[198828a5-7a2e-47f3-aadf-0f79c35c6fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41c95b56-9820-4fc4-ba94-621eba424650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 systemd-machined[215941]: New machine qemu-119-instance-00000063.
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.032 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b272f29-1965-48e3-a87a-df6815198641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.055 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000063.
Nov 22 09:32:00 compute-0 ovn_controller[152872]: 2025-11-22T09:32:00Z|00982|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f ovn-installed in OVS
Nov 22 09:32:00 compute-0 ovn_controller[152872]: 2025-11-22T09:32:00Z|00983|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f up in Southbound
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.063 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9c597a-8610-4165-9fe9-b7e2544e2c11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:32:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.097335895 +0000 UTC m=+0.054340578 container create 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3463c8-8d68-41e0-8da3-d87fd102399f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6c5500-06cb-4cb5-bf69-b5c6f2cc5574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 NetworkManager[48920]: <info>  [1763803920.1118] manager: (tap0263cd25-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.143 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe8aa45-6a56-4885-b60c-dd10576339b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.146 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ded0d488-8dae-4dc7-8026-bad56225532c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 systemd[1]: Started libpod-conmon-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope.
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.067901271 +0000 UTC m=+0.024905984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:00 compute-0 NetworkManager[48920]: <info>  [1763803920.1691] device (tap0263cd25-d0): carrier: link connected
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.176 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23a21238-7e90-4fc8-a212-64682ba11791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.200 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d561cb61-8820-463d-b1b9-89543e89c015]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677090, 'reachable_time': 37809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354562, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.208572623 +0000 UTC m=+0.165577336 container init 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.21777764 +0000 UTC m=+0.174782323 container start 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:32:00 compute-0 kind_proskuriakova[354559]: 167 167
Nov 22 09:32:00 compute-0 systemd[1]: libpod-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope: Deactivated successfully.
Nov 22 09:32:00 compute-0 conmon[354559]: conmon 11305d5191f21d20f8b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope/container/memory.events
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.228778029 +0000 UTC m=+0.185782722 container attach 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.229938168 +0000 UTC m=+0.186942861 container died 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.231 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[020d1288-d5e3-4fe0-a35a-cda55a773584]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:31f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677090, 'tstamp': 677090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354563, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aad9b08f2ab47fb6b08caee67c1654ff8997cd3b795703eb8bfa4d507f6352a-merged.mount: Deactivated successfully.
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.258 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa2b236-f7e8-4820-967f-c77f4e71b1df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677090, 'reachable_time': 37809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354573, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 podman[354515]: 2025-11-22 09:32:00.27506186 +0000 UTC m=+0.232066543 container remove 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:32:00 compute-0 systemd[1]: libpod-conmon-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope: Deactivated successfully.
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee9d303-e27c-4e3b-8205-ad50823dc563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d164ec38-d7ca-4a02-8d91-a73b0b1d0b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.376 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0263cd25-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 NetworkManager[48920]: <info>  [1763803920.3804] manager: (tap0263cd25-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 22 09:32:00 compute-0 kernel: tap0263cd25-d0: entered promiscuous mode
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.383 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0263cd25-d0, col_values=(('external_ids', {'iface-id': '771f6da7-e306-4e95-84a5-f08be3c60513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 ovn_controller[152872]: 2025-11-22T09:32:00Z|00984|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.407 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe89999d-50b9-4057-8b6d-9ad004e793fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:32:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.411 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'env', 'PROCESS_TAG=haproxy-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0263cd25-ddb2-49f9-ab5b-2f514c861684.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:32:00 compute-0 podman[354591]: 2025-11-22 09:32:00.474355544 +0000 UTC m=+0.041552424 container create 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:32:00 compute-0 systemd[1]: Started libpod-conmon-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope.
Nov 22 09:32:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 podman[354591]: 2025-11-22 09:32:00.455538551 +0000 UTC m=+0.022735451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 podman[354591]: 2025-11-22 09:32:00.568653045 +0000 UTC m=+0.135849945 container init 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:32:00 compute-0 podman[354591]: 2025-11-22 09:32:00.575549745 +0000 UTC m=+0.142746625 container start 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 09:32:00 compute-0 podman[354591]: 2025-11-22 09:32:00.579038351 +0000 UTC m=+0.146235231 container attach 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:00 compute-0 podman[354670]: 2025-11-22 09:32:00.832714365 +0000 UTC m=+0.063374781 container create 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:32:00 compute-0 systemd[1]: Started libpod-conmon-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope.
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.878 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803920.8782458, d60d8746-9288-4829-8073-bed8cf04d748 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.879 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Started (Lifecycle Event)
Nov 22 09:32:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.899 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:00 compute-0 podman[354670]: 2025-11-22 09:32:00.805453473 +0000 UTC m=+0.036113909 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05970ec99c950ccf608d1273db9007048dadc52d0e0d40152159bb3df22e752b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.909 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803920.8784685, d60d8746-9288-4829-8073-bed8cf04d748 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.909 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Paused (Lifecycle Event)
Nov 22 09:32:00 compute-0 podman[354670]: 2025-11-22 09:32:00.922652618 +0000 UTC m=+0.153313064 container init 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.925 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.929 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:00 compute-0 podman[354670]: 2025-11-22 09:32:00.93085353 +0000 UTC m=+0.161513946 container start 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:32:00 compute-0 nova_compute[253661]: 2025-11-22 09:32:00.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:32:00 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : New worker (354703) forked
Nov 22 09:32:00 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : Loading success.
Nov 22 09:32:01 compute-0 ceph-mon[75021]: pgmap v2097: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 09:32:01 compute-0 magical_elbakyan[354610]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:32:01 compute-0 magical_elbakyan[354610]: --> relative data size: 1.0
Nov 22 09:32:01 compute-0 magical_elbakyan[354610]: --> All data devices are unavailable
Nov 22 09:32:01 compute-0 systemd[1]: libpod-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Deactivated successfully.
Nov 22 09:32:01 compute-0 systemd[1]: libpod-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Consumed 1.054s CPU time.
Nov 22 09:32:01 compute-0 podman[354736]: 2025-11-22 09:32:01.764335484 +0000 UTC m=+0.025927359 container died 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:32:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 09:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801-merged.mount: Deactivated successfully.
Nov 22 09:32:01 compute-0 podman[354736]: 2025-11-22 09:32:01.821628124 +0000 UTC m=+0.083219989 container remove 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:32:01 compute-0 systemd[1]: libpod-conmon-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Deactivated successfully.
Nov 22 09:32:01 compute-0 sudo[354400]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:01 compute-0 sudo[354750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:32:01 compute-0 sudo[354750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:01 compute-0 sudo[354750]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:02 compute-0 sudo[354775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:32:02 compute-0 sudo[354775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:02 compute-0 sudo[354775]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:02 compute-0 sudo[354800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:32:02 compute-0 sudo[354800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:02 compute-0 sudo[354800]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:02 compute-0 sudo[354825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:32:02 compute-0 sudo[354825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.521701915 +0000 UTC m=+0.048471654 container create 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:32:02 compute-0 systemd[1]: Started libpod-conmon-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope.
Nov 22 09:32:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.502359319 +0000 UTC m=+0.029129118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.604072912 +0000 UTC m=+0.130842681 container init 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.613599557 +0000 UTC m=+0.140369286 container start 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.61697663 +0000 UTC m=+0.143746379 container attach 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:32:02 compute-0 practical_morse[354906]: 167 167
Nov 22 09:32:02 compute-0 systemd[1]: libpod-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope: Deactivated successfully.
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.621419849 +0000 UTC m=+0.148189588 container died 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-423749be79d1ee976dae80034cd5e08a3c7202bf9b8b0ebe326ba631a170203a-merged.mount: Deactivated successfully.
Nov 22 09:32:02 compute-0 podman[354890]: 2025-11-22 09:32:02.710507432 +0000 UTC m=+0.237277171 container remove 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:32:02 compute-0 systemd[1]: libpod-conmon-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope: Deactivated successfully.
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006920576732109136 of space, bias 1.0, pg target 0.20761730196327408 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:32:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:32:02 compute-0 podman[354930]: 2025-11-22 09:32:02.883582682 +0000 UTC m=+0.043022770 container create cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:32:02 compute-0 systemd[1]: Started libpod-conmon-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope.
Nov 22 09:32:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:02 compute-0 podman[354930]: 2025-11-22 09:32:02.865751383 +0000 UTC m=+0.025191491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:02 compute-0 podman[354930]: 2025-11-22 09:32:02.967539229 +0000 UTC m=+0.126979337 container init cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 09:32:02 compute-0 podman[354930]: 2025-11-22 09:32:02.974419237 +0000 UTC m=+0.133859325 container start cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:32:02 compute-0 podman[354930]: 2025-11-22 09:32:02.977679438 +0000 UTC m=+0.137119556 container attach cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.127 253665 DEBUG nova.compute.manager [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.128 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.128 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.129 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.129 253665 DEBUG nova.compute.manager [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Processing event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.130 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.135 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803923.1347756, 7da16450-9ec5-472a-99df-81f56ee341fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.135 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Resumed (Lifecycle Event)
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.137 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.140 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance spawned successfully.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.140 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.169 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.169 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.170 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.172 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.213 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.235 253665 INFO nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 11.94 seconds to spawn the instance on the hypervisor.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.236 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.303 253665 INFO nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 12.89 seconds to build instance.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.320 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:03 compute-0 ceph-mon[75021]: pgmap v2098: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG nova.compute.manager [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.516 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.516 253665 DEBUG nova.compute.manager [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Processing event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.517 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.522 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803923.52259, d60d8746-9288-4829-8073-bed8cf04d748 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.523 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Resumed (Lifecycle Event)
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.525 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.530 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance spawned successfully.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.530 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.560 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.565 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.577 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.578 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.579 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.580 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.580 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.581 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.588 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:32:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.643 253665 INFO nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 10.54 seconds to spawn the instance on the hypervisor.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.644 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.718 253665 INFO nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 11.53 seconds to build instance.
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.734 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:03 compute-0 inspiring_turing[354946]: {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     "0": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "devices": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "/dev/loop3"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             ],
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_name": "ceph_lv0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_size": "21470642176",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "name": "ceph_lv0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "tags": {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_name": "ceph",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.crush_device_class": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.encrypted": "0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_id": "0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.vdo": "0"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             },
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "vg_name": "ceph_vg0"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         }
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     ],
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     "1": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "devices": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "/dev/loop4"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             ],
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_name": "ceph_lv1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_size": "21470642176",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "name": "ceph_lv1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "tags": {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_name": "ceph",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.crush_device_class": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.encrypted": "0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_id": "1",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.vdo": "0"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             },
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "vg_name": "ceph_vg1"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         }
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     ],
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     "2": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "devices": [
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "/dev/loop5"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             ],
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_name": "ceph_lv2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_size": "21470642176",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "name": "ceph_lv2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "tags": {
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.cluster_name": "ceph",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.crush_device_class": "",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.encrypted": "0",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osd_id": "2",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:                 "ceph.vdo": "0"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             },
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "type": "block",
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:             "vg_name": "ceph_vg2"
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:         }
Nov 22 09:32:03 compute-0 inspiring_turing[354946]:     ]
Nov 22 09:32:03 compute-0 inspiring_turing[354946]: }
Nov 22 09:32:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 22 09:32:03 compute-0 systemd[1]: libpod-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope: Deactivated successfully.
Nov 22 09:32:03 compute-0 podman[354955]: 2025-11-22 09:32:03.825437903 +0000 UTC m=+0.029615420 container died cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5-merged.mount: Deactivated successfully.
Nov 22 09:32:03 compute-0 podman[354955]: 2025-11-22 09:32:03.903547226 +0000 UTC m=+0.107724733 container remove cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:32:03 compute-0 systemd[1]: libpod-conmon-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope: Deactivated successfully.
Nov 22 09:32:03 compute-0 sudo[354825]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:03 compute-0 nova_compute[253661]: 2025-11-22 09:32:03.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:04 compute-0 sudo[354970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:32:04 compute-0 sudo[354970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:04 compute-0 sudo[354970]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:04 compute-0 sudo[354995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:32:04 compute-0 sudo[354995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:04 compute-0 sudo[354995]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:04 compute-0 sudo[355020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:32:04 compute-0 sudo[355020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:04 compute-0 sudo[355020]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:04 compute-0 sudo[355045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:32:04 compute-0 sudo[355045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:04 compute-0 nova_compute[253661]: 2025-11-22 09:32:04.493 253665 INFO nova.compute.manager [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Rescuing
Nov 22 09:32:04 compute-0 nova_compute[253661]: 2025-11-22 09:32:04.495 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:04 compute-0 nova_compute[253661]: 2025-11-22 09:32:04.495 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:04 compute-0 nova_compute[253661]: 2025-11-22 09:32:04.496 253665 DEBUG nova.network.neutron [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.641676883 +0000 UTC m=+0.049467498 container create 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:32:04 compute-0 systemd[1]: Started libpod-conmon-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope.
Nov 22 09:32:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.619454236 +0000 UTC m=+0.027244871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.729430573 +0000 UTC m=+0.137221198 container init 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.739268075 +0000 UTC m=+0.147058690 container start 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.743147971 +0000 UTC m=+0.150938606 container attach 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:32:04 compute-0 vibrant_dijkstra[355126]: 167 167
Nov 22 09:32:04 compute-0 systemd[1]: libpod-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope: Deactivated successfully.
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.748174924 +0000 UTC m=+0.155965539 container died 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf20c3cc29d87eaebe20cc4577f3e1d7db9bab8f10fd78308a657d52d682db6-merged.mount: Deactivated successfully.
Nov 22 09:32:04 compute-0 podman[355110]: 2025-11-22 09:32:04.803005394 +0000 UTC m=+0.210796009 container remove 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:32:04 compute-0 systemd[1]: libpod-conmon-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope: Deactivated successfully.
Nov 22 09:32:04 compute-0 podman[355149]: 2025-11-22 09:32:04.999621483 +0000 UTC m=+0.061324280 container create ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:32:05 compute-0 systemd[1]: Started libpod-conmon-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope.
Nov 22 09:32:05 compute-0 podman[355149]: 2025-11-22 09:32:04.971191804 +0000 UTC m=+0.032894631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:32:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:05 compute-0 podman[355149]: 2025-11-22 09:32:05.108171625 +0000 UTC m=+0.169874442 container init ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:32:05 compute-0 podman[355149]: 2025-11-22 09:32:05.119245477 +0000 UTC m=+0.180948274 container start ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 22 09:32:05 compute-0 podman[355149]: 2025-11-22 09:32:05.125105862 +0000 UTC m=+0.186808659 container attach ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.247 253665 DEBUG nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.248 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.248 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 DEBUG nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 WARNING nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.
Nov 22 09:32:05 compute-0 ceph-mon[75021]: pgmap v2099: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 DEBUG nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 WARNING nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.922 253665 DEBUG nova.network.neutron [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:05 compute-0 nova_compute[253661]: 2025-11-22 09:32:05.945 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:06 compute-0 epic_noether[355166]: {
Nov 22 09:32:06 compute-0 epic_noether[355166]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_id": 1,
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "type": "bluestore"
Nov 22 09:32:06 compute-0 epic_noether[355166]:     },
Nov 22 09:32:06 compute-0 epic_noether[355166]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_id": 0,
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "type": "bluestore"
Nov 22 09:32:06 compute-0 epic_noether[355166]:     },
Nov 22 09:32:06 compute-0 epic_noether[355166]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_id": 2,
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:32:06 compute-0 epic_noether[355166]:         "type": "bluestore"
Nov 22 09:32:06 compute-0 epic_noether[355166]:     }
Nov 22 09:32:06 compute-0 epic_noether[355166]: }
Nov 22 09:32:06 compute-0 systemd[1]: libpod-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Deactivated successfully.
Nov 22 09:32:06 compute-0 podman[355149]: 2025-11-22 09:32:06.195828925 +0000 UTC m=+1.257531752 container died ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:32:06 compute-0 systemd[1]: libpod-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Consumed 1.081s CPU time.
Nov 22 09:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28-merged.mount: Deactivated successfully.
Nov 22 09:32:06 compute-0 nova_compute[253661]: 2025-11-22 09:32:06.230 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:32:06 compute-0 podman[355149]: 2025-11-22 09:32:06.263578513 +0000 UTC m=+1.325281310 container remove ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:32:06 compute-0 systemd[1]: libpod-conmon-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Deactivated successfully.
Nov 22 09:32:06 compute-0 sudo[355045]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:32:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:32:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:32:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:32:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 454bec53-ca39-42e4-8b52-03115dc78397 does not exist
Nov 22 09:32:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dbcc4ee9-206b-4706-ab99-a1394f283fdb does not exist
Nov 22 09:32:06 compute-0 sudo[355209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:32:06 compute-0 sudo[355209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:06 compute-0 sudo[355209]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:06 compute-0 sudo[355234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:32:06 compute-0 sudo[355234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:32:06 compute-0 sudo[355234]: pam_unix(sudo:session): session closed for user root
Nov 22 09:32:07 compute-0 ovn_controller[152872]: 2025-11-22T09:32:07Z|00985|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 09:32:07 compute-0 NetworkManager[48920]: <info>  [1763803927.0895] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:07 compute-0 NetworkManager[48920]: <info>  [1763803927.0911] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 22 09:32:07 compute-0 ovn_controller[152872]: 2025-11-22T09:32:07Z|00986|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:07 compute-0 ceph-mon[75021]: pgmap v2100: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 09:32:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:32:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:32:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.838 253665 DEBUG nova.compute.manager [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG nova.compute.manager [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:07 compute-0 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:32:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:08 compute-0 nova_compute[253661]: 2025-11-22 09:32:08.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:09 compute-0 ceph-mon[75021]: pgmap v2101: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Nov 22 09:32:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 440 KiB/s wr, 149 op/s
Nov 22 09:32:09 compute-0 nova_compute[253661]: 2025-11-22 09:32:09.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:09.848 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:09.850 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:32:10 compute-0 nova_compute[253661]: 2025-11-22 09:32:10.677 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:32:10 compute-0 nova_compute[253661]: 2025-11-22 09:32:10.677 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:10 compute-0 nova_compute[253661]: 2025-11-22 09:32:10.699 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:10 compute-0 nova_compute[253661]: 2025-11-22 09:32:10.712 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:11 compute-0 ceph-mon[75021]: pgmap v2102: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 440 KiB/s wr, 149 op/s
Nov 22 09:32:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 09:32:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:11.852 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:32:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:32:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:32:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:32:13 compute-0 ceph-mon[75021]: pgmap v2103: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 09:32:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:32:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:32:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 09:32:13 compute-0 nova_compute[253661]: 2025-11-22 09:32:13.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:14 compute-0 podman[355261]: 2025-11-22 09:32:14.410660814 +0000 UTC m=+0.081853205 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 22 09:32:14 compute-0 podman[355260]: 2025-11-22 09:32:14.423303135 +0000 UTC m=+0.094537367 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:32:15 compute-0 ceph-mon[75021]: pgmap v2104: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 09:32:15 compute-0 nova_compute[253661]: 2025-11-22 09:32:15.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 170 B/s wr, 130 op/s
Nov 22 09:32:16 compute-0 nova_compute[253661]: 2025-11-22 09:32:16.285 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:32:17 compute-0 ceph-mon[75021]: pgmap v2105: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 170 B/s wr, 130 op/s
Nov 22 09:32:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 151 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 167 op/s
Nov 22 09:32:18 compute-0 ovn_controller[152872]: 2025-11-22T09:32:18Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:28:3e 10.100.0.5
Nov 22 09:32:18 compute-0 ovn_controller[152872]: 2025-11-22T09:32:18Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:28:3e 10.100.0.5
Nov 22 09:32:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.041 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 ceph-mon[75021]: pgmap v2106: 305 pgs: 305 active+clean; 151 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 167 op/s
Nov 22 09:32:19 compute-0 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 09:32:19 compute-0 NetworkManager[48920]: <info>  [1763803939.3888] device (tap5b1477f9-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00987|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00988|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 down in Southbound
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00989|binding|INFO|Removing iface tap5b1477f9-c3 ovn-installed in OVS
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.404 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.405 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.406 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.408 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e811bee-1c88-4734-8349-510ed5a76c7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 22 09:32:19 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000062.scope: Consumed 13.789s CPU time.
Nov 22 09:32:19 compute-0 systemd-machined[215941]: Machine qemu-118-instance-00000062 terminated.
Nov 22 09:32:19 compute-0 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 09:32:19 compute-0 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 09:32:19 compute-0 NetworkManager[48920]: <info>  [1763803939.6452] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00990|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00991|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.659 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.661 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.662 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.664 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25a230f6-62c9-4fff-9c63-66e178a6e2de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:19 compute-0 ovn_controller[152872]: 2025-11-22T09:32:19Z|00992|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.675 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.677 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.678 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca487be-b666-49fc-87c8-a998910063c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:19 compute-0 nova_compute[253661]: 2025-11-22 09:32:19.684 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.095 253665 DEBUG nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.095 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.097 253665 WARNING nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.317 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance shutdown successfully after 14 seconds.
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.325 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.326 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.351 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Attempting rescue
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.353 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.360 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.360 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating image(s)
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.428 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.434 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.480 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.519 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.526 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.639 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.640 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.641 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.641 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.666 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.670 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:20 compute-0 nova_compute[253661]: 2025-11-22 09:32:20.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.437 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.767s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.439 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.456 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.458 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start _get_guest_xml network_info=[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.458 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:21 compute-0 podman[355409]: 2025-11-22 09:32:21.480638145 +0000 UTC m=+0.145745438 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.487 253665 WARNING nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.499 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.500 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:32:21 compute-0 ceph-mon[75021]: pgmap v2107: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.504 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.508 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.508 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:21 compute-0 nova_compute[253661]: 2025-11-22 09:32:21.524 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Nov 22 09:32:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498760101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.052 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.054 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.178 253665 DEBUG nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.178 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 WARNING nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.
Nov 22 09:32:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1498760101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819269320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.546 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:22 compute-0 nova_compute[253661]: 2025-11-22 09:32:22.548 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/518424414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.075 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.077 253665 DEBUG nova.virt.libvirt.vif [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:03Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.078 253665 DEBUG nova.network.os_vif_util [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.079 253665 DEBUG nova.network.os_vif_util [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.081 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.098 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <uuid>7da16450-9ec5-472a-99df-81f56ee341fc</uuid>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <name>instance-00000062</name>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSON-server-1786356758</nova:name>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:32:21</nova:creationTime>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <nova:port uuid="5b1477f9-c3cf-4bac-95a5-109e7ae8d852">
Nov 22 09:32:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="serial">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="uuid">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <target dev="vdb" bus="virtio"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:60:b5:90"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <target dev="tap5b1477f9-c3"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log" append="off"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:32:23 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:32:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:32:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:32:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:32:23 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.107 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.164 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:60:b5:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.166 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Using config drive
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.191 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.211 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.238 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'keypairs' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.428 253665 INFO nova.compute.manager [None req-26950718-f93c-4fba-8e83-7bbf30983c59 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Get console output
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.442 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:32:23 compute-0 ceph-mon[75021]: pgmap v2108: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Nov 22 09:32:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/819269320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/518424414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.645003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943645057, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 628, "num_deletes": 255, "total_data_size": 695058, "memory_usage": 707936, "flush_reason": "Manual Compaction"}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943652673, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 688873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42717, "largest_seqno": 43344, "table_properties": {"data_size": 685433, "index_size": 1284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7748, "raw_average_key_size": 18, "raw_value_size": 678583, "raw_average_value_size": 1655, "num_data_blocks": 57, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803900, "oldest_key_time": 1763803900, "file_creation_time": 1763803943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 7772 microseconds, and 4548 cpu microseconds.
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.652768) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 688873 bytes OK
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.652795) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.654991) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655012) EVENT_LOG_v1 {"time_micros": 1763803943655005, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 691631, prev total WAL file size 691631, number of live WAL files 2.
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353034' seq:72057594037927935, type:22 .. '6C6F676D0031373535' seq:0, type:0; will stop at (end)
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(672KB)], [95(8024KB)]
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943655787, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 8906068, "oldest_snapshot_seqno": -1}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6604 keys, 8772895 bytes, temperature: kUnknown
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943729592, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8772895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8728635, "index_size": 26676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 169360, "raw_average_key_size": 25, "raw_value_size": 8610263, "raw_average_value_size": 1303, "num_data_blocks": 1065, "num_entries": 6604, "num_filter_entries": 6604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.729889) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8772895 bytes
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.731359) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.5 rd, 118.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(25.7) write-amplify(12.7) OK, records in: 7126, records dropped: 522 output_compression: NoCompression
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.731386) EVENT_LOG_v1 {"time_micros": 1763803943731374, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73893, "compaction_time_cpu_micros": 42070, "output_level": 6, "num_output_files": 1, "total_output_size": 8772895, "num_input_records": 7126, "num_output_records": 6604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943731822, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943734921, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.754 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.755 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.756 253665 INFO nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Rebooting instance
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.774 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.774 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:23 compute-0 nova_compute[253661]: 2025-11-22 09:32:23.775 253665 DEBUG nova.network.neutron [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:32:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 213 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.110 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating config drive at /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.119 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yu9rdqh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.286 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yu9rdqh" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.335 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.341 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.542 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.545 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting local config drive /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue because it was imported into RBD.
Nov 22 09:32:24 compute-0 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 09:32:24 compute-0 NetworkManager[48920]: <info>  [1763803944.6327] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 22 09:32:24 compute-0 ovn_controller[152872]: 2025-11-22T09:32:24Z|00993|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 09:32:24 compute-0 ovn_controller[152872]: 2025-11-22T09:32:24Z|00994|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.640 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.642 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:32:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.643 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.644 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94a79e4b-3237-40c5-9ae3-68f9f596fd48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:24 compute-0 ovn_controller[152872]: 2025-11-22T09:32:24Z|00995|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 up in Southbound
Nov 22 09:32:24 compute-0 ovn_controller[152872]: 2025-11-22T09:32:24Z|00996|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 ovn-installed in OVS
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:24 compute-0 nova_compute[253661]: 2025-11-22 09:32:24.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:24 compute-0 systemd-udevd[355574]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:32:24 compute-0 systemd-machined[215941]: New machine qemu-120-instance-00000062.
Nov 22 09:32:24 compute-0 NetworkManager[48920]: <info>  [1763803944.7045] device (tap5b1477f9-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:32:24 compute-0 NetworkManager[48920]: <info>  [1763803944.7060] device (tap5b1477f9-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:32:24 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 22 09:32:25 compute-0 ceph-mon[75021]: pgmap v2109: 305 pgs: 305 active+clean; 213 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 22 09:32:25 compute-0 nova_compute[253661]: 2025-11-22 09:32:25.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 6.1 MiB/s wr, 146 op/s
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.227 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 7da16450-9ec5-472a-99df-81f56ee341fc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.228 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803946.2261624, 7da16450-9ec5-472a-99df-81f56ee341fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Resumed (Lifecycle Event)
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.238 253665 DEBUG nova.compute.manager [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.256 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.278 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.279 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803946.2278614, 7da16450-9ec5-472a-99df-81f56ee341fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.279 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Started (Lifecycle Event)
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.303 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.310 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.940 253665 DEBUG nova.network.neutron [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.963 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:26 compute-0 nova_compute[253661]: 2025-11-22 09:32:26.965 253665 DEBUG nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:27 compute-0 ceph-mon[75021]: pgmap v2110: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 6.1 MiB/s wr, 146 op/s
Nov 22 09:32:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 6.1 MiB/s wr, 153 op/s
Nov 22 09:32:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.976 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 kernel: tapf0934c58-4d (unregistering): left promiscuous mode
Nov 22 09:32:29 compute-0 NetworkManager[48920]: <info>  [1763803949.3864] device (tapf0934c58-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:32:29 compute-0 ovn_controller[152872]: 2025-11-22T09:32:29Z|00997|binding|INFO|Releasing lport f0934c58-4d53-43e5-8132-eb2195819f1f from this chassis (sb_readonly=0)
Nov 22 09:32:29 compute-0 ovn_controller[152872]: 2025-11-22T09:32:29Z|00998|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f down in Southbound
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.394 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 ovn_controller[152872]: 2025-11-22T09:32:29Z|00999|binding|INFO|Removing iface tapf0934c58-4d ovn-installed in OVS
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.406 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 unbound from our chassis
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.409 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0263cd25-ddb2-49f9-ab5b-2f514c861684, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.410 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c146bdaf-128a-4294-83d0-5f62c72401dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.410 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace which is not needed anymore
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 22 09:32:29 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Consumed 14.928s CPU time.
Nov 22 09:32:29 compute-0 systemd-machined[215941]: Machine qemu-119-instance-00000063 terminated.
Nov 22 09:32:29 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : haproxy version is 2.8.14-c23fe91
Nov 22 09:32:29 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : path to executable is /usr/sbin/haproxy
Nov 22 09:32:29 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [WARNING]  (354701) : Exiting Master process...
Nov 22 09:32:29 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [ALERT]    (354701) : Current worker (354703) exited with code 143 (Terminated)
Nov 22 09:32:29 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [WARNING]  (354701) : All workers exited. Exiting... (0)
Nov 22 09:32:29 compute-0 systemd[1]: libpod-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope: Deactivated successfully.
Nov 22 09:32:29 compute-0 podman[355669]: 2025-11-22 09:32:29.584086393 +0000 UTC m=+0.055678431 container died 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da-userdata-shm.mount: Deactivated successfully.
Nov 22 09:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-05970ec99c950ccf608d1273db9007048dadc52d0e0d40152159bb3df22e752b-merged.mount: Deactivated successfully.
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 podman[355669]: 2025-11-22 09:32:29.644635664 +0000 UTC m=+0.116227702 container cleanup 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:32:29 compute-0 systemd[1]: libpod-conmon-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope: Deactivated successfully.
Nov 22 09:32:29 compute-0 ceph-mon[75021]: pgmap v2111: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 6.1 MiB/s wr, 153 op/s
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.683 253665 DEBUG nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.685 253665 WARNING nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state reboot_started.
Nov 22 09:32:29 compute-0 podman[355706]: 2025-11-22 09:32:29.730139728 +0000 UTC m=+0.054700407 container remove 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6a5126-7aa2-4ab5-b853-4641d63af7fb]: (4, ('Sat Nov 22 09:32:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da)\n18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da\nSat Nov 22 09:32:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da)\n18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.740 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[822923fb-efc0-4e2d-9384-06fece24f475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.741 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 kernel: tap0263cd25-d0: left promiscuous mode
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.767 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31a156fc-396a-4b62-9699-aa9340bdd9ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 166 op/s
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.791 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0e6019-8fdd-43e3-a3c5-e7a1c1c93cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[79f63a22-2d15-47b5-826d-46e4563b6a13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a0a21a-2005-4119-97d7-56014d3007c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677083, 'reachable_time': 32880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355725, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.819 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:32:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.819 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[427004bc-a955-4086-8685-8d329e899696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d0263cd25\x2dddb2\x2d49f9\x2dab5b\x2d2f514c861684.mount: Deactivated successfully.
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.954 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.955 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:29 compute-0 nova_compute[253661]: 2025-11-22 09:32:29.986 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.094 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.095 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.104 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.104 253665 INFO nova.compute.claims [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.110 253665 INFO nova.virt.libvirt.driver [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance shutdown successfully.
Nov 22 09:32:30 compute-0 kernel: tapf0934c58-4d: entered promiscuous mode
Nov 22 09:32:30 compute-0 systemd-udevd[355650]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:32:30 compute-0 ovn_controller[152872]: 2025-11-22T09:32:30Z|01000|binding|INFO|Claiming lport f0934c58-4d53-43e5-8132-eb2195819f1f for this chassis.
Nov 22 09:32:30 compute-0 ovn_controller[152872]: 2025-11-22T09:32:30Z|01001|binding|INFO|f0934c58-4d53-43e5-8132-eb2195819f1f: Claiming fa:16:3e:00:28:3e 10.100.0.5
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.1888] manager: (tapf0934c58-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.197 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.199 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 bound to our chassis
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.1997] device (tapf0934c58-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.2017] device (tapf0934c58-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:32:30 compute-0 ovn_controller[152872]: 2025-11-22T09:32:30Z|01002|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f ovn-installed in OVS
Nov 22 09:32:30 compute-0 ovn_controller[152872]: 2025-11-22T09:32:30Z|01003|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f up in Southbound
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[faf58e14-0af1-4283-89ad-26f484e694c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.224 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0263cd25-d1 in ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.228 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0263cd25-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.228 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dd341b-236e-41bb-95c1-9e92dd600c0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.229 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b56b88-a795-4fe3-b532-71538f27b76a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.243 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ab70ba73-06d5-4828-abf4-a3b16dac6337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.242 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:30 compute-0 systemd-machined[215941]: New machine qemu-121-instance-00000063.
Nov 22 09:32:30 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae95088-8edf-40c7-b023-088d2c147848]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.320 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba108be4-a60f-49c9-93b4-fdc16c25bad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.3275] manager: (tap0263cd25-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee66dfc-e9ab-4837-9331-882614683c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.371 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f30e21b-7b25-4413-84b1-147f126f7ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.375 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7e2fa3-379d-4b2a-beba-14d88c372301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.4131] device (tap0263cd25-d0): carrier: link connected
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.420 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8e0f96-4326-4c82-a95c-5b8b013afb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0eb291-667b-4338-ba90-ba32eec8f33f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680115, 'reachable_time': 16988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355789, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34447df0-b7fa-40ff-b69f-5e0605711745]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:31f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680115, 'tstamp': 680115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355790, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c233f63c-c036-4385-9821-e39b0e8fdfc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680115, 'reachable_time': 16988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355791, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[078e0e2c-0dfb-4fa1-b248-7d25aa5f0122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88af3d8c-acf5-47fd-81b3-ebe38273ae6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.629 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0263cd25-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:30 compute-0 kernel: tap0263cd25-d0: entered promiscuous mode
Nov 22 09:32:30 compute-0 NetworkManager[48920]: <info>  [1763803950.6329] manager: (tap0263cd25-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.637 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0263cd25-d0, col_values=(('external_ids', {'iface-id': '771f6da7-e306-4e95-84a5-f08be3c60513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 ovn_controller[152872]: 2025-11-22T09:32:30Z|01004|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.640 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6da271-4572-46cd-8c05-5e8c4f663e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.642 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:32:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.644 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'env', 'PROCESS_TAG=haproxy-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0263cd25-ddb2-49f9-ab5b-2f514c861684.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:32:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326492962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.761 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.770 253665 DEBUG nova.compute.provider_tree [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.822 253665 DEBUG nova.scheduler.client.report [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.852 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.853 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.932 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.933 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.962 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:32:30 compute-0 nova_compute[253661]: 2025-11-22 09:32:30.982 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:32:31 compute-0 podman[355843]: 2025-11-22 09:32:31.069671116 +0000 UTC m=+0.069561643 container create ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.091 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.093 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.093 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating image(s)
Nov 22 09:32:31 compute-0 systemd[1]: Started libpod-conmon-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.130 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:31 compute-0 podman[355843]: 2025-11-22 09:32:31.041028201 +0000 UTC m=+0.040918748 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:32:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:32:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da5c3a54b658459714df9c8a3e587b9a9c6c388db9cb59b269295b55bea989a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.169 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:31 compute-0 podman[355843]: 2025-11-22 09:32:31.171507913 +0000 UTC m=+0.171398500 container init ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:32:31 compute-0 podman[355843]: 2025-11-22 09:32:31.179233093 +0000 UTC m=+0.179123630 container start ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:32:31 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : New worker (355940) forked
Nov 22 09:32:31 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : Loading success.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.206 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.211 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.258 253665 DEBUG nova.policy [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04e47309bea74c04b0750912db283ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93c8020137e04db486facc42cfe30f23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.263 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for d60d8746-9288-4829-8073-bed8cf04d748 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.264 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803951.1365564, d60d8746-9288-4829-8073-bed8cf04d748 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.264 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Resumed (Lifecycle Event)
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.275 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance running successfully.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.276 253665 INFO nova.virt.libvirt.driver [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance soft rebooted successfully.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.277 253665 DEBUG nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.297 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.298 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.322 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.331 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.383 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.386 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.391 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.407 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803951.1385937, d60d8746-9288-4829-8073-bed8cf04d748 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.408 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Started (Lifecycle Event)
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.421 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.428 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.661 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:31 compute-0 ceph-mon[75021]: pgmap v2112: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 166 op/s
Nov 22 09:32:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1326492962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.729 253665 DEBUG nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.730 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.731 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.731 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.732 253665 DEBUG nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.732 253665 WARNING nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state rescued and task_state None.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.744 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] resizing rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:32:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.810 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.811 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.812 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.812 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.813 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.814 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.814 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.815 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.816 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.816 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.817 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.819 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.820 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.820 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.862 253665 DEBUG nova.objects.instance [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.878 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.879 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Ensure instance console log exists: /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.880 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.880 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:31 compute-0 nova_compute[253661]: 2025-11-22 09:32:31.881 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.257 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.432 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.434 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.434 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:32 compute-0 nova_compute[253661]: 2025-11-22 09:32:32.963 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Successfully created port: 4b489529-5b94-46ce-9810-23bef9215c04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.613 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.630 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.631 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:32:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:33 compute-0 ceph-mon[75021]: pgmap v2113: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:32:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 277 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 143 op/s
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.819 253665 DEBUG nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.820 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:33 compute-0 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 WARNING nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state rescued and task_state None.
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.148 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Successfully updated port: 4b489529-5b94-46ce-9810-23bef9215c04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.177 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.178 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.179 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:34 compute-0 nova_compute[253661]: 2025-11-22 09:32:34.356 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:32:35 compute-0 nova_compute[253661]: 2025-11-22 09:32:35.561 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:35 compute-0 ceph-mon[75021]: pgmap v2114: 305 pgs: 305 active+clean; 277 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 143 op/s
Nov 22 09:32:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 165 op/s
Nov 22 09:32:35 compute-0 nova_compute[253661]: 2025-11-22 09:32:35.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:35 compute-0 nova_compute[253661]: 2025-11-22 09:32:35.922 253665 DEBUG nova.compute.manager [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-changed-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:35 compute-0 nova_compute[253661]: 2025-11-22 09:32:35.923 253665 DEBUG nova.compute.manager [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Refreshing instance network info cache due to event network-changed-4b489529-5b94-46ce-9810-23bef9215c04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:32:35 compute-0 nova_compute[253661]: 2025-11-22 09:32:35.923 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.643 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.645 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance network_info: |[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.646 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.646 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Refreshing network info cache for port 4b489529-5b94-46ce-9810-23bef9215c04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.650 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start _get_guest_xml network_info=[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.656 253665 WARNING nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.666 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.668 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.678 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.679 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.679 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.680 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.680 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.683 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.683 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.684 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:32:36 compute-0 nova_compute[253661]: 2025-11-22 09:32:36.687 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:36 compute-0 ceph-mon[75021]: pgmap v2115: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 165 op/s
Nov 22 09:32:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281243671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.173 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.200 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.206 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.284 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.285 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.285 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.286 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.286 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735989894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.713 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.716 253665 DEBUG nova.virt.libvirt.vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.717 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.718 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.719 253665 DEBUG nova.objects.instance [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.736 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <uuid>80bb6ea3-dbff-48a3-b804-e3d356031a23</uuid>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <name>instance-00000064</name>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSON-server-1406665007</nova:name>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:32:36</nova:creationTime>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <nova:port uuid="4b489529-5b94-46ce-9810-23bef9215c04">
Nov 22 09:32:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="serial">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="uuid">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk">
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config">
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:01:fd:71"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <target dev="tap4b489529-5b"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log" append="off"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:32:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:32:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:32:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:32:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:32:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Preparing to wait for external event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.739 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.740 253665 DEBUG nova.virt.libvirt.vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.740 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.741 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.741 253665 DEBUG os_vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.742 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.743 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.744 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:32:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4281243671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2735989894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b489529-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.760 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b489529-5b, col_values=(('external_ids', {'iface-id': '4b489529-5b94-46ce-9810-23bef9215c04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:fd:71', 'vm-uuid': '80bb6ea3-dbff-48a3-b804-e3d356031a23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:37 compute-0 NetworkManager[48920]: <info>  [1763803957.7632] manager: (tap4b489529-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Nov 22 09:32:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:32:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672544068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.772 253665 INFO os_vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b')
Nov 22 09:32:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.792 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.848 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.849 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.849 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:01:fd:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.850 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Using config drive
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.876 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.917 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.926 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:37 compute-0 nova_compute[253661]: 2025-11-22 09:32:37.927 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.134 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3552MB free_disk=59.855552673339844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7da16450-9ec5-472a-99df-81f56ee341fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d60d8746-9288-4829-8073-bed8cf04d748 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 80bb6ea3-dbff-48a3-b804-e3d356031a23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.296 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.358 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating config drive at /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.364 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft73g2cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.428 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updated VIF entry in instance network info cache for port 4b489529-5b94-46ce-9810-23bef9215c04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.429 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.450 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.526 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft73g2cu" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.561 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.566 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.749 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.750 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting local config drive /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config because it was imported into RBD.
Nov 22 09:32:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2672544068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:38 compute-0 ceph-mon[75021]: pgmap v2116: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 09:32:38 compute-0 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 09:32:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:32:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193320973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:38 compute-0 ovn_controller[152872]: 2025-11-22T09:32:38Z|01005|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 09:32:38 compute-0 ovn_controller[152872]: 2025-11-22T09:32:38Z|01006|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:38 compute-0 NetworkManager[48920]: <info>  [1763803958.8148] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Nov 22 09:32:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.822 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.825 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:32:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.827 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.828 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83a5bbeb-f605-4d88-b4f5-8bb78a478094]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:38 compute-0 ovn_controller[152872]: 2025-11-22T09:32:38Z|01007|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 09:32:38 compute-0 ovn_controller[152872]: 2025-11-22T09:32:38Z|01008|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.845 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.853 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:32:38 compute-0 systemd-machined[215941]: New machine qemu-122-instance-00000064.
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.867 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:32:38 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000064.
Nov 22 09:32:38 compute-0 systemd-udevd[356244]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.893 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:32:38 compute-0 nova_compute[253661]: 2025-11-22 09:32:38.893 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:38 compute-0 NetworkManager[48920]: <info>  [1763803958.8961] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:32:38 compute-0 NetworkManager[48920]: <info>  [1763803958.8990] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:32:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/193320973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.791 253665 DEBUG nova.compute.manager [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.792 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.793 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.793 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.794 253665 DEBUG nova.compute.manager [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Processing event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.868 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.975 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9749005, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.975 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.978 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.982 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance spawned successfully.
Nov 22 09:32:39 compute-0 nova_compute[253661]: 2025-11-22 09:32:39.985 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.015 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.020 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.022 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.022 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.027 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.049 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.050 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9784346, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Paused (Lifecycle Event)
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.072 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.077 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9814057, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.077 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.092 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.096 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.110 253665 INFO nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 9.02 seconds to spawn the instance on the hypervisor.
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.111 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.182 253665 INFO nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 10.13 seconds to build instance.
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.201 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:40 compute-0 ceph-mon[75021]: pgmap v2117: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Nov 22 09:32:40 compute-0 nova_compute[253661]: 2025-11-22 09:32:40.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.905 253665 DEBUG nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.906 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:41 compute-0 nova_compute[253661]: 2025-11-22 09:32:41.908 253665 WARNING nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.
Nov 22 09:32:42 compute-0 nova_compute[253661]: 2025-11-22 09:32:42.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:42 compute-0 ceph-mon[75021]: pgmap v2118: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 09:32:43 compute-0 nova_compute[253661]: 2025-11-22 09:32:43.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:32:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 22 09:32:43 compute-0 ovn_controller[152872]: 2025-11-22T09:32:43Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:28:3e 10.100.0.5
Nov 22 09:32:44 compute-0 ceph-mon[75021]: pgmap v2119: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 22 09:32:45 compute-0 podman[356295]: 2025-11-22 09:32:45.397579785 +0000 UTC m=+0.083311072 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:32:45 compute-0 podman[356296]: 2025-11-22 09:32:45.403751016 +0000 UTC m=+0.089514894 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:32:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 293 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 377 KiB/s wr, 205 op/s
Nov 22 09:32:45 compute-0 nova_compute[253661]: 2025-11-22 09:32:45.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:46 compute-0 ceph-mon[75021]: pgmap v2120: 305 pgs: 305 active+clean; 293 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 377 KiB/s wr, 205 op/s
Nov 22 09:32:47 compute-0 nova_compute[253661]: 2025-11-22 09:32:47.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 45 KiB/s wr, 188 op/s
Nov 22 09:32:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:48 compute-0 ceph-mon[75021]: pgmap v2121: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 45 KiB/s wr, 188 op/s
Nov 22 09:32:49 compute-0 nova_compute[253661]: 2025-11-22 09:32:49.744 253665 INFO nova.compute.manager [None req-46e00d8f-4e4a-4c47-9acb-04720c537d25 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Get console output
Nov 22 09:32:49 compute-0 nova_compute[253661]: 2025-11-22 09:32:49.749 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:32:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 45 KiB/s wr, 176 op/s
Nov 22 09:32:50 compute-0 nova_compute[253661]: 2025-11-22 09:32:50.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:50 compute-0 ceph-mon[75021]: pgmap v2122: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 45 KiB/s wr, 176 op/s
Nov 22 09:32:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Nov 22 09:32:51 compute-0 nova_compute[253661]: 2025-11-22 09:32:51.931 253665 INFO nova.compute.manager [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Rescuing
Nov 22 09:32:51 compute-0 nova_compute[253661]: 2025-11-22 09:32:51.932 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:51 compute-0 nova_compute[253661]: 2025-11-22 09:32:51.933 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:51 compute-0 nova_compute[253661]: 2025-11-22 09:32:51.934 253665 DEBUG nova.network.neutron [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.220 253665 DEBUG nova.compute.manager [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.221 253665 DEBUG nova.compute.manager [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:32:52
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:32:52 compute-0 podman[356332]: 2025-11-22 09:32:52.418393576 +0000 UTC m=+0.095962434 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.421 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.423 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.424 253665 INFO nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Terminating instance
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.425 253665 DEBUG nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:32:52 compute-0 kernel: tapf0934c58-4d (unregistering): left promiscuous mode
Nov 22 09:32:52 compute-0 NetworkManager[48920]: <info>  [1763803972.4850] device (tapf0934c58-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:32:52 compute-0 ovn_controller[152872]: 2025-11-22T09:32:52Z|01009|binding|INFO|Releasing lport f0934c58-4d53-43e5-8132-eb2195819f1f from this chassis (sb_readonly=0)
Nov 22 09:32:52 compute-0 ovn_controller[152872]: 2025-11-22T09:32:52Z|01010|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f down in Southbound
Nov 22 09:32:52 compute-0 ovn_controller[152872]: 2025-11-22T09:32:52Z|01011|binding|INFO|Removing iface tapf0934c58-4d ovn-installed in OVS
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.516 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.518 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 unbound from our chassis
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.519 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0263cd25-ddb2-49f9-ab5b-2f514c861684, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fe8284-99c2-4451-8a0d-3f756e59d58a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.522 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace which is not needed anymore
Nov 22 09:32:52 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 22 09:32:52 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 13.783s CPU time.
Nov 22 09:32:52 compute-0 systemd-machined[215941]: Machine qemu-121-instance-00000063 terminated.
Nov 22 09:32:52 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : haproxy version is 2.8.14-c23fe91
Nov 22 09:32:52 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : path to executable is /usr/sbin/haproxy
Nov 22 09:32:52 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [WARNING]  (355929) : Exiting Master process...
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.676 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance destroyed successfully.
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.677 253665 DEBUG nova.objects.instance [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:52 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [ALERT]    (355929) : Current worker (355940) exited with code 143 (Terminated)
Nov 22 09:32:52 compute-0 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [WARNING]  (355929) : All workers exited. Exiting... (0)
Nov 22 09:32:52 compute-0 systemd[1]: libpod-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope: Deactivated successfully.
Nov 22 09:32:52 compute-0 podman[356381]: 2025-11-22 09:32:52.689100568 +0000 UTC m=+0.072069024 container died ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.692 253665 DEBUG nova.virt.libvirt.vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.693 253665 DEBUG nova.network.os_vif_util [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.694 253665 DEBUG nova.network.os_vif_util [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.695 253665 DEBUG os_vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.699 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0934c58-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.700 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.705 253665 INFO os_vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d')
Nov 22 09:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18-userdata-shm.mount: Deactivated successfully.
Nov 22 09:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-da5c3a54b658459714df9c8a3e587b9a9c6c388db9cb59b269295b55bea989a9-merged.mount: Deactivated successfully.
Nov 22 09:32:52 compute-0 podman[356381]: 2025-11-22 09:32:52.732852856 +0000 UTC m=+0.115821312 container cleanup ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:32:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:32:52 compute-0 systemd[1]: libpod-conmon-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope: Deactivated successfully.
Nov 22 09:32:52 compute-0 podman[356437]: 2025-11-22 09:32:52.813721335 +0000 UTC m=+0.056596703 container remove ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33228e81-09fb-4844-bf0c-01ba05e43487]: (4, ('Sat Nov 22 09:32:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18)\nef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18\nSat Nov 22 09:32:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18)\nef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c17f87a3-0b34-4358-a87d-f83307565349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 kernel: tap0263cd25-d0: left promiscuous mode
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.846 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69ff97a3-65d7-4d43-8ebb-c06ef18b8614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.866 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dad93bd8-0bf8-457f-8776-ed114396cf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f50ca2-26ba-4de0-a7f2-a686770a7580]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.893 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6058d3-629b-4c55-8a5d-9ba9f2186052]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680105, 'reachable_time': 43971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356455, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 ceph-mon[75021]: pgmap v2123: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Nov 22 09:32:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d0263cd25\x2dddb2\x2d49f9\x2dab5b\x2d2f514c861684.mount: Deactivated successfully.
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.900 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:32:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.901 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a4629c23-0443-47f1-ad21-bbc51ced5ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.913 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:52 compute-0 nova_compute[253661]: 2025-11-22 09:32:52.915 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.224 253665 INFO nova.virt.libvirt.driver [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deleting instance files /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748_del
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.225 253665 INFO nova.virt.libvirt.driver [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deletion of /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748_del complete
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.301 253665 INFO nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.302 253665 DEBUG oslo.service.loopingcall [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.303 253665 DEBUG nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.303 253665 DEBUG nova.network.neutron [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:32:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.718 253665 DEBUG nova.network.neutron [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:53 compute-0 nova_compute[253661]: 2025-11-22 09:32:53.791 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 272 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 725 KiB/s wr, 182 op/s
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.023 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.023 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.073 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.087 253665 DEBUG nova.network.neutron [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.114 253665 INFO nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 0.81 seconds to deallocate network for instance.
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.260 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.261 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.266 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:32:54 compute-0 nova_compute[253661]: 2025-11-22 09:32:54.358 253665 DEBUG oslo_concurrency.processutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.033 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.034 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.035 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 WARNING nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state deleted and task_state None.
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.037 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-deleted-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:32:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121893696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:55 compute-0 ceph-mon[75021]: pgmap v2124: 305 pgs: 305 active+clean; 272 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 725 KiB/s wr, 182 op/s
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.272 253665 DEBUG oslo_concurrency.processutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.282 253665 DEBUG nova.compute.provider_tree [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.299 253665 DEBUG nova.scheduler.client.report [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.434 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.477 253665 INFO nova.scheduler.client.report [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance d60d8746-9288-4829-8073-bed8cf04d748
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.544 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:32:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 266 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Nov 22 09:32:55 compute-0 nova_compute[253661]: 2025-11-22 09:32:55.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3121893696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:32:56 compute-0 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 09:32:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:32:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:32:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:32:56 compute-0 NetworkManager[48920]: <info>  [1763803976.6052] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:32:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:32:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:32:56 compute-0 ovn_controller[152872]: 2025-11-22T09:32:56Z|01012|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 09:32:56 compute-0 ovn_controller[152872]: 2025-11-22T09:32:56Z|01013|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 09:32:56 compute-0 ovn_controller[152872]: 2025-11-22T09:32:56Z|01014|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 09:32:56 compute-0 nova_compute[253661]: 2025-11-22 09:32:56.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:56 compute-0 nova_compute[253661]: 2025-11-22 09:32:56.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:56 compute-0 nova_compute[253661]: 2025-11-22 09:32:56.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.643 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:32:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.644 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:32:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.645 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:32:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.647 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20b50e05-c14e-4a53-890e-e00c5e6db9b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:32:56 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 09:32:56 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000064.scope: Consumed 14.723s CPU time.
Nov 22 09:32:56 compute-0 systemd-machined[215941]: Machine qemu-122-instance-00000064 terminated.
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.116 253665 DEBUG nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.116 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 WARNING nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.
Nov 22 09:32:57 compute-0 ceph-mon[75021]: pgmap v2125: 305 pgs: 305 active+clean; 266 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.298 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance shutdown successfully after 3 seconds.
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.306 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.306 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.331 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Attempting rescue
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.332 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.339 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.340 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating image(s)
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.369 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.374 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.446 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.478 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.482 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.556 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.557 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.558 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.558 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.588 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.593 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 245 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 09:32:57 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.950 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.951 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.966 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.967 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start _get_guest_xml network_info=[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.967 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:57 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.994 253665 WARNING nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:57.999 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.000 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.003 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.008 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.031 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:32:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554647853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.582 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:58 compute-0 nova_compute[253661]: 2025-11-22 09:32:58.583 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:32:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523787406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.077 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.083 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.262 253665 DEBUG nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.263 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.263 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 DEBUG nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 WARNING nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.
Nov 22 09:32:59 compute-0 ceph-mon[75021]: pgmap v2126: 305 pgs: 305 active+clean; 245 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 09:32:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3554647853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2523787406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:32:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908467108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.610 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.611 253665 DEBUG nova.virt.libvirt.vif [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:40Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.612 253665 DEBUG nova.network.os_vif_util [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.613 253665 DEBUG nova.network.os_vif_util [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.615 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.639 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <uuid>80bb6ea3-dbff-48a3-b804-e3d356031a23</uuid>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <name>instance-00000064</name>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerRescueTestJSON-server-1406665007</nova:name>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:32:57</nova:creationTime>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <nova:port uuid="4b489529-5b94-46ce-9810-23bef9215c04">
Nov 22 09:32:59 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <system>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="serial">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="uuid">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </system>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <os>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </os>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <features>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </features>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <target dev="vdb" bus="virtio"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </source>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:32:59 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:01:fd:71"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <target dev="tap4b489529-5b"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log" append="off"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <video>
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </video>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:32:59 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:32:59 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:32:59 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:32:59 compute-0 nova_compute[253661]: </domain>
Nov 22 09:32:59 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.649 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.717 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:01:fd:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.719 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Using config drive
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.744 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.781 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:32:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 453 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Nov 22 09:32:59 compute-0 nova_compute[253661]: 2025-11-22 09:32:59.805 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'keypairs' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2908467108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:00 compute-0 nova_compute[253661]: 2025-11-22 09:33:00.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.229 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.229 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.231 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.251 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.264 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Image id 878156d4-57f6-4a8b-8f4c-cbde182bb832 yields fingerprint 82db50257fd208421e31241f1b0ae2cc5ee8c9c4 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] image 878156d4-57f6-4a8b-8f4c-cbde182bb832 at (/var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4): checking
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] image 878156d4-57f6-4a8b-8f4c-cbde182bb832 at (/var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.268 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] 7da16450-9ec5-472a-99df-81f56ee341fc is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] 80bb6ea3-dbff-48a3-b804-e3d356031a23 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Active base files: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Removable base files: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 22 09:33:01 compute-0 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 22 09:33:01 compute-0 ceph-mon[75021]: pgmap v2127: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 453 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Nov 22 09:33:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.2 MiB/s wr, 110 op/s
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.262 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating config drive at /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.269 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5gbsf03m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.414 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5gbsf03m" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.444 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.449 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.651 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.652 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting local config drive /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue because it was imported into RBD.
Nov 22 09:33:02 compute-0 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:02 compute-0 NetworkManager[48920]: <info>  [1763803982.7087] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:02 compute-0 ovn_controller[152872]: 2025-11-22T09:33:02Z|01015|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 09:33:02 compute-0 ovn_controller[152872]: 2025-11-22T09:33:02Z|01016|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 09:33:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.722 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.723 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:33:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.724 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:33:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73cac9ee-7301-441b-8801-1c4d89acf506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:02 compute-0 ovn_controller[152872]: 2025-11-22T09:33:02Z|01017|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 09:33:02 compute-0 ovn_controller[152872]: 2025-11-22T09:33:02Z|01018|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 09:33:02 compute-0 systemd-udevd[356731]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:02 compute-0 NetworkManager[48920]: <info>  [1763803982.7467] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:33:02 compute-0 NetworkManager[48920]: <info>  [1763803982.7475] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:33:02 compute-0 systemd-machined[215941]: New machine qemu-123-instance-00000064.
Nov 22 09:33:02 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020612292032949142 of space, bias 1.0, pg target 0.6183687609884743 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:33:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.990 253665 DEBUG nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.993 253665 DEBUG nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:02 compute-0 nova_compute[253661]: 2025-11-22 09:33:02.993 253665 WARNING nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.
Nov 22 09:33:03 compute-0 ceph-mon[75021]: pgmap v2128: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.2 MiB/s wr, 110 op/s
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.498 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 80bb6ea3-dbff-48a3-b804-e3d356031a23 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803983.4981024, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.505 253665 DEBUG nova.compute.manager [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.536 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.573 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.576 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803983.5018227, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.576 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.602 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:03 compute-0 nova_compute[253661]: 2025-11-22 09:33:03.606 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 294 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.062 253665 DEBUG nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.063 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.063 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 DEBUG nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 WARNING nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state rescued and task_state None.
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.208 253665 INFO nova.compute.manager [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Unrescuing
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.208 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.209 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.209 253665 DEBUG nova.network.neutron [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:33:05 compute-0 ceph-mon[75021]: pgmap v2129: 305 pgs: 305 active+clean; 294 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 22 09:33:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Nov 22 09:33:05 compute-0 nova_compute[253661]: 2025-11-22 09:33:05.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:06 compute-0 sudo[356802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:06 compute-0 sudo[356802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:06 compute-0 sudo[356802]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:06 compute-0 sudo[356827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:33:06 compute-0 sudo[356827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:06 compute-0 sudo[356827]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:06 compute-0 sudo[356852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:06 compute-0 sudo[356852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:06 compute-0 sudo[356852]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:06 compute-0 sudo[356877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:33:06 compute-0 sudo[356877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:06 compute-0 nova_compute[253661]: 2025-11-22 09:33:06.901 253665 DEBUG nova.network.neutron [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:06 compute-0 nova_compute[253661]: 2025-11-22 09:33:06.929 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:06 compute-0 nova_compute[253661]: 2025-11-22 09:33:06.930 253665 DEBUG nova.objects.instance [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'flavor' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:07 compute-0 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 09:33:07 compute-0 NetworkManager[48920]: <info>  [1763803987.0281] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01019|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01020|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01021|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.049 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.060 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.061 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef42efde-dd66-492a-b0b6-421ba8a8b208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:07 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 09:33:07 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 4.311s CPU time.
Nov 22 09:33:07 compute-0 systemd-machined[215941]: Machine qemu-123-instance-00000064 terminated.
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.213 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.213 253665 DEBUG nova.objects.instance [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:07 compute-0 sudo[356877]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:07 compute-0 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 09:33:07 compute-0 systemd-udevd[356920]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:33:07 compute-0 NetworkManager[48920]: <info>  [1763803987.3543] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01022|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01023|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.361 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.363 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.364 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:33:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4532733f-b276-4ce4-b7c0-cf540ea81745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:07 compute-0 NetworkManager[48920]: <info>  [1763803987.3700] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:33:07 compute-0 NetworkManager[48920]: <info>  [1763803987.3710] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.374 253665 DEBUG nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.374 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:07 compute-0 ceph-mon[75021]: pgmap v2130: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 WARNING nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state rescued and task_state unrescuing.
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01024|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 09:33:07 compute-0 ovn_controller[152872]: 2025-11-22T09:33:07Z|01025|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:07 compute-0 systemd-machined[215941]: New machine qemu-124-instance-00000064.
Nov 22 09:33:07 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000064.
Nov 22 09:33:07 compute-0 sudo[356963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:07 compute-0 sudo[356963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:07 compute-0 sudo[356963]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:07 compute-0 sudo[356995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:33:07 compute-0 sudo[356995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:07 compute-0 sudo[356995]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:07 compute-0 sudo[357021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:07 compute-0 sudo[357021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:07 compute-0 sudo[357021]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:07 compute-0 sudo[357046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 09:33:07 compute-0 sudo[357046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.675 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803972.6739175, d60d8746-9288-4829-8073-bed8cf04d748 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.675 253665 INFO nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Stopped (Lifecycle Event)
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.698 253665 DEBUG nova.compute.manager [None req-5ace31e9-a4b8-4484-8d74-63b1873015f3 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:07 compute-0 nova_compute[253661]: 2025-11-22 09:33:07.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Nov 22 09:33:07 compute-0 sudo[357046]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:33:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 403ea977-11d0-4979-8d25-a61f8e3bedf5 does not exist
Nov 22 09:33:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b3a59441-e1b8-4ef1-8a6f-84630c20efcf does not exist
Nov 22 09:33:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1e60d011-1d12-4a08-b595-40ce54b23239 does not exist
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:33:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.063 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 80bb6ea3-dbff-48a3-b804-e3d356031a23 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.064 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803988.0632834, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.064 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.105 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:08 compute-0 sudo[357133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:08 compute-0 sudo[357133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803988.0669854, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.123 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)
Nov 22 09:33:08 compute-0 sudo[357133]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.144 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.149 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.166 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 22 09:33:08 compute-0 sudo[357176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:33:08 compute-0 sudo[357176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:08 compute-0 sudo[357176]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:08 compute-0 sudo[357201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:08 compute-0 sudo[357201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:08 compute-0 sudo[357201]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:08 compute-0 sudo[357227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:33:08 compute-0 sudo[357227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:08 compute-0 nova_compute[253661]: 2025-11-22 09:33:08.541 253665 DEBUG nova.compute.manager [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.657202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988657252, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 626, "num_deletes": 252, "total_data_size": 707025, "memory_usage": 720136, "flush_reason": "Manual Compaction"}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988664985, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 701325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43345, "largest_seqno": 43970, "table_properties": {"data_size": 698000, "index_size": 1233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6563, "raw_average_key_size": 16, "raw_value_size": 691443, "raw_average_value_size": 1732, "num_data_blocks": 55, "num_entries": 399, "num_filter_entries": 399, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803944, "oldest_key_time": 1763803944, "file_creation_time": 1763803988, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7837 microseconds, and 3579 cpu microseconds.
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.665035) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 701325 bytes OK
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.665059) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667264) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667279) EVENT_LOG_v1 {"time_micros": 1763803988667274, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667299) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 703663, prev total WAL file size 703663, number of live WAL files 2.
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(684KB)], [98(8567KB)]
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988668003, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9474220, "oldest_snapshot_seqno": -1}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6488 keys, 8749073 bytes, temperature: kUnknown
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988741751, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8749073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8705163, "index_size": 26584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 168719, "raw_average_key_size": 26, "raw_value_size": 8588225, "raw_average_value_size": 1323, "num_data_blocks": 1042, "num_entries": 6488, "num_filter_entries": 6488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803988, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.742062) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8749073 bytes
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.744565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.3 rd, 118.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.4 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(26.0) write-amplify(12.5) OK, records in: 7003, records dropped: 515 output_compression: NoCompression
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.744592) EVENT_LOG_v1 {"time_micros": 1763803988744577, "job": 58, "event": "compaction_finished", "compaction_time_micros": 73865, "compaction_time_cpu_micros": 30136, "output_level": 6, "num_output_files": 1, "total_output_size": 8749073, "num_input_records": 7003, "num_output_records": 6488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988744995, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988746675, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.759780083 +0000 UTC m=+0.043887640 container create 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:33:08 compute-0 systemd[1]: Started libpod-conmon-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope.
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.737983658 +0000 UTC m=+0.022091235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.876528828 +0000 UTC m=+0.160636405 container init 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.88601544 +0000 UTC m=+0.170122987 container start 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.889726342 +0000 UTC m=+0.173833889 container attach 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:33:08 compute-0 systemd[1]: libpod-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope: Deactivated successfully.
Nov 22 09:33:08 compute-0 eager_leakey[357309]: 167 167
Nov 22 09:33:08 compute-0 conmon[357309]: conmon 9fb1dd3785202700188a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope/container/memory.events
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.896042568 +0000 UTC m=+0.180150115 container died 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-36464d677d3d139b6d49c9dd766396375ddb61d390f8ca9dc556ec0fbea28f36-merged.mount: Deactivated successfully.
Nov 22 09:33:08 compute-0 podman[357293]: 2025-11-22 09:33:08.940665315 +0000 UTC m=+0.224772862 container remove 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:33:08 compute-0 ceph-mon[75021]: pgmap v2131: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:33:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:33:08 compute-0 systemd[1]: libpod-conmon-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope: Deactivated successfully.
Nov 22 09:33:09 compute-0 podman[357334]: 2025-11-22 09:33:09.140890223 +0000 UTC m=+0.040951678 container create a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:33:09 compute-0 systemd[1]: Started libpod-conmon-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope.
Nov 22 09:33:09 compute-0 podman[357334]: 2025-11-22 09:33:09.123085516 +0000 UTC m=+0.023147001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:09 compute-0 podman[357334]: 2025-11-22 09:33:09.255498544 +0000 UTC m=+0.155560019 container init a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:09 compute-0 podman[357334]: 2025-11-22 09:33:09.264488556 +0000 UTC m=+0.164550011 container start a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:33:09 compute-0 podman[357334]: 2025-11-22 09:33:09.269479518 +0000 UTC m=+0.169540993 container attach a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.470 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:09 compute-0 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.
Nov 22 09:33:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 142 op/s
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.315 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.317 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.318 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.318 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.319 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.320 253665 INFO nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Terminating instance
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.321 253665 DEBUG nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:33:10 compute-0 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 09:33:10 compute-0 NetworkManager[48920]: <info>  [1763803990.3667] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:33:10 compute-0 nifty_moore[357351]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:33:10 compute-0 nifty_moore[357351]: --> relative data size: 1.0
Nov 22 09:33:10 compute-0 nifty_moore[357351]: --> All data devices are unavailable
Nov 22 09:33:10 compute-0 ovn_controller[152872]: 2025-11-22T09:33:10Z|01026|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 09:33:10 compute-0 ovn_controller[152872]: 2025-11-22T09:33:10Z|01027|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 ovn_controller[152872]: 2025-11-22T09:33:10Z|01028|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.393 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '8', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.394 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:33:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.395 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:33:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[414474ee-541f-4c44-b7dc-869276b9d5ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 systemd[1]: libpod-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Deactivated successfully.
Nov 22 09:33:10 compute-0 systemd[1]: libpod-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Consumed 1.073s CPU time.
Nov 22 09:33:10 compute-0 podman[357334]: 2025-11-22 09:33:10.409819186 +0000 UTC m=+1.309880651 container died a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:33:10 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 09:33:10 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Consumed 2.719s CPU time.
Nov 22 09:33:10 compute-0 systemd-machined[215941]: Machine qemu-124-instance-00000064 terminated.
Nov 22 09:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b-merged.mount: Deactivated successfully.
Nov 22 09:33:10 compute-0 podman[357334]: 2025-11-22 09:33:10.469221238 +0000 UTC m=+1.369282693 container remove a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:33:10 compute-0 systemd[1]: libpod-conmon-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Deactivated successfully.
Nov 22 09:33:10 compute-0 sudo[357227]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:10 compute-0 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 09:33:10 compute-0 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 09:33:10 compute-0 NetworkManager[48920]: <info>  [1763803990.5510] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.570 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.570 253665 DEBUG nova.objects.instance [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.585 253665 DEBUG nova.virt.libvirt.vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:33:08Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.586 253665 DEBUG nova.network.os_vif_util [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.587 253665 DEBUG nova.network.os_vif_util [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.587 253665 DEBUG os_vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.590 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b489529-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:10 compute-0 sudo[357401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.598 253665 INFO os_vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b')
Nov 22 09:33:10 compute-0 sudo[357401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:10 compute-0 sudo[357401]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:10 compute-0 sudo[357450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:33:10 compute-0 sudo[357450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:10 compute-0 sudo[357450]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:10 compute-0 sudo[357478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:10 compute-0 sudo[357478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:10 compute-0 sudo[357478]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:10 compute-0 sudo[357503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:33:10 compute-0 sudo[357503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:10 compute-0 nova_compute[253661]: 2025-11-22 09:33:10.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:10 compute-0 ceph-mon[75021]: pgmap v2132: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 142 op/s
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.181292524 +0000 UTC m=+0.052394511 container create 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:33:11 compute-0 systemd[1]: Started libpod-conmon-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope.
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.155849148 +0000 UTC m=+0.026951185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:11.402 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:11.406 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.507160974 +0000 UTC m=+0.378263001 container init 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.518266247 +0000 UTC m=+0.389368274 container start 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:11 compute-0 epic_sinoussi[357588]: 167 167
Nov 22 09:33:11 compute-0 systemd[1]: libpod-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope: Deactivated successfully.
Nov 22 09:33:11 compute-0 conmon[357588]: conmon 73b1b4f676ef64d3a26d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope/container/memory.events
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.580 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.584 253665 WARNING nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state deleting.
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.624080752 +0000 UTC m=+0.495182829 container attach 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.62481034 +0000 UTC m=+0.495912367 container died 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8287e690c9884eaf8ceb14d39b030eb25bdf1764bed8d3bdb89b8ee210890fca-merged.mount: Deactivated successfully.
Nov 22 09:33:11 compute-0 podman[357571]: 2025-11-22 09:33:11.667172803 +0000 UTC m=+0.538274800 container remove 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:33:11 compute-0 systemd[1]: libpod-conmon-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope: Deactivated successfully.
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.713 253665 INFO nova.virt.libvirt.driver [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting instance files /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23_del
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.715 253665 INFO nova.virt.libvirt.driver [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deletion of /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23_del complete
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.776 253665 INFO nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 1.45 seconds to destroy the instance on the hypervisor.
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.777 253665 DEBUG oslo.service.loopingcall [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.778 253665 DEBUG nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:33:11 compute-0 nova_compute[253661]: 2025-11-22 09:33:11.778 253665 DEBUG nova.network.neutron [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:33:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 801 KiB/s wr, 102 op/s
Nov 22 09:33:11 compute-0 podman[357612]: 2025-11-22 09:33:11.870694372 +0000 UTC m=+0.051917269 container create cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:33:11 compute-0 systemd[1]: Started libpod-conmon-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope.
Nov 22 09:33:11 compute-0 podman[357612]: 2025-11-22 09:33:11.853923969 +0000 UTC m=+0.035146886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:11 compute-0 podman[357612]: 2025-11-22 09:33:11.985306503 +0000 UTC m=+0.166529420 container init cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:33:12 compute-0 podman[357612]: 2025-11-22 09:33:12.004078304 +0000 UTC m=+0.185301201 container start cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:33:12 compute-0 podman[357612]: 2025-11-22 09:33:12.008137104 +0000 UTC m=+0.189360021 container attach cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:33:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:33:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:33:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:33:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:12.409 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]: {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     "0": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "devices": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "/dev/loop3"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             ],
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_name": "ceph_lv0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_size": "21470642176",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "name": "ceph_lv0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "tags": {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_name": "ceph",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.crush_device_class": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.encrypted": "0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_id": "0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.vdo": "0"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             },
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "vg_name": "ceph_vg0"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         }
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     ],
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     "1": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "devices": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "/dev/loop4"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             ],
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_name": "ceph_lv1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_size": "21470642176",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "name": "ceph_lv1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "tags": {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_name": "ceph",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.crush_device_class": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.encrypted": "0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_id": "1",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.vdo": "0"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             },
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "vg_name": "ceph_vg1"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         }
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     ],
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     "2": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "devices": [
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "/dev/loop5"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             ],
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_name": "ceph_lv2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_size": "21470642176",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "name": "ceph_lv2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "tags": {
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.cluster_name": "ceph",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.crush_device_class": "",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.encrypted": "0",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osd_id": "2",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:                 "ceph.vdo": "0"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             },
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "type": "block",
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:             "vg_name": "ceph_vg2"
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:         }
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]:     ]
Nov 22 09:33:12 compute-0 stupefied_keldysh[357629]: }
Nov 22 09:33:12 compute-0 systemd[1]: libpod-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope: Deactivated successfully.
Nov 22 09:33:12 compute-0 podman[357612]: 2025-11-22 09:33:12.842981622 +0000 UTC m=+1.024204519 container died cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3-merged.mount: Deactivated successfully.
Nov 22 09:33:12 compute-0 podman[357612]: 2025-11-22 09:33:12.913000656 +0000 UTC m=+1.094223553 container remove cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:33:12 compute-0 systemd[1]: libpod-conmon-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope: Deactivated successfully.
Nov 22 09:33:12 compute-0 sudo[357503]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:12 compute-0 ceph-mon[75021]: pgmap v2133: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 801 KiB/s wr, 102 op/s
Nov 22 09:33:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:33:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:33:13 compute-0 sudo[357652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:13 compute-0 sudo[357652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:13 compute-0 sudo[357652]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:13 compute-0 sudo[357677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:33:13 compute-0 sudo[357677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:13 compute-0 sudo[357677]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:13 compute-0 sudo[357702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:13 compute-0 sudo[357702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:13 compute-0 sudo[357702]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:13 compute-0 sudo[357727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:33:13 compute-0 sudo[357727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.566 253665 DEBUG nova.network.neutron [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.586 253665 INFO nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 1.81 seconds to deallocate network for instance.
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.628 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.629 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.642567503 +0000 UTC m=+0.062076770 container create 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:33:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.692 253665 DEBUG nova.compute.manager [req-3ce4fe7b-df81-4ba6-b837-45ad9d1f894b req-24eb43d5-ec0b-4c06-b301-ce44a4bcb6c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-deleted-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:13 compute-0 systemd[1]: Started libpod-conmon-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope.
Nov 22 09:33:13 compute-0 nova_compute[253661]: 2025-11-22 09:33:13.708 253665 DEBUG oslo_concurrency.processutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.621870713 +0000 UTC m=+0.041380020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.76314436 +0000 UTC m=+0.182653657 container init 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.777725769 +0000 UTC m=+0.197235036 container start 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.781900202 +0000 UTC m=+0.201409489 container attach 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:33:13 compute-0 exciting_buck[357809]: 167 167
Nov 22 09:33:13 compute-0 systemd[1]: libpod-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope: Deactivated successfully.
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.789872798 +0000 UTC m=+0.209382115 container died 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:33:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 204 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 803 KiB/s wr, 181 op/s
Nov 22 09:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-927d722d656c188d70f46f8c61fc4d984254813916174383f307e07ba8cad23c-merged.mount: Deactivated successfully.
Nov 22 09:33:13 compute-0 podman[357792]: 2025-11-22 09:33:13.833544533 +0000 UTC m=+0.253053800 container remove 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:33:13 compute-0 systemd[1]: libpod-conmon-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope: Deactivated successfully.
Nov 22 09:33:14 compute-0 podman[357853]: 2025-11-22 09:33:14.029935766 +0000 UTC m=+0.049499499 container create 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:14 compute-0 systemd[1]: Started libpod-conmon-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope.
Nov 22 09:33:14 compute-0 podman[357853]: 2025-11-22 09:33:14.008206452 +0000 UTC m=+0.027770215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:33:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:14 compute-0 podman[357853]: 2025-11-22 09:33:14.145739847 +0000 UTC m=+0.165303580 container init 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 09:33:14 compute-0 podman[357853]: 2025-11-22 09:33:14.153682963 +0000 UTC m=+0.173246696 container start 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:33:14 compute-0 podman[357853]: 2025-11-22 09:33:14.157913407 +0000 UTC m=+0.177477160 container attach 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:33:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225321112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.187 253665 DEBUG oslo_concurrency.processutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.195 253665 DEBUG nova.compute.provider_tree [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.212 253665 DEBUG nova.scheduler.client.report [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.230 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.257 253665 INFO nova.scheduler.client.report [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Deleted allocations for instance 80bb6ea3-dbff-48a3-b804-e3d356031a23
Nov 22 09:33:14 compute-0 nova_compute[253661]: 2025-11-22 09:33:14.329 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:15 compute-0 ceph-mon[75021]: pgmap v2134: 305 pgs: 305 active+clean; 204 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 803 KiB/s wr, 181 op/s
Nov 22 09:33:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2225321112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]: {
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_id": 1,
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "type": "bluestore"
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     },
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_id": 0,
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "type": "bluestore"
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     },
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_id": 2,
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:         "type": "bluestore"
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]:     }
Nov 22 09:33:15 compute-0 condescending_sanderson[357870]: }
Nov 22 09:33:15 compute-0 systemd[1]: libpod-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Deactivated successfully.
Nov 22 09:33:15 compute-0 systemd[1]: libpod-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Consumed 1.128s CPU time.
Nov 22 09:33:15 compute-0 podman[357853]: 2025-11-22 09:33:15.276787315 +0000 UTC m=+1.296351068 container died 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 09:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87-merged.mount: Deactivated successfully.
Nov 22 09:33:15 compute-0 podman[357853]: 2025-11-22 09:33:15.385212213 +0000 UTC m=+1.404775956 container remove 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:33:15 compute-0 systemd[1]: libpod-conmon-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Deactivated successfully.
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.424 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.426 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.427 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.428 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.428 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.429 253665 INFO nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Terminating instance
Nov 22 09:33:15 compute-0 sudo[357727]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.430 253665 DEBUG nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:33:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:33:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:33:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4aeea8fa-eadd-48c7-a40d-9719dcbb0b23 does not exist
Nov 22 09:33:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev adcd9e4d-e58f-4d04-a693-682e3c112674 does not exist
Nov 22 09:33:15 compute-0 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 09:33:15 compute-0 NetworkManager[48920]: <info>  [1763803995.5044] device (tap5b1477f9-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:33:15 compute-0 ovn_controller[152872]: 2025-11-22T09:33:15Z|01029|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 09:33:15 compute-0 ovn_controller[152872]: 2025-11-22T09:33:15Z|01030|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 down in Southbound
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 ovn_controller[152872]: 2025-11-22T09:33:15Z|01031|binding|INFO|Removing iface tap5b1477f9-c3 ovn-installed in OVS
Nov 22 09:33:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.523 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:15 compute-0 sudo[357920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:33:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.525 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis
Nov 22 09:33:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.526 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:33:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa5595e-3d81-4f87-a45e-1c049c12321e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:15 compute-0 sudo[357920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 sudo[357920]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:15 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 22 09:33:15 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 16.482s CPU time.
Nov 22 09:33:15 compute-0 systemd-machined[215941]: Machine qemu-120-instance-00000062 terminated.
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 podman[357943]: 2025-11-22 09:33:15.602992604 +0000 UTC m=+0.067065522 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:33:15 compute-0 sudo[357960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:33:15 compute-0 podman[357947]: 2025-11-22 09:33:15.610495388 +0000 UTC m=+0.075712604 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:33:15 compute-0 sudo[357960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:33:15 compute-0 sudo[357960]: pam_unix(sudo:session): session closed for user root
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.669 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.670 253665 DEBUG nova.objects.instance [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.684 253665 DEBUG nova.virt.libvirt.vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:32:26Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.685 253665 DEBUG nova.network.os_vif_util [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.685 253665 DEBUG nova.network.os_vif_util [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.686 253665 DEBUG os_vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.690 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b1477f9-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.698 253665 INFO os_vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3')
Nov 22 09:33:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 169 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 KiB/s wr, 195 op/s
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.868 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.868 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:15 compute-0 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:33:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.528 253665 INFO nova.virt.libvirt.driver [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting instance files /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc_del
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.529 253665 INFO nova.virt.libvirt.driver [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deletion of /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc_del complete
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.575 253665 INFO nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.576 253665 DEBUG oslo.service.loopingcall [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.577 253665 DEBUG nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:33:16 compute-0 nova_compute[253661]: 2025-11-22 09:33:16.577 253665 DEBUG nova.network.neutron [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:33:17 compute-0 ceph-mon[75021]: pgmap v2135: 305 pgs: 305 active+clean; 169 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 KiB/s wr, 195 op/s
Nov 22 09:33:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 143 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.0 KiB/s wr, 205 op/s
Nov 22 09:33:17 compute-0 nova_compute[253661]: 2025-11-22 09:33:17.924 253665 DEBUG nova.network.neutron [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:17 compute-0 nova_compute[253661]: 2025-11-22 09:33:17.945 253665 INFO nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 1.37 seconds to deallocate network for instance.
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.000 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.000 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.007 253665 DEBUG nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.009 253665 DEBUG nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.009 253665 WARNING nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state deleted and task_state None.
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.052 253665 DEBUG oslo_concurrency.processutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.106 253665 DEBUG nova.compute.manager [req-0c0db93c-d381-4b1a-b010-52661809f9be req-e770c740-4e49-46e2-9c10-19119dc5efde 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-deleted-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.419 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.420 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.433 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.491 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601486343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.517 253665 DEBUG oslo_concurrency.processutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.522 253665 DEBUG nova.compute.provider_tree [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.535 253665 DEBUG nova.scheduler.client.report [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.555 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.558 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.568 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.568 253665 INFO nova.compute.claims [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.577 253665 INFO nova.scheduler.client.report [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Deleted allocations for instance 7da16450-9ec5-472a-99df-81f56ee341fc
Nov 22 09:33:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.661 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:18 compute-0 nova_compute[253661]: 2025-11-22 09:33:18.696 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554635775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.149 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.155 253665 DEBUG nova.compute.provider_tree [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.169 253665 DEBUG nova.scheduler.client.report [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.193 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.194 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.242 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.243 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.262 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.278 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.367 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.368 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.369 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating image(s)
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.393 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.417 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.442 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.448 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:19 compute-0 ceph-mon[75021]: pgmap v2136: 305 pgs: 305 active+clean; 143 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.0 KiB/s wr, 205 op/s
Nov 22 09:33:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2601486343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/554635775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.534 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.536 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.537 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.538 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.568 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.576 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 214 op/s
Nov 22 09:33:19 compute-0 nova_compute[253661]: 2025-11-22 09:33:19.963 253665 DEBUG nova.policy [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:33:20 compute-0 nova_compute[253661]: 2025-11-22 09:33:20.093 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:20 compute-0 nova_compute[253661]: 2025-11-22 09:33:20.153 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:33:20 compute-0 nova_compute[253661]: 2025-11-22 09:33:20.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:20 compute-0 nova_compute[253661]: 2025-11-22 09:33:20.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:21 compute-0 ceph-mon[75021]: pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 214 op/s
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.269 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Successfully created port: 9cb5df7f-b707-42d9-b17d-75811fd05cbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.481 253665 DEBUG nova.objects.instance [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.496 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ensure instance console log exists: /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:21 compute-0 nova_compute[253661]: 2025-11-22 09:33:21.498 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 KiB/s wr, 152 op/s
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:22 compute-0 nova_compute[253661]: 2025-11-22 09:33:22.909 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Successfully updated port: 9cb5df7f-b707-42d9-b17d-75811fd05cbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:33:22 compute-0 nova_compute[253661]: 2025-11-22 09:33:22.924 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:22 compute-0 nova_compute[253661]: 2025-11-22 09:33:22.925 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:22 compute-0 nova_compute[253661]: 2025-11-22 09:33:22.925 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:33:23 compute-0 ceph-mon[75021]: pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 KiB/s wr, 152 op/s
Nov 22 09:33:23 compute-0 nova_compute[253661]: 2025-11-22 09:33:23.164 253665 DEBUG nova.compute.manager [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:23 compute-0 nova_compute[253661]: 2025-11-22 09:33:23.165 253665 DEBUG nova.compute.manager [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:33:23 compute-0 nova_compute[253661]: 2025-11-22 09:33:23.165 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:23 compute-0 nova_compute[253661]: 2025-11-22 09:33:23.298 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:33:23 compute-0 podman[358254]: 2025-11-22 09:33:23.482081208 +0000 UTC m=+0.160601864 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:33:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 71 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 165 op/s
Nov 22 09:33:23 compute-0 nova_compute[253661]: 2025-11-22 09:33:23.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.963 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.988 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.988 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance network_info: |[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.989 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.989 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:33:24 compute-0 nova_compute[253661]: 2025-11-22 09:33:24.994 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start _get_guest_xml network_info=[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.000 253665 WARNING nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.010 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.011 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.015 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.019 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.021 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:25 compute-0 ceph-mon[75021]: pgmap v2139: 305 pgs: 305 active+clean; 71 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 165 op/s
Nov 22 09:33:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3929486319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.515 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.551 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.558 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.615 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803990.5675216, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.615 253665 INFO nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Stopped (Lifecycle Event)
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.649 253665 DEBUG nova.compute.manager [None req-52c67dcf-91cc-4ec1-b41a-4af07f2b4f2e - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Nov 22 09:33:25 compute-0 nova_compute[253661]: 2025-11-22 09:33:25.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513764550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.089 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.091 253665 DEBUG nova.virt.libvirt.vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:19Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.092 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.093 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.095 253665 DEBUG nova.objects.instance [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3929486319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/513764550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.117 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <uuid>eb81b22a-c733-4b44-8546-e4bd1c24d808</uuid>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <name>instance-00000065</name>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1059413669</nova:name>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:33:25</nova:creationTime>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <nova:port uuid="9cb5df7f-b707-42d9-b17d-75811fd05cbb">
Nov 22 09:33:26 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <system>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="serial">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="uuid">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </system>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <os>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </os>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <features>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </features>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:33:26 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk">
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config">
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:26 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ee:62:78"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <target dev="tap9cb5df7f-b7"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log" append="off"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <video>
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </video>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:33:26 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:33:26 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:33:26 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:33:26 compute-0 nova_compute[253661]: </domain>
Nov 22 09:33:26 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.120 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Preparing to wait for external event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.121 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.121 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.122 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.123 253665 DEBUG nova.virt.libvirt.vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:19Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.124 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.127 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.127 253665 DEBUG os_vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.129 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.133 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb5df7f-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.134 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9cb5df7f-b7, col_values=(('external_ids', {'iface-id': '9cb5df7f-b707-42d9-b17d-75811fd05cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:62:78', 'vm-uuid': 'eb81b22a-c733-4b44-8546-e4bd1c24d808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:26 compute-0 NetworkManager[48920]: <info>  [1763804006.1371] manager: (tap9cb5df7f-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.143 253665 INFO os_vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.200 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.200 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.201 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ee:62:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.201 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Using config drive
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.228 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.852 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating config drive at /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.856 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq4yhv1w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:26 compute-0 nova_compute[253661]: 2025-11-22 09:33:26.998 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq4yhv1w" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.023 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.027 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:27 compute-0 ceph-mon[75021]: pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.196 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.197 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.211 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.245 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.246 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting local config drive /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config because it was imported into RBD.
Nov 22 09:33:27 compute-0 kernel: tap9cb5df7f-b7: entered promiscuous mode
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.3053] manager: (tap9cb5df7f-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Nov 22 09:33:27 compute-0 ovn_controller[152872]: 2025-11-22T09:33:27Z|01032|binding|INFO|Claiming lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb for this chassis.
Nov 22 09:33:27 compute-0 ovn_controller[152872]: 2025-11-22T09:33:27Z|01033|binding|INFO|9cb5df7f-b707-42d9-b17d-75811fd05cbb: Claiming fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.325 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.327 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 bound to our chassis
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.329 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:33:27 compute-0 systemd-udevd[358415]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[416ce9ff-5375-4521-835c-be8a1408c040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.350 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3acaad61-a1 in ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.3548] device (tap9cb5df7f-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.353 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3acaad61-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[53e07c3b-93c1-47f1-84ce-1fdbd0b27c14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b79a41d3-adee-4046-ad15-a1e3fc30e65c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.3567] device (tap9cb5df7f-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:33:27 compute-0 systemd-machined[215941]: New machine qemu-125-instance-00000065.
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.370 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[64ec8d87-6d61-4eb5-b42a-c71e7ee36489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ovn_controller[152872]: 2025-11-22T09:33:27Z|01034|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb ovn-installed in OVS
Nov 22 09:33:27 compute-0 ovn_controller[152872]: 2025-11-22T09:33:27Z|01035|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb up in Southbound
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.400 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42859102-177e-4fbf-b630-9f733817d876]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.441 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[682e17ed-aad7-4fd8-b72c-891c3d72e7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.447 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe97ecd0-1284-4fd7-b8aa-d413aeb42735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.4489] manager: (tap3acaad61-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.498 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[30b831e2-3dbe-4769-bc62-9385a9eeb885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.503 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d869fe7-b82f-4216-b1f4-cefacc7f9c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.5295] device (tap3acaad61-a0): carrier: link connected
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.536 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f442eb47-701a-4bf7-bb54-f15e29d9e593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.560 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[610eccf2-bbaa-4221-8bc1-30eb365df10a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685826, 'reachable_time': 30323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358450, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.584 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea1c47-b2c9-4428-bfd6-ace7398a0464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a4ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685826, 'tstamp': 685826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358451, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[146b0b56-fc35-4ad8-803f-d682c98dee5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685826, 'reachable_time': 30323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358452, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.652 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af524059-092c-4ed1-81a7-455dba1010be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1be4c2-7bd5-4b7b-a29b-f7ca05c22351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.740 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3acaad61-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:27 compute-0 kernel: tap3acaad61-a0: entered promiscuous mode
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 NetworkManager[48920]: <info>  [1763804007.7845] manager: (tap3acaad61-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.785 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3acaad61-a0, col_values=(('external_ids', {'iface-id': '505b5f2b-f067-432d-8ac4-da2043ed18cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:27 compute-0 ovn_controller[152872]: 2025-11-22T09:33:27Z|01036|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 22 09:33:27 compute-0 nova_compute[253661]: 2025-11-22 09:33:27.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.820 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9716e6d1-a40b-4600-8c80-e22785e0e685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.821 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.822 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'env', 'PROCESS_TAG=haproxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.976 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:28 compute-0 nova_compute[253661]: 2025-11-22 09:33:28.002 253665 DEBUG nova.compute.manager [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:28 compute-0 nova_compute[253661]: 2025-11-22 09:33:28.002 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:28 compute-0 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:28 compute-0 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:28 compute-0 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG nova.compute.manager [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Processing event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:33:28 compute-0 podman[358484]: 2025-11-22 09:33:28.220661797 +0000 UTC m=+0.064432237 container create 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:33:28 compute-0 systemd[1]: Started libpod-conmon-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope.
Nov 22 09:33:28 compute-0 podman[358484]: 2025-11-22 09:33:28.183233126 +0000 UTC m=+0.027003596 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323283cbee7146e9c3c1575a344ce40e1bcaad9765d07290209292215bb1d53b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:28 compute-0 podman[358484]: 2025-11-22 09:33:28.338493458 +0000 UTC m=+0.182263968 container init 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:33:28 compute-0 podman[358484]: 2025-11-22 09:33:28.344660859 +0000 UTC m=+0.188431339 container start 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:33:28 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : New worker (358506) forked
Nov 22 09:33:28 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : Loading success.
Nov 22 09:33:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:29 compute-0 ceph-mon[75021]: pgmap v2141: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.309 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.310 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3092601, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.310 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Started (Lifecycle Event)
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.316 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.320 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance spawned successfully.
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.321 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.340 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.343 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.350 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.352 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.352 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3094077, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.384 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Paused (Lifecycle Event)
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.409 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.413 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3145497, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Resumed (Lifecycle Event)
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.430 253665 INFO nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 10.06 seconds to spawn the instance on the hypervisor.
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.430 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.432 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.438 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.469 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.488 253665 INFO nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 11.02 seconds to build instance.
Nov 22 09:33:29 compute-0 nova_compute[253661]: 2025-11-22 09:33:29.501 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.084 253665 DEBUG nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.086 253665 DEBUG nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.086 253665 WARNING nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state None.
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.667 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803995.6670394, 7da16450-9ec5-472a-99df-81f56ee341fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.668 253665 INFO nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Stopped (Lifecycle Event)
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.690 253665 DEBUG nova.compute.manager [None req-c7db1342-f86d-4b28-8773-9c068fc7fb5d - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:30 compute-0 nova_compute[253661]: 2025-11-22 09:33:30.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:31 compute-0 ceph-mon[75021]: pgmap v2142: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.453 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.453 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.473 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.559 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.560 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.573 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.573 253665 INFO nova.compute.claims [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:33:31 compute-0 nova_compute[253661]: 2025-11-22 09:33:31.716 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:33:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891860011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.206 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.214 253665 DEBUG nova.compute.provider_tree [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.233 253665 DEBUG nova.scheduler.client.report [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.261 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.262 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:33:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2891860011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.307 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.324 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.346 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.451 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.453 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.453 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating image(s)
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.477 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.506 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.535 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.540 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.637 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.639 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.640 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.640 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.665 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.670 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.857 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.858 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.881 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.984 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.985 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.992 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:33:32 compute-0 nova_compute[253661]: 2025-11-22 09:33:32.992 253665 INFO nova.compute.claims [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.100 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:33 compute-0 NetworkManager[48920]: <info>  [1763804013.1628] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 22 09:33:33 compute-0 NetworkManager[48920]: <info>  [1763804013.1641] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 22 09:33:33 compute-0 ovn_controller[152872]: 2025-11-22T09:33:33Z|01037|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.234 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.271 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.271 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.292 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.296 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:33 compute-0 ceph-mon[75021]: pgmap v2143: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.417 253665 DEBUG nova.objects.instance [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.437 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.438 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Ensure instance console log exists: /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.438 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.439 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.439 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.441 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.447 253665 WARNING nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.455 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.456 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.460 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.461 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.462 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.462 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.469 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.757 253665 DEBUG nova.compute.manager [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.758 253665 DEBUG nova.compute.manager [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.758 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.759 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.759 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:33:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651741607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.796 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.803 253665 DEBUG nova.compute.provider_tree [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 102 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 91 op/s
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.824 253665 DEBUG nova.scheduler.client.report [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.844 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.845 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.897 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.916 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.933 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:33:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584434442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:33 compute-0 nova_compute[253661]: 2025-11-22 09:33:33.994 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.024 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.031 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.084 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.087 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.087 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating image(s)
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.121 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.155 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.186 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.191 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.275 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.276 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.277 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.277 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.300 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.305 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2651741607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2584434442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089764262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.539 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.541 253665 DEBUG nova.objects.instance [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.557 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <uuid>9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</uuid>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <name>instance-00000066</name>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV247Test-server-1320746866</nova:name>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:33:33</nova:creationTime>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <system>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="serial">9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="uuid">9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </system>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <os>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </os>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <features>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </features>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk">
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config">
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:34 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/console.log" append="off"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <video>
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </video>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:33:34 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:33:34 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:33:34 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:33:34 compute-0 nova_compute[253661]: </domain>
Nov 22 09:33:34 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.628 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.629 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.629 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Using config drive
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.651 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.906 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating config drive at /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config
Nov 22 09:33:34 compute-0 nova_compute[253661]: 2025-11-22 09:33:34.911 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfv4w_t1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.065 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfv4w_t1" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.089 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.092 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.382 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.453 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:33:35 compute-0 ceph-mon[75021]: pgmap v2144: 305 pgs: 305 active+clean; 102 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 91 op/s
Nov 22 09:33:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3089764262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 116 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 102 op/s
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.842 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.843 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:35 compute-0 nova_compute[253661]: 2025-11-22 09:33:35.863 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.466 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.467 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deleting local config drive /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config because it was imported into RBD.
Nov 22 09:33:36 compute-0 systemd-machined[215941]: New machine qemu-126-instance-00000066.
Nov 22 09:33:36 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000066.
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.850 253665 DEBUG nova.objects.instance [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.863 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.864 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ensure instance console log exists: /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.864 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.865 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.865 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.867 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:33:36 compute-0 ceph-mon[75021]: pgmap v2145: 305 pgs: 305 active+clean; 116 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 102 op/s
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.872 253665 WARNING nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.880 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.882 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.885 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.886 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.886 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.887 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.888 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.888 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.889 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.889 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:33:36 compute-0 nova_compute[253661]: 2025-11-22 09:33:36.895 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.197 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804017.197109, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.200 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Resumed (Lifecycle Event)
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.221 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.222 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.227 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.230 253665 INFO nova.virt.libvirt.driver [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance spawned successfully.
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.231 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.234 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.258 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.258 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804017.2193768, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Started (Lifecycle Event)
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.265 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.266 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.268 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.276 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.318 253665 INFO nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 4.87 seconds to spawn the instance on the hypervisor.
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.319 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.362 253665 INFO nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 5.83 seconds to build instance.
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.385 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439546622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.427 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.452 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:37 compute-0 nova_compute[253661]: 2025-11-22 09:33:37.460 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 148 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Nov 22 09:33:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121484801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.049 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.052 253665 DEBUG nova.objects.instance [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1439546622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.066 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <uuid>0922fe2c-d67c-47da-a1ac-5b217442c632</uuid>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <name>instance-00000067</name>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV247Test-server-2120834641</nova:name>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:33:36</nova:creationTime>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <system>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="serial">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="uuid">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </system>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <os>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </os>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <features>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </features>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk">
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config">
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:38 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log" append="off"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <video>
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </video>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:33:38 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:33:38 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:33:38 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:33:38 compute-0 nova_compute[253661]: </domain>
Nov 22 09:33:38 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.177 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.177 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.178 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Using config drive
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.203 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.250 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.357 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating config drive at /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.362 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_176jbi9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.517 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_176jbi9" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.547 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.553 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.757 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:38 compute-0 nova_compute[253661]: 2025-11-22 09:33:38.758 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting local config drive /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config because it was imported into RBD.
Nov 22 09:33:38 compute-0 systemd-machined[215941]: New machine qemu-127-instance-00000067.
Nov 22 09:33:38 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Nov 22 09:33:39 compute-0 ceph-mon[75021]: pgmap v2146: 305 pgs: 305 active+clean; 148 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Nov 22 09:33:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4121484801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.248 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804019.24759, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.249 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Resumed (Lifecycle Event)
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.252 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.252 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.259 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.314 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.318 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance spawned successfully.
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.319 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.343 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.344 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.344 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.345 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.345 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.346 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.349 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.350 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804019.2496293, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Started (Lifecycle Event)
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.386 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.389 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.409 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.474 253665 INFO nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 5.39 seconds to spawn the instance on the hypervisor.
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.475 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.528 253665 INFO nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 6.57 seconds to build instance.
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.547 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156660686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.877 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.877 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.881 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.881 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.885 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:39 compute-0 nova_compute[253661]: 2025-11-22 09:33:39.885 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:33:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3156660686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.087 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.088 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.9403076171875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.088 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.089 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.319 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance eb81b22a-c733-4b44-8546-e4bd1c24d808 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0922fe2c-d67c-47da-a1ac-5b217442c632 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.507 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026977404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.974 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.979 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:40 compute-0 nova_compute[253661]: 2025-11-22 09:33:40.994 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.013 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.013 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.059 253665 INFO nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Rebuilding instance
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.328 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.344 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:41 compute-0 ceph-mon[75021]: pgmap v2147: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Nov 22 09:33:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4026977404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.392 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_requests' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.403 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.414 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.422 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.431 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:33:41 compute-0 nova_compute[253661]: 2025-11-22 09:33:41.435 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:33:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Nov 22 09:33:42 compute-0 ovn_controller[152872]: 2025-11-22T09:33:42Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:33:42 compute-0 ovn_controller[152872]: 2025-11-22T09:33:42Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.259 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.259 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.276 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.346 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.346 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.359 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.360 253665 INFO nova.compute.claims [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:33:42 compute-0 nova_compute[253661]: 2025-11-22 09:33:42.529 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.014 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:33:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437313440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.086 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.092 253665 DEBUG nova.compute.provider_tree [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.108 253665 DEBUG nova.scheduler.client.report [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.142 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.144 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.196 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.197 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.220 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.244 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.344 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.346 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.346 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating image(s)
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.375 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:43 compute-0 ceph-mon[75021]: pgmap v2148: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Nov 22 09:33:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2437313440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.429 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.464 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.470 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.572 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.573 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.574 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.574 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.603 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:43 compute-0 nova_compute[253661]: 2025-11-22 09:33:43.609 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 197 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.8 MiB/s wr, 276 op/s
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.049 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.120 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.279 253665 DEBUG nova.objects.instance [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.291 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.292 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Ensure instance console log exists: /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.292 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.293 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:44 compute-0 nova_compute[253661]: 2025-11-22 09:33:44.293 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:45 compute-0 nova_compute[253661]: 2025-11-22 09:33:45.011 253665 DEBUG nova.policy [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:33:45 compute-0 nova_compute[253661]: 2025-11-22 09:33:45.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:45 compute-0 nova_compute[253661]: 2025-11-22 09:33:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:33:45 compute-0 ceph-mon[75021]: pgmap v2149: 305 pgs: 305 active+clean; 197 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.8 MiB/s wr, 276 op/s
Nov 22 09:33:45 compute-0 nova_compute[253661]: 2025-11-22 09:33:45.715 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Successfully created port: 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:33:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 234 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 286 op/s
Nov 22 09:33:45 compute-0 nova_compute[253661]: 2025-11-22 09:33:45.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:46 compute-0 nova_compute[253661]: 2025-11-22 09:33:46.144 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:46 compute-0 podman[359523]: 2025-11-22 09:33:46.390446066 +0000 UTC m=+0.071739818 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 09:33:46 compute-0 podman[359524]: 2025-11-22 09:33:46.393612043 +0000 UTC m=+0.070040695 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.290 253665 INFO nova.compute.manager [None req-5546d05f-fa71-4eb8-9f99-43d699b8610d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Get console output
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.297 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.447 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Successfully updated port: 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.465 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.465 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.466 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:33:47 compute-0 ceph-mon[75021]: pgmap v2150: 305 pgs: 305 active+clean; 234 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 286 op/s
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.710 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:33:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 245 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 270 op/s
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.871 253665 DEBUG nova.compute.manager [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.871 253665 DEBUG nova.compute.manager [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:33:47 compute-0 nova_compute[253661]: 2025-11-22 09:33:47.872 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:48 compute-0 nova_compute[253661]: 2025-11-22 09:33:48.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:33:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:33:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 125K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 10K syncs, 2.99 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7339 writes, 27K keys, 7339 commit groups, 1.0 writes per commit group, ingest: 27.17 MB, 0.05 MB/s
                                           Interval WAL: 7339 writes, 2840 syncs, 2.58 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:33:48 compute-0 nova_compute[253661]: 2025-11-22 09:33:48.949 253665 INFO nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Rebuilding instance
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.096 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.112 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.113 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance network_info: |[{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.114 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.115 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.119 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start _get_guest_xml network_info=[{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.125 253665 WARNING nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.139 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.141 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.147 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.148 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.149 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.151 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.152 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.152 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.153 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.154 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.155 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.155 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.156 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.162 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.277 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'trusted_certs' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.295 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.363 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_requests' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.396 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.409 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.424 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.435 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.439 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:33:49 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 09:33:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513839142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.705 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.733 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:49 compute-0 nova_compute[253661]: 2025-11-22 09:33:49.739 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 274 op/s
Nov 22 09:33:49 compute-0 ceph-mon[75021]: pgmap v2151: 305 pgs: 305 active+clean; 245 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 270 op/s
Nov 22 09:33:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073774821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.302 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.304 253665 DEBUG nova.virt.libvirt.vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.305 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.306 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.309 253665 DEBUG nova.objects.instance [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.324 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <uuid>c5540f5a-8dfa-4b11-8452-c6fe99db1d64</uuid>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <name>instance-00000068</name>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-1891830994</nova:name>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:33:49</nova:creationTime>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <nova:port uuid="4d3de607-ad62-4c7d-ae3b-7cecb934aa9a">
Nov 22 09:33:50 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <system>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="serial">c5540f5a-8dfa-4b11-8452-c6fe99db1d64</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="uuid">c5540f5a-8dfa-4b11-8452-c6fe99db1d64</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </system>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <os>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </os>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <features>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </features>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk">
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config">
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:cf:2e:8f"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <target dev="tap4d3de607-ad"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/console.log" append="off"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <video>
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </video>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:33:50 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:33:50 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:33:50 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:33:50 compute-0 nova_compute[253661]: </domain>
Nov 22 09:33:50 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.325 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Preparing to wait for external event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.325 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.326 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.326 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.327 253665 DEBUG nova.virt.libvirt.vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.327 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.328 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.328 253665 DEBUG os_vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.329 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d3de607-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.334 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d3de607-ad, col_values=(('external_ids', {'iface-id': '4d3de607-ad62-4c7d-ae3b-7cecb934aa9a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:2e:8f', 'vm-uuid': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:50 compute-0 NetworkManager[48920]: <info>  [1763804030.3380] manager: (tap4d3de607-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.348 253665 INFO os_vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad')
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.452 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:cf:2e:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Using config drive
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.478 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.954 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating config drive at /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config
Nov 22 09:33:50 compute-0 nova_compute[253661]: 2025-11-22 09:33:50.961 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpok3265ah execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2513839142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:51 compute-0 ceph-mon[75021]: pgmap v2152: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 274 op/s
Nov 22 09:33:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2073774821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.115 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpok3265ah" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.160 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.165 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.569 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:33:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.989 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:33:51 compute-0 nova_compute[253661]: 2025-11-22 09:33:51.990 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:33:52 compute-0 nova_compute[253661]: 2025-11-22 09:33:52.022 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:33:52
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', '.mgr', 'volumes']
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:33:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:33:53 compute-0 kernel: tap9cb5df7f-b7 (unregistering): left promiscuous mode
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.1956] device (tap9cb5df7f-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:33:53 compute-0 ceph-mon[75021]: pgmap v2153: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01038|binding|INFO|Releasing lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb from this chassis (sb_readonly=0)
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01039|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb down in Southbound
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01040|binding|INFO|Removing iface tap9cb5df7f-b7 ovn-installed in OVS
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.211 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.212 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 unbound from our chassis
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.213 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35dcf32d-862c-40fd-8a3a-86628ddc31e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.216 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace which is not needed anymore
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 22 09:33:53 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 15.577s CPU time.
Nov 22 09:33:53 compute-0 systemd-machined[215941]: Machine qemu-125-instance-00000065 terminated.
Nov 22 09:33:53 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : haproxy version is 2.8.14-c23fe91
Nov 22 09:33:53 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : path to executable is /usr/sbin/haproxy
Nov 22 09:33:53 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [WARNING]  (358504) : Exiting Master process...
Nov 22 09:33:53 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [ALERT]    (358504) : Current worker (358506) exited with code 143 (Terminated)
Nov 22 09:33:53 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [WARNING]  (358504) : All workers exited. Exiting... (0)
Nov 22 09:33:53 compute-0 systemd[1]: libpod-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope: Deactivated successfully.
Nov 22 09:33:53 compute-0 podman[359710]: 2025-11-22 09:33:53.40484716 +0000 UTC m=+0.074201527 container died 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.483 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.484 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deleting local config drive /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config because it was imported into RBD.
Nov 22 09:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467-userdata-shm.mount: Deactivated successfully.
Nov 22 09:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-323283cbee7146e9c3c1575a344ce40e1bcaad9765d07290209292215bb1d53b-merged.mount: Deactivated successfully.
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.504 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance shutdown successfully after 4 seconds.
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.525 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.537 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.538 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:48Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.539 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.540 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.540 253665 DEBUG os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.544 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb5df7f-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 podman[359710]: 2025-11-22 09:33:53.558347498 +0000 UTC m=+0.227701855 container cleanup 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.558 253665 INFO os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')
Nov 22 09:33:53 compute-0 kernel: tap4d3de607-ad: entered promiscuous mode
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.5673] manager: (tap4d3de607-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Nov 22 09:33:53 compute-0 systemd-udevd[359689]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01041|binding|INFO|Claiming lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for this chassis.
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01042|binding|INFO|4d3de607-ad62-4c7d-ae3b-7cecb934aa9a: Claiming fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 09:33:53 compute-0 systemd[1]: libpod-conmon-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope: Deactivated successfully.
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.577 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:2e:8f 10.100.0.14'], port_security=['fa:16:3e:cf:2e:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ea999ce-3074-41ab-b630-d39c003b894a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1557c77-7174-4c01-8889-0c9609535e78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f542600-846f-418d-bf6a-c20db70e9dc6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.5830] device (tap4d3de607-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.5842] device (tap4d3de607-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01043|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a ovn-installed in OVS
Nov 22 09:33:53 compute-0 ovn_controller[152872]: 2025-11-22T09:33:53Z|01044|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a up in Southbound
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 systemd-machined[215941]: New machine qemu-128-instance-00000068.
Nov 22 09:33:53 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.654 253665 DEBUG nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.656 253665 WARNING nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuilding.
Nov 22 09:33:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:53 compute-0 podman[359751]: 2025-11-22 09:33:53.675513582 +0000 UTC m=+0.144537899 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:33:53 compute-0 podman[359772]: 2025-11-22 09:33:53.687332403 +0000 UTC m=+0.093697118 container remove 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.695 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03c7e64b-e04f-4b0c-bf5f-c0a6479edc52]: (4, ('Sat Nov 22 09:33:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467)\n09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467\nSat Nov 22 09:33:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467)\n09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.696 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c44fdba-2329-4282-bf97-de185aa0c20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.697 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 kernel: tap3acaad61-a0: left promiscuous mode
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.717 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32783e75-4f84-49d9-8e26-a4939a1261e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.730 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59df2a55-28cb-4a51-ad03-8888d620370e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.731 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[794a8f93-16bf-4a1b-ad88-5d90dd37c170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[158db009-fac2-45be-ac22-80296133b75f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685816, 'reachable_time': 33099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359828, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.755 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.756 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfaf21a-263e-4af9-b866-cdaa5d72e93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d3acaad61\x2da3f6\x2d4bd6\x2d83f4\x2d0ab1438bb136.mount: Deactivated successfully.
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.757 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a in datapath 5ea999ce-3074-41ab-b630-d39c003b894a unbound from our chassis
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.758 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ea999ce-3074-41ab-b630-d39c003b894a
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.770 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdca928-9a2d-45a0-acfe-7a2f0240e5aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.771 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ea999ce-31 in ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.775 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ea999ce-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.775 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168fdb4e-ad2e-491f-ba65-c7e76369059e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.777 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c54e7acc-7ce8-4e3d-af53-e32717ad5ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.792 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[12ab45dd-28d4-4b31-9c43-669a948e417a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.819 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4786a2-46ad-4d21-80e2-14a9d9980fe3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 280 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.9 MiB/s wr, 261 op/s
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.861 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebb447e-24bc-4503-abdb-478ad6575b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.8680] manager: (tap5ea999ce-30): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.869 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4daa590-3516-444e-9e6a-fb858019a388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.893 253665 DEBUG nova.compute.manager [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.893 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:53 compute-0 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG nova.compute.manager [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Processing event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.913 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf110b83-72ae-4c23-8e00-56d4a5c3756b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[713fc1aa-270c-4420-bc49-e3fc796370a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 NetworkManager[48920]: <info>  [1763804033.9506] device (tap5ea999ce-30): carrier: link connected
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.957 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9dc50-3ce9-4320-b58c-4cf4160f1d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.984 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc6e43-29bb-4b3b-9005-99b6227a3375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ea999ce-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:63:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688468, 'reachable_time': 43905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359852, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42bd18af-e923-492f-a7b2-d582d959bf86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:6347'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688468, 'tstamp': 688468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359868, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93246642-cee0-4af5-904f-3240153d0bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ea999ce-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:63:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688468, 'reachable_time': 43905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359872, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0506cab8-597d-4eb3-907d-529db4af9f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.165 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e96a0077-e4df-4a8e-9ca3-6132b94d874d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea999ce-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.167 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.167 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ea999ce-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:54 compute-0 NetworkManager[48920]: <info>  [1763804034.1697] manager: (tap5ea999ce-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:54 compute-0 kernel: tap5ea999ce-30: entered promiscuous mode
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.179 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ea999ce-30, col_values=(('external_ids', {'iface-id': 'a1771b67-4cb9-46af-b99c-bccbb7cc939f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:54 compute-0 ovn_controller[152872]: 2025-11-22T09:33:54Z|01045|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.185 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84c17024-1d28-48d2-b3e7-9443af348750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.187 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5ea999ce-3074-41ab-b630-d39c003b894a
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5ea999ce-3074-41ab-b630-d39c003b894a
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:33:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.188 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'env', 'PROCESS_TAG=haproxy-5ea999ce-3074-41ab-b630-d39c003b894a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ea999ce-3074-41ab-b630-d39c003b894a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.246 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.247 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.247238, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Started (Lifecycle Event)
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.263 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.276 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.279 253665 INFO nova.virt.libvirt.driver [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance spawned successfully.
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.279 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.2496908, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Paused (Lifecycle Event)
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.306 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.306 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.313 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.317 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.2523408, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Resumed (Lifecycle Event)
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.339 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.351 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting instance files /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.352 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deletion of /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del complete
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.378 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.394 253665 INFO nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 11.05 seconds to spawn the instance on the hypervisor.
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.394 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.487 253665 INFO nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 12.17 seconds to build instance.
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.528 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.536 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.536 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating image(s)
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.565 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.594 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.618 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.622 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:54 compute-0 podman[359929]: 2025-11-22 09:33:54.574164411 +0000 UTC m=+0.025945000 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.708 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.709 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.710 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.710 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.733 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:54 compute-0 nova_compute[253661]: 2025-11-22 09:33:54.736 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:55 compute-0 podman[359929]: 2025-11-22 09:33:55.030679797 +0000 UTC m=+0.482460366 container create c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:33:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:33:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.2 total, 600.0 interval
                                           Cumulative writes: 32K writes, 128K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.94 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7053 writes, 27K keys, 7053 commit groups, 1.0 writes per commit group, ingest: 28.68 MB, 0.05 MB/s
                                           Interval WAL: 7053 writes, 2686 syncs, 2.63 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:33:55 compute-0 ceph-mon[75021]: pgmap v2154: 305 pgs: 305 active+clean; 280 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.9 MiB/s wr, 261 op/s
Nov 22 09:33:55 compute-0 systemd[1]: Started libpod-conmon-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope.
Nov 22 09:33:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f254f240f0f6ebbbb79d2bb4edf4897aae117a89bf2f466a7a1a47ed4367c5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:33:55 compute-0 podman[359929]: 2025-11-22 09:33:55.655641889 +0000 UTC m=+1.107422478 container init c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:33:55 compute-0 podman[359929]: 2025-11-22 09:33:55.664186139 +0000 UTC m=+1.115966708 container start c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:33:55 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : New worker (360045) forked
Nov 22 09:33:55 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : Loading success.
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.781 253665 DEBUG nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.782 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.782 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 DEBUG nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 WARNING nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuild_spawning.
Nov 22 09:33:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 269 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 175 op/s
Nov 22 09:33:55 compute-0 nova_compute[253661]: 2025-11-22 09:33:55.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.020 253665 DEBUG nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.022 253665 WARNING nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received unexpected event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with vm_state active and task_state None.
Nov 22 09:33:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:33:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:33:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:33:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:33:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.682 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:56 compute-0 nova_compute[253661]: 2025-11-22 09:33:56.740 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:33:57 compute-0 ceph-mon[75021]: pgmap v2155: 305 pgs: 305 active+clean; 269 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 175 op/s
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.684 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.684 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ensure instance console log exists: /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.687 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start _get_guest_xml network_info=[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.691 253665 WARNING nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.697 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.697 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.701 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.701 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.702 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.702 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.706 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'vcpu_model' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:57 compute-0 nova_compute[253661]: 2025-11-22 09:33:57.725 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 262 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 974 KiB/s rd, 5.6 MiB/s wr, 146 op/s
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.113 253665 DEBUG nova.compute.manager [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.114 253665 DEBUG nova.compute.manager [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.114 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.115 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.115 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:33:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626209637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.226 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.249 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.255 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:33:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2626209637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:33:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073921862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.739 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.741 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.742 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.743 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.746 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <uuid>eb81b22a-c733-4b44-8546-e4bd1c24d808</uuid>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <name>instance-00000065</name>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1059413669</nova:name>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:33:57</nova:creationTime>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <nova:port uuid="9cb5df7f-b707-42d9-b17d-75811fd05cbb">
Nov 22 09:33:58 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <system>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="serial">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="uuid">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </system>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <os>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </os>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <features>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </features>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk">
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config">
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </source>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:33:58 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ee:62:78"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <target dev="tap9cb5df7f-b7"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log" append="off"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <video>
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </video>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:33:58 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:33:58 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:33:58 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:33:58 compute-0 nova_compute[253661]: </domain>
Nov 22 09:33:58 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.752 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.753 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.753 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.754 253665 DEBUG os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.755 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.756 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb5df7f-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9cb5df7f-b7, col_values=(('external_ids', {'iface-id': '9cb5df7f-b707-42d9-b17d-75811fd05cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:62:78', 'vm-uuid': 'eb81b22a-c733-4b44-8546-e4bd1c24d808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.761 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:58 compute-0 NetworkManager[48920]: <info>  [1763804038.7620] manager: (tap9cb5df7f-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.768 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.769 253665 INFO os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.837 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.838 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.838 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ee:62:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.839 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Using config drive
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.870 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.890 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'ec2_ids' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:58 compute-0 nova_compute[253661]: 2025-11-22 09:33:58.921 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'keypairs' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.291 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating config drive at /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.301 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5l267p2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.466 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5l267p2" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.501 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.506 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.672 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.673 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting local config drive /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config because it was imported into RBD.
Nov 22 09:33:59 compute-0 ceph-mon[75021]: pgmap v2156: 305 pgs: 305 active+clean; 262 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 974 KiB/s rd, 5.6 MiB/s wr, 146 op/s
Nov 22 09:33:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3073921862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:33:59 compute-0 kernel: tap9cb5df7f-b7: entered promiscuous mode
Nov 22 09:33:59 compute-0 NetworkManager[48920]: <info>  [1763804039.7354] manager: (tap9cb5df7f-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Nov 22 09:33:59 compute-0 ovn_controller[152872]: 2025-11-22T09:33:59Z|01046|binding|INFO|Claiming lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb for this chassis.
Nov 22 09:33:59 compute-0 ovn_controller[152872]: 2025-11-22T09:33:59Z|01047|binding|INFO|9cb5df7f-b707-42d9-b17d-75811fd05cbb: Claiming fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.744 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.746 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 bound to our chassis
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.748 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:33:59 compute-0 ovn_controller[152872]: 2025-11-22T09:33:59Z|01048|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb ovn-installed in OVS
Nov 22 09:33:59 compute-0 ovn_controller[152872]: 2025-11-22T09:33:59Z|01049|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb up in Southbound
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.771 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb1a863-f612-4b27-980a-dd5fcf84ea2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.772 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3acaad61-a1 in ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:33:59 compute-0 systemd-udevd[360262]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.776 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3acaad61-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10f3ded6-b99f-4c37-b4a9-2784a4b9b663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c588ebff-409a-49e8-8c72-d1efd205f21a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 systemd-machined[215941]: New machine qemu-129-instance-00000065.
Nov 22 09:33:59 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000065.
Nov 22 09:33:59 compute-0 NetworkManager[48920]: <info>  [1763804039.7962] device (tap9cb5df7f-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.796 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9b77dbbb-d694-47fc-8f26-97f1300821f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 NetworkManager[48920]: <info>  [1763804039.7974] device (tap9cb5df7f-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[529d9c6b-3ecb-471b-99ea-b6cc475522c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 288 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 259 op/s
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.854 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf1540e-db4a-4c67-b16a-9a6d2642764b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 NetworkManager[48920]: <info>  [1763804039.8632] manager: (tap3acaad61-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.867 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c34ff98-7f2d-46fe-ac14-2beb7e1f8ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.902 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecd4dc7-52f4-4a19-b916-b9670fac1930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.905 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[267fc380-3aee-409e-9501-da7989c4479d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 NetworkManager[48920]: <info>  [1763804039.9411] device (tap3acaad61-a0): carrier: link connected
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.952 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61d32570-fbd0-4a0e-b5fc-2004b76912e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.969 253665 DEBUG nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.969 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:33:59 compute-0 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 WARNING nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuild_spawning.
Nov 22 09:33:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.978 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31d04337-32c7-416f-826b-a558ea3d6bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689067, 'reachable_time': 28290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360295, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f85a1196-f8e3-4fe7-abeb-d0799371143a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a4ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689067, 'tstamp': 689067}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360296, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.005 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.005 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16bbacbd-63dc-4ebb-884c-f9bdb2a2b669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689067, 'reachable_time': 28290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360297, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.026 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8ad344-c77a-42fb-b59c-0e6fa1962827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.133 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e8482a-4bcc-452d-8fcd-d314c5b2585f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3acaad61-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:00 compute-0 NetworkManager[48920]: <info>  [1763804040.1396] manager: (tap3acaad61-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Nov 22 09:34:00 compute-0 kernel: tap3acaad61-a0: entered promiscuous mode
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3acaad61-a0, col_values=(('external_ids', {'iface-id': '505b5f2b-f067-432d-8ac4-da2043ed18cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:00 compute-0 ovn_controller[152872]: 2025-11-22T09:34:00Z|01050|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.147 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.148 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22bc76ce-1dd2-4395-b9f4-3268065aff4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.150 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:34:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.151 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'env', 'PROCESS_TAG=haproxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.338 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for eb81b22a-c733-4b44-8546-e4bd1c24d808 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.341 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804040.337858, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.342 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Resumed (Lifecycle Event)
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.347 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.348 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.353 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance spawned successfully.
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.354 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.385 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.386 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.386 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.387 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.388 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.388 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:34:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 25K writes, 104K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8408 syncs, 3.05 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6270 writes, 23K keys, 6270 commit groups, 1.0 writes per commit group, ingest: 24.71 MB, 0.04 MB/s
                                           Interval WAL: 6271 writes, 2417 syncs, 2.59 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.515 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.519 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.541 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.542 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.543 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804040.3396366, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.543 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Started (Lifecycle Event)
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.558 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.589 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:00 compute-0 podman[360371]: 2025-11-22 09:34:00.599374657 +0000 UTC m=+0.072285811 container create 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.609 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.610 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.610 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:34:00 compute-0 systemd[1]: Started libpod-conmon-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope.
Nov 22 09:34:00 compute-0 podman[360371]: 2025-11-22 09:34:00.562852458 +0000 UTC m=+0.035763642 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.667 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6f38e7aba289fc0c9ae62241c8d2f65dd0c33419fd634f67d646ac8fe686c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:00 compute-0 podman[360371]: 2025-11-22 09:34:00.704930705 +0000 UTC m=+0.177841869 container init 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:34:00 compute-0 podman[360371]: 2025-11-22 09:34:00.712160442 +0000 UTC m=+0.185071596 container start 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:34:00 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : New worker (360392) forked
Nov 22 09:34:00 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : Loading success.
Nov 22 09:34:00 compute-0 nova_compute[253661]: 2025-11-22 09:34:00.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:01 compute-0 ceph-mon[75021]: pgmap v2157: 305 pgs: 305 active+clean; 288 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 259 op/s
Nov 22 09:34:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 270 op/s
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.069 253665 DEBUG nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.070 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.070 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 DEBUG nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 WARNING nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state None.
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:34:02 compute-0 nova_compute[253661]: 2025-11-22 09:34:02.807 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022084416918994594 of space, bias 1.0, pg target 0.6625325075698378 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:34:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:34:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:03 compute-0 ceph-mon[75021]: pgmap v2158: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 270 op/s
Nov 22 09:34:03 compute-0 nova_compute[253661]: 2025-11-22 09:34:03.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 310 op/s
Nov 22 09:34:04 compute-0 ceph-mon[75021]: pgmap v2159: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 310 op/s
Nov 22 09:34:05 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 22 09:34:05 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 14.227s CPU time.
Nov 22 09:34:05 compute-0 systemd-machined[215941]: Machine qemu-127-instance-00000067 terminated.
Nov 22 09:34:05 compute-0 nova_compute[253661]: 2025-11-22 09:34:05.826 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance shutdown successfully after 24 seconds.
Nov 22 09:34:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 257 op/s
Nov 22 09:34:05 compute-0 nova_compute[253661]: 2025-11-22 09:34:05.835 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.
Nov 22 09:34:05 compute-0 nova_compute[253661]: 2025-11-22 09:34:05.843 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.
Nov 22 09:34:05 compute-0 nova_compute[253661]: 2025-11-22 09:34:05.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.523 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting instance files /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.524 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deletion of /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del complete
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.708 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.709 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating image(s)
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.743 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.775 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.807 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.812 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.890 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.892 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.893 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.893 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:06 compute-0 ceph-mon[75021]: pgmap v2160: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 257 op/s
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.920 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:06 compute-0 nova_compute[253661]: 2025-11-22 09:34:06.924 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.379 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.456 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.593 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.595 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ensure instance console log exists: /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.595 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.596 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.596 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.598 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.604 253665 WARNING nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.613 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.614 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.619 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.621 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.621 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.624 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.624 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.625 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:07 compute-0 nova_compute[253661]: 2025-11-22 09:34:07.639 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 257 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Nov 22 09:34:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737509771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.129 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.165 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.171 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545881550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.623 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.628 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <uuid>0922fe2c-d67c-47da-a1ac-5b217442c632</uuid>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <name>instance-00000067</name>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV247Test-server-2120834641</nova:name>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:34:07</nova:creationTime>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <system>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="serial">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="uuid">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </system>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <os>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </os>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <features>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </features>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk">
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config">
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:08 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log" append="off"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <video>
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </video>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:34:08 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:34:08 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:34:08 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:34:08 compute-0 nova_compute[253661]: </domain>
Nov 22 09:34:08 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:34:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.693 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.695 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.696 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Using config drive
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.724 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.746 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:08 compute-0 nova_compute[253661]: 2025-11-22 09:34:08.790 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'keypairs' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:08 compute-0 ceph-mon[75021]: pgmap v2161: 305 pgs: 305 active+clean; 257 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Nov 22 09:34:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2737509771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1545881550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:09 compute-0 ovn_controller[152872]: 2025-11-22T09:34:09Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 09:34:09 compute-0 ovn_controller[152872]: 2025-11-22T09:34:09Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 09:34:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 251 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 267 op/s
Nov 22 09:34:09 compute-0 nova_compute[253661]: 2025-11-22 09:34:09.903 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating config drive at /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config
Nov 22 09:34:09 compute-0 nova_compute[253661]: 2025-11-22 09:34:09.909 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxsms8fpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.052 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxsms8fpo" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.085 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.093 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.353 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.355 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting local config drive /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config because it was imported into RBD.
Nov 22 09:34:10 compute-0 systemd-machined[215941]: New machine qemu-130-instance-00000067.
Nov 22 09:34:10 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000067.
Nov 22 09:34:10 compute-0 nova_compute[253661]: 2025-11-22 09:34:10.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:10 compute-0 ceph-mon[75021]: pgmap v2162: 305 pgs: 305 active+clean; 251 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 267 op/s
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.251 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.345 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 0922fe2c-d67c-47da-a1ac-5b217442c632 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.345 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804051.344523, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.346 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Resumed (Lifecycle Event)
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.354 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.355 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.361 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance spawned successfully.
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.361 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.379 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.383 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.400 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.400 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.401 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.401 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.402 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.402 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804051.3532696, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Started (Lifecycle Event)
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.462 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.467 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.495 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.497 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.559 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.560 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.560 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:34:11 compute-0 nova_compute[253661]: 2025-11-22 09:34:11.619 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 271 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 164 op/s
Nov 22 09:34:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:34:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:34:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:34:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:34:13 compute-0 ceph-mon[75021]: pgmap v2163: 305 pgs: 305 active+clean; 271 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 164 op/s
Nov 22 09:34:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:34:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.544 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.546 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.549 253665 INFO nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Terminating instance
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquired lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:34:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Nov 22 09:34:13 compute-0 nova_compute[253661]: 2025-11-22 09:34:13.885 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:14.136 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:14.137 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.243 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.261 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Releasing lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.264 253665 DEBUG nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:34:14 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 22 09:34:14 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Consumed 3.932s CPU time.
Nov 22 09:34:14 compute-0 systemd-machined[215941]: Machine qemu-130-instance-00000067 terminated.
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.501 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.
Nov 22 09:34:14 compute-0 nova_compute[253661]: 2025-11-22 09:34:14.501 253665 DEBUG nova.objects.instance [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:15 compute-0 ceph-mon[75021]: pgmap v2164: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.125 253665 INFO nova.compute.manager [None req-5482febe-56e6-43c5-a57d-dce6e3126815 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Get console output
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.131 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:34:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:15.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.285 253665 INFO nova.virt.libvirt.driver [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting instance files /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.286 253665 INFO nova.virt.libvirt.driver [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deletion of /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del complete
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.369 253665 INFO nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 1.10 seconds to destroy the instance on the hypervisor.
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.370 253665 DEBUG oslo.service.loopingcall [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.370 253665 DEBUG nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.371 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:34:15 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.528 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.543 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.560 253665 INFO nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 0.19 seconds to deallocate network for instance.
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.623 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.624 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:15 compute-0 sudo[360789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:15 compute-0 sudo[360789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:15 compute-0 sudo[360789]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.763 253665 DEBUG oslo_concurrency.processutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:15 compute-0 sudo[360814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:34:15 compute-0 sudo[360814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:15 compute-0 sudo[360814]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Nov 22 09:34:15 compute-0 sudo[360840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:15 compute-0 nova_compute[253661]: 2025-11-22 09:34:15.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:15 compute-0 sudo[360840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:15 compute-0 sudo[360840]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:15 compute-0 sudo[360865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:34:15 compute-0 sudo[360865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:15 compute-0 ovn_controller[152872]: 2025-11-22T09:34:15Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:34:15 compute-0 ovn_controller[152872]: 2025-11-22T09:34:15Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:62:78 10.100.0.13
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954433103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.242 253665 DEBUG oslo_concurrency.processutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.250 253665 DEBUG nova.compute.provider_tree [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.278 253665 DEBUG nova.scheduler.client.report [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.312 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.347 253665 INFO nova.scheduler.client.report [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Deleted allocations for instance 0922fe2c-d67c-47da-a1ac-5b217442c632
Nov 22 09:34:16 compute-0 nova_compute[253661]: 2025-11-22 09:34:16.404 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:16 compute-0 sudo[360865]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a24fb9e6-6655-4690-a299-bbfacb8263e3 does not exist
Nov 22 09:34:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 41127b87-1f75-472b-8837-8c3bdcb61fbe does not exist
Nov 22 09:34:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ea1852af-6b4c-4c6e-bf3e-22e8c859e1fc does not exist
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:34:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:34:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:34:16 compute-0 sudo[360942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:16 compute-0 sudo[360942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:16 compute-0 sudo[360942]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:16 compute-0 podman[360966]: 2025-11-22 09:34:16.724339825 +0000 UTC m=+0.069100781 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:34:16 compute-0 podman[360967]: 2025-11-22 09:34:16.731478662 +0000 UTC m=+0.076340920 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:34:16 compute-0 sudo[360979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:34:16 compute-0 sudo[360979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:16 compute-0 sudo[360979]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:16 compute-0 sudo[361029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:16 compute-0 sudo[361029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:16 compute-0 sudo[361029]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:16 compute-0 sudo[361054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:34:16 compute-0 sudo[361054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:17 compute-0 ceph-mon[75021]: pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/954433103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:34:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.233388075 +0000 UTC m=+0.047572412 container create 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:34:17 compute-0 systemd[1]: Started libpod-conmon-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope.
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.209497797 +0000 UTC m=+0.023682164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.330377602 +0000 UTC m=+0.144561939 container init 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.339474836 +0000 UTC m=+0.153659173 container start 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.345482884 +0000 UTC m=+0.159667221 container attach 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:34:17 compute-0 peaceful_roentgen[361134]: 167 167
Nov 22 09:34:17 compute-0 systemd[1]: libpod-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope: Deactivated successfully.
Nov 22 09:34:17 compute-0 conmon[361134]: conmon 92f67ee57f6d78a430ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope/container/memory.events
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.350271452 +0000 UTC m=+0.164455809 container died 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5597ba80905c1d64a139a484186ae68892dbbd312fa68174d21077d7bcba01a1-merged.mount: Deactivated successfully.
Nov 22 09:34:17 compute-0 podman[361118]: 2025-11-22 09:34:17.417941737 +0000 UTC m=+0.232126074 container remove 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:34:17 compute-0 systemd[1]: libpod-conmon-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope: Deactivated successfully.
Nov 22 09:34:17 compute-0 podman[361157]: 2025-11-22 09:34:17.651083975 +0000 UTC m=+0.062356245 container create 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:34:17 compute-0 systemd[1]: Started libpod-conmon-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope.
Nov 22 09:34:17 compute-0 podman[361157]: 2025-11-22 09:34:17.626914571 +0000 UTC m=+0.038186901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.757 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.761 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.762 253665 INFO nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Terminating instance
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquired lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:34:17 compute-0 podman[361157]: 2025-11-22 09:34:17.76503028 +0000 UTC m=+0.176302570 container init 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:34:17 compute-0 podman[361157]: 2025-11-22 09:34:17.774944164 +0000 UTC m=+0.186216434 container start 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:34:17 compute-0 podman[361157]: 2025-11-22 09:34:17.780403509 +0000 UTC m=+0.191675779 container attach 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:34:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 287 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 281 op/s
Nov 22 09:34:17 compute-0 nova_compute[253661]: 2025-11-22 09:34:17.944 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.221 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.243 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Releasing lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.243 253665 DEBUG nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:34:18 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 22 09:34:18 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Consumed 15.206s CPU time.
Nov 22 09:34:18 compute-0 systemd-machined[215941]: Machine qemu-126-instance-00000066 terminated.
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.480 253665 INFO nova.virt.libvirt.driver [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance destroyed successfully.
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.481 253665 DEBUG nova.objects.instance [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:18 compute-0 nova_compute[253661]: 2025-11-22 09:34:18.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:18 compute-0 bold_franklin[361173]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:34:18 compute-0 bold_franklin[361173]: --> relative data size: 1.0
Nov 22 09:34:18 compute-0 bold_franklin[361173]: --> All data devices are unavailable
Nov 22 09:34:18 compute-0 systemd[1]: libpod-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Deactivated successfully.
Nov 22 09:34:18 compute-0 systemd[1]: libpod-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Consumed 1.035s CPU time.
Nov 22 09:34:18 compute-0 podman[361157]: 2025-11-22 09:34:18.888803189 +0000 UTC m=+1.300075469 container died 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3-merged.mount: Deactivated successfully.
Nov 22 09:34:19 compute-0 podman[361157]: 2025-11-22 09:34:19.002163969 +0000 UTC m=+1.413436239 container remove 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:34:19 compute-0 systemd[1]: libpod-conmon-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Deactivated successfully.
Nov 22 09:34:19 compute-0 sudo[361054]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:19 compute-0 ceph-mon[75021]: pgmap v2166: 305 pgs: 305 active+clean; 287 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 281 op/s
Nov 22 09:34:19 compute-0 sudo[361236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:19 compute-0 sudo[361236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:19 compute-0 sudo[361236]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.146 253665 INFO nova.virt.libvirt.driver [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deleting instance files /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_del
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.148 253665 INFO nova.virt.libvirt.driver [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deletion of /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_del complete
Nov 22 09:34:19 compute-0 sudo[361261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:34:19 compute-0 sudo[361261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:19 compute-0 sudo[361261]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.259 253665 INFO nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 1.02 seconds to destroy the instance on the hypervisor.
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.260 253665 DEBUG oslo.service.loopingcall [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.261 253665 DEBUG nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.261 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:34:19 compute-0 sudo[361286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:19 compute-0 sudo[361286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:19 compute-0 sudo[361286]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:19 compute-0 sudo[361311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:34:19 compute-0 sudo[361311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.400 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.410 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.431 253665 INFO nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 0.17 seconds to deallocate network for instance.
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.476 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.477 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:19 compute-0 nova_compute[253661]: 2025-11-22 09:34:19.585 253665 DEBUG oslo_concurrency.processutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.687102798 +0000 UTC m=+0.040547339 container create 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:34:19 compute-0 systemd[1]: Started libpod-conmon-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope.
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.670637782 +0000 UTC m=+0.024082343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.811286194 +0000 UTC m=+0.164730735 container init 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.819858104 +0000 UTC m=+0.173302645 container start 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:34:19 compute-0 loving_bouman[361395]: 167 167
Nov 22 09:34:19 compute-0 systemd[1]: libpod-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope: Deactivated successfully.
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.828750654 +0000 UTC m=+0.182195225 container attach 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.830528327 +0000 UTC m=+0.183972888 container died 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:34:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 272 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Nov 22 09:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e768aa26aaf80eaf67224ae680aa63f3377b9a65525f3f326634da9699f3799e-merged.mount: Deactivated successfully.
Nov 22 09:34:19 compute-0 podman[361378]: 2025-11-22 09:34:19.932295823 +0000 UTC m=+0.285740374 container remove 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:34:19 compute-0 systemd[1]: libpod-conmon-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope: Deactivated successfully.
Nov 22 09:34:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766812116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.100 253665 DEBUG oslo_concurrency.processutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.108 253665 DEBUG nova.compute.provider_tree [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.124 253665 DEBUG nova.scheduler.client.report [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.143 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:20 compute-0 podman[361437]: 2025-11-22 09:34:20.146289089 +0000 UTC m=+0.070089346 container create e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:34:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3766812116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.165 253665 INFO nova.scheduler.client.report [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Deleted allocations for instance 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae
Nov 22 09:34:20 compute-0 systemd[1]: Started libpod-conmon-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope.
Nov 22 09:34:20 compute-0 podman[361437]: 2025-11-22 09:34:20.105683929 +0000 UTC m=+0.029484216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.214 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:20 compute-0 podman[361437]: 2025-11-22 09:34:20.256216695 +0000 UTC m=+0.180017002 container init e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:34:20 compute-0 podman[361437]: 2025-11-22 09:34:20.263640017 +0000 UTC m=+0.187440274 container start e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:34:20 compute-0 podman[361437]: 2025-11-22 09:34:20.269489611 +0000 UTC m=+0.193289928 container attach e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:34:20 compute-0 nova_compute[253661]: 2025-11-22 09:34:20.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]: {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     "0": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "devices": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "/dev/loop3"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             ],
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_name": "ceph_lv0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_size": "21470642176",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "name": "ceph_lv0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "tags": {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_name": "ceph",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.crush_device_class": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.encrypted": "0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_id": "0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.vdo": "0"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             },
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "vg_name": "ceph_vg0"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         }
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     ],
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     "1": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "devices": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "/dev/loop4"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             ],
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_name": "ceph_lv1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_size": "21470642176",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "name": "ceph_lv1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "tags": {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_name": "ceph",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.crush_device_class": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.encrypted": "0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_id": "1",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.vdo": "0"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             },
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "vg_name": "ceph_vg1"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         }
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     ],
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     "2": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "devices": [
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "/dev/loop5"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             ],
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_name": "ceph_lv2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_size": "21470642176",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "name": "ceph_lv2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "tags": {
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.cluster_name": "ceph",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.crush_device_class": "",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.encrypted": "0",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osd_id": "2",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:                 "ceph.vdo": "0"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             },
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "type": "block",
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:             "vg_name": "ceph_vg2"
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:         }
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]:     ]
Nov 22 09:34:21 compute-0 pensive_mahavira[361456]: }
Nov 22 09:34:21 compute-0 systemd[1]: libpod-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope: Deactivated successfully.
Nov 22 09:34:21 compute-0 podman[361437]: 2025-11-22 09:34:21.130609826 +0000 UTC m=+1.054410103 container died e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:34:21 compute-0 ceph-mon[75021]: pgmap v2167: 305 pgs: 305 active+clean; 272 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Nov 22 09:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754-merged.mount: Deactivated successfully.
Nov 22 09:34:21 compute-0 podman[361437]: 2025-11-22 09:34:21.191556256 +0000 UTC m=+1.115356513 container remove e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:34:21 compute-0 systemd[1]: libpod-conmon-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope: Deactivated successfully.
Nov 22 09:34:21 compute-0 sudo[361311]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:21 compute-0 sudo[361479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:21 compute-0 sudo[361479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:21 compute-0 sudo[361479]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:21 compute-0 sudo[361504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:34:21 compute-0 sudo[361504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:21 compute-0 sudo[361504]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:21 compute-0 sudo[361529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:21 compute-0 sudo[361529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:21 compute-0 sudo[361529]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:21 compute-0 sudo[361554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:34:21 compute-0 sudo[361554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 256 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 22 09:34:21 compute-0 podman[361619]: 2025-11-22 09:34:21.929205902 +0000 UTC m=+0.045344258 container create 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:34:21 compute-0 systemd[1]: Started libpod-conmon-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope.
Nov 22 09:34:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:21.909742413 +0000 UTC m=+0.025880769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:22.017633878 +0000 UTC m=+0.133772254 container init 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:22.027372538 +0000 UTC m=+0.143510894 container start 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:22.031493289 +0000 UTC m=+0.147631665 container attach 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:34:22 compute-0 systemd[1]: libpod-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope: Deactivated successfully.
Nov 22 09:34:22 compute-0 condescending_agnesi[361635]: 167 167
Nov 22 09:34:22 compute-0 conmon[361635]: conmon 0b964d6b6ac7f2b7d0da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope/container/memory.events
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:22.035732524 +0000 UTC m=+0.151870880 container died 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb931832c02d65b9b735ff1a11721f9349b7baecb28a5fbf839739790267243b-merged.mount: Deactivated successfully.
Nov 22 09:34:22 compute-0 podman[361619]: 2025-11-22 09:34:22.076771813 +0000 UTC m=+0.192910169 container remove 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:34:22 compute-0 systemd[1]: libpod-conmon-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope: Deactivated successfully.
Nov 22 09:34:22 compute-0 nova_compute[253661]: 2025-11-22 09:34:22.117 253665 INFO nova.compute.manager [None req-7daa3790-656a-4a94-bd91-9abe7a6aea00 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Get console output
Nov 22 09:34:22 compute-0 nova_compute[253661]: 2025-11-22 09:34:22.125 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:34:22 compute-0 podman[361659]: 2025-11-22 09:34:22.271867816 +0000 UTC m=+0.047341466 container create e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:34:22 compute-0 systemd[1]: Started libpod-conmon-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope.
Nov 22 09:34:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:22 compute-0 podman[361659]: 2025-11-22 09:34:22.253682768 +0000 UTC m=+0.029156438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:34:22 compute-0 podman[361659]: 2025-11-22 09:34:22.362474235 +0000 UTC m=+0.137947905 container init e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:34:22 compute-0 podman[361659]: 2025-11-22 09:34:22.370607276 +0000 UTC m=+0.146080926 container start e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:34:22 compute-0 podman[361659]: 2025-11-22 09:34:22.374279846 +0000 UTC m=+0.149753506 container attach e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:23 compute-0 ceph-mon[75021]: pgmap v2168: 305 pgs: 305 active+clean; 256 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.376 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.378 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.378 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.379 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.379 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.381 253665 INFO nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Terminating instance
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.384 253665 DEBUG nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:34:23 compute-0 friendly_shamir[361675]: {
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_id": 1,
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "type": "bluestore"
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     },
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_id": 0,
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "type": "bluestore"
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     },
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_id": 2,
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:         "type": "bluestore"
Nov 22 09:34:23 compute-0 friendly_shamir[361675]:     }
Nov 22 09:34:23 compute-0 friendly_shamir[361675]: }
Nov 22 09:34:23 compute-0 systemd[1]: libpod-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Deactivated successfully.
Nov 22 09:34:23 compute-0 systemd[1]: libpod-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Consumed 1.073s CPU time.
Nov 22 09:34:23 compute-0 podman[361659]: 2025-11-22 09:34:23.439888754 +0000 UTC m=+1.215362454 container died e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:34:23 compute-0 kernel: tap9cb5df7f-b7 (unregistering): left promiscuous mode
Nov 22 09:34:23 compute-0 NetworkManager[48920]: <info>  [1763804063.4514] device (tap9cb5df7f-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.457 253665 DEBUG nova.compute.manager [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.457 253665 DEBUG nova.compute.manager [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:34:23 compute-0 ovn_controller[152872]: 2025-11-22T09:34:23Z|01051|binding|INFO|Releasing lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb from this chassis (sb_readonly=0)
Nov 22 09:34:23 compute-0 ovn_controller[152872]: 2025-11-22T09:34:23Z|01052|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb down in Southbound
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 ovn_controller[152872]: 2025-11-22T09:34:23Z|01053|binding|INFO|Removing iface tap9cb5df7f-b7 ovn-installed in OVS
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9-merged.mount: Deactivated successfully.
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.489 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.491 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 unbound from our chassis
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1daa2c4-f605-4242-b91f-d04759a1ffc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.501 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace which is not needed anymore
Nov 22 09:34:23 compute-0 podman[361659]: 2025-11-22 09:34:23.5218292 +0000 UTC m=+1.297302840 container remove e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:34:23 compute-0 systemd[1]: libpod-conmon-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Deactivated successfully.
Nov 22 09:34:23 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 22 09:34:23 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Consumed 16.902s CPU time.
Nov 22 09:34:23 compute-0 systemd-machined[215941]: Machine qemu-129-instance-00000065 terminated.
Nov 22 09:34:23 compute-0 sudo[361554]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:34:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:34:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b89ec349-f9b0-4713-b2dc-4864b2dcf627 does not exist
Nov 22 09:34:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 704feecb-8b05-447b-a4e4-180c22b1fe8e does not exist
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.631 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.633 253665 DEBUG nova.objects.instance [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.644 253665 DEBUG nova.virt.libvirt.vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:34:00Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.645 253665 DEBUG nova.network.os_vif_util [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.646 253665 DEBUG nova.network.os_vif_util [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.647 253665 DEBUG os_vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.651 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb5df7f-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 sudo[361739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.658 253665 INFO os_vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')
Nov 22 09:34:23 compute-0 sudo[361739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:23 compute-0 sudo[361739]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:23 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : haproxy version is 2.8.14-c23fe91
Nov 22 09:34:23 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : path to executable is /usr/sbin/haproxy
Nov 22 09:34:23 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [WARNING]  (360390) : Exiting Master process...
Nov 22 09:34:23 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [ALERT]    (360390) : Current worker (360392) exited with code 143 (Terminated)
Nov 22 09:34:23 compute-0 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [WARNING]  (360390) : All workers exited. Exiting... (0)
Nov 22 09:34:23 compute-0 systemd[1]: libpod-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope: Deactivated successfully.
Nov 22 09:34:23 compute-0 podman[361756]: 2025-11-22 09:34:23.688356219 +0000 UTC m=+0.054404950 container died 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d-userdata-shm.mount: Deactivated successfully.
Nov 22 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a6f38e7aba289fc0c9ae62241c8d2f65dd0c33419fd634f67d646ac8fe686c6-merged.mount: Deactivated successfully.
Nov 22 09:34:23 compute-0 sudo[361802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:34:23 compute-0 sudo[361802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:34:23 compute-0 podman[361756]: 2025-11-22 09:34:23.741938969 +0000 UTC m=+0.107987700 container cleanup 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:34:23 compute-0 sudo[361802]: pam_unix(sudo:session): session closed for user root
Nov 22 09:34:23 compute-0 systemd[1]: libpod-conmon-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope: Deactivated successfully.
Nov 22 09:34:23 compute-0 podman[361812]: 2025-11-22 09:34:23.81273135 +0000 UTC m=+0.102789290 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 09:34:23 compute-0 podman[361858]: 2025-11-22 09:34:23.825408822 +0000 UTC m=+0.051657152 container remove 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.833 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7c8469-2db1-4598-98fb-58760020f290]: (4, ('Sat Nov 22 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d)\n4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d\nSat Nov 22 09:34:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d)\n4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19d8af29-bcd2-4dad-bbd4-4e141d908020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.838 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 kernel: tap3acaad61-a0: left promiscuous mode
Nov 22 09:34:23 compute-0 nova_compute[253661]: 2025-11-22 09:34:23.862 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc962e8-587f-481f-8165-d4752cd1aa13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.884 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[378778af-28f5-4ba6-a242-e7eea0d8099d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83665db7-a093-4fea-b3c3-b7c842351bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[415692e8-4698-4b3a-890e-738bf1d6fc44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689058, 'reachable_time': 23410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361881, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d3acaad61\x2da3f6\x2d4bd6\x2d83f4\x2d0ab1438bb136.mount: Deactivated successfully.
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.907 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:34:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.908 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8afc3213-4f1c-40e5-a0df-a0b24fb3186c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.106 253665 INFO nova.virt.libvirt.driver [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting instance files /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.107 253665 INFO nova.virt.libvirt.driver [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deletion of /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del complete
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.199 253665 INFO nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG oslo.service.loopingcall [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:34:24 compute-0 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG nova.network.neutron [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:34:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.003 253665 DEBUG nova.network.neutron [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.021 253665 INFO nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 0.82 seconds to deallocate network for instance.
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.072 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.073 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.092 253665 DEBUG nova.compute.manager [req-19b6783e-a520-4744-a574-bfcff83580ab req-4d327967-fded-4d60-a8b8-144e61d9a882 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-deleted-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.134 253665 DEBUG oslo_concurrency.processutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26099833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:25 compute-0 ceph-mon[75021]: pgmap v2169: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.612 253665 DEBUG oslo_concurrency.processutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.621 253665 DEBUG nova.compute.provider_tree [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.639 253665 DEBUG nova.scheduler.client.report [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.668 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.860 253665 INFO nova.scheduler.client.report [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance eb81b22a-c733-4b44-8546-e4bd1c24d808
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.922 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.970 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:34:25 compute-0 nova_compute[253661]: 2025-11-22 09:34:25.971 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.000 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.200 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.280 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.281 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.288 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.289 253665 INFO nova.compute.claims [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.414 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/26099833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716024109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.886 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.894 253665 DEBUG nova.compute.provider_tree [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.908 253665 DEBUG nova.scheduler.client.report [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.929 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:26 compute-0 nova_compute[253661]: 2025-11-22 09:34:26.930 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.007 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.007 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.033 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.049 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.141 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.142 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.143 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating image(s)
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.165 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.192 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.217 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.221 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.287 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.288 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.289 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.289 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.314 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.317 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.611 253665 DEBUG nova.policy [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:34:27 compute-0 ceph-mon[75021]: pgmap v2170: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 09:34:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3716024109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.836 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 155 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.4 MiB/s wr, 156 op/s
Nov 22 09:34:27 compute-0 nova_compute[253661]: 2025-11-22 09:34:27.918 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:34:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.040 253665 DEBUG nova.objects.instance [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.054 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.054 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Ensure instance console log exists: /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:28 compute-0 nova_compute[253661]: 2025-11-22 09:34:28.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:29 compute-0 nova_compute[253661]: 2025-11-22 09:34:29.372 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Successfully created port: 46a77e89-60ff-4609-9a5a-6e542d8343e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:34:29 compute-0 nova_compute[253661]: 2025-11-22 09:34:29.499 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804054.4982965, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:29 compute-0 nova_compute[253661]: 2025-11-22 09:34:29.500 253665 INFO nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Stopped (Lifecycle Event)
Nov 22 09:34:29 compute-0 nova_compute[253661]: 2025-11-22 09:34:29.534 253665 DEBUG nova.compute.manager [None req-36c34fc6-6377-43c1-908d-8333d9921925 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:29 compute-0 ceph-mon[75021]: pgmap v2171: 305 pgs: 305 active+clean; 155 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.4 MiB/s wr, 156 op/s
Nov 22 09:34:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 151 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 1.7 MiB/s wr, 95 op/s
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.142 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.143 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.159 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.244 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.244 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.252 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.252 253665 INFO nova.compute.claims [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.419 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:30 compute-0 ovn_controller[152872]: 2025-11-22T09:34:30Z|01054|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.655 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Successfully updated port: 46a77e89-60ff-4609-9a5a-6e542d8343e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.673 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.674 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.674 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.857 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915126923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.944 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.951 253665 DEBUG nova.compute.provider_tree [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.959 253665 DEBUG nova.compute.manager [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-changed-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.960 253665 DEBUG nova.compute.manager [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Refreshing instance network info cache due to event network-changed-46a77e89-60ff-4609-9a5a-6e542d8343e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.960 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.963 253665 DEBUG nova.scheduler.client.report [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.983 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:30 compute-0 nova_compute[253661]: 2025-11-22 09:34:30.984 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.024 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.044 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.062 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.088 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.130 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.132 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.132 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating image(s)
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.161 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.190 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.211 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.214 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.283 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.308 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.313 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.664 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:31 compute-0 ceph-mon[75021]: pgmap v2172: 305 pgs: 305 active+clean; 151 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 1.7 MiB/s wr, 95 op/s
Nov 22 09:34:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/915126923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.726 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] resizing rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.831 253665 DEBUG nova.objects.instance [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'migration_context' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 163 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.850 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.851 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ensure instance console log exists: /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.854 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.859 253665 WARNING nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.865 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.866 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.869 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.869 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.870 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.870 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.873 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:34:31 compute-0 nova_compute[253661]: 2025-11-22 09:34:31.876 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.226 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.256 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.256 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance network_info: |[{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.257 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.257 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Refreshing network info cache for port 46a77e89-60ff-4609-9a5a-6e542d8343e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.259 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start _get_guest_xml network_info=[{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.264 253665 WARNING nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.272 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.273 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.278 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.278 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.282 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.285 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162470709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.377 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.399 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.404 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4162470709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954683896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.748 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.771 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.776 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459410174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.885 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.887 253665 DEBUG nova.objects.instance [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_devices' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.916 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <uuid>361d3f1d-84a4-4159-a69a-8a0254446ab6</uuid>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <name>instance-0000006a</name>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV257Test-server-1169039346</nova:name>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:34:31</nova:creationTime>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:user uuid="c9e3213f01af435aab231356352dba1b">tempest-ServerShowV257Test-555892026-project-member</nova:user>
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <nova:project uuid="d7e64b9e1f5f4ed7a0a6326357a91223">tempest-ServerShowV257Test-555892026</nova:project>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="serial">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="uuid">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk">
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config">
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log" append="off"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:34:32 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:34:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:34:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:34:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:34:32 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.965 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.965 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.966 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Using config drive
Nov 22 09:34:32 compute-0 nova_compute[253661]: 2025-11-22 09:34:32.987 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231786100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.221 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.223 253665 DEBUG nova.virt.libvirt.vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.224 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.225 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.227 253665 DEBUG nova.objects.instance [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.244 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <uuid>da98da35-5fb2-47cd-9d6b-a3bb2254bec9</uuid>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <name>instance-00000069</name>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-295501579</nova:name>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:34:32</nova:creationTime>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <nova:port uuid="46a77e89-60ff-4609-9a5a-6e542d8343e1">
Nov 22 09:34:33 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <system>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="serial">da98da35-5fb2-47cd-9d6b-a3bb2254bec9</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="uuid">da98da35-5fb2-47cd-9d6b-a3bb2254bec9</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </system>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <os>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </os>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <features>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </features>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk">
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config">
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:75:2e:22"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <target dev="tap46a77e89-60"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/console.log" append="off"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <video>
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </video>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:34:33 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:34:33 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:34:33 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:34:33 compute-0 nova_compute[253661]: </domain>
Nov 22 09:34:33 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.246 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Preparing to wait for external event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.248 253665 DEBUG nova.virt.libvirt.vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.248 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.249 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.249 253665 DEBUG os_vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.251 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.254 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46a77e89-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.254 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46a77e89-60, col_values=(('external_ids', {'iface-id': '46a77e89-60ff-4609-9a5a-6e542d8343e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:2e:22', 'vm-uuid': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:33 compute-0 NetworkManager[48920]: <info>  [1763804073.3090] manager: (tap46a77e89-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.308 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.320 253665 INFO os_vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60')
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.371 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.372 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.372 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:75:2e:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.373 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Using config drive
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.399 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.479 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804058.4782982, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.479 253665 INFO nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Stopped (Lifecycle Event)
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.507 253665 DEBUG nova.compute.manager [None req-6c3d8916-57ed-4b74-8fab-7fce2c32f581 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:33 compute-0 ceph-mon[75021]: pgmap v2173: 305 pgs: 305 active+clean; 163 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 22 09:34:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2954683896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2459410174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3231786100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.991 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating config drive at /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config
Nov 22 09:34:33 compute-0 nova_compute[253661]: 2025-11-22 09:34:33.996 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rszdhjt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.156 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rszdhjt" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.182 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.187 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.256 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.274 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.275 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.288 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating config drive at /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.294 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67jqeu_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.448 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67jqeu_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.478 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.482 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.525 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.526 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting local config drive /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config because it was imported into RBD.
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.541 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.541 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.542 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.542 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.575 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updated VIF entry in instance network info cache for port 46a77e89-60ff-4609-9a5a-6e542d8343e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.576 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.591 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:34 compute-0 systemd-machined[215941]: New machine qemu-131-instance-0000006a.
Nov 22 09:34:34 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-0000006a.
Nov 22 09:34:34 compute-0 ceph-mon[75021]: pgmap v2174: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.785 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.786 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deleting local config drive /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config because it was imported into RBD.
Nov 22 09:34:34 compute-0 kernel: tap46a77e89-60: entered promiscuous mode
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:34 compute-0 NetworkManager[48920]: <info>  [1763804074.8304] manager: (tap46a77e89-60): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Nov 22 09:34:34 compute-0 ovn_controller[152872]: 2025-11-22T09:34:34Z|01055|binding|INFO|Claiming lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 for this chassis.
Nov 22 09:34:34 compute-0 ovn_controller[152872]: 2025-11-22T09:34:34Z|01056|binding|INFO|46a77e89-60ff-4609-9a5a-6e542d8343e1: Claiming fa:16:3e:75:2e:22 10.100.0.22
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.844 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.845 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 bound to our chassis
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.847 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15f88390-1071-41f9-b1a4-108f4f3845d0
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0dcc9f-3972-41fa-a1ef-61254f707610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.861 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15f88390-11 in ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15f88390-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[484d4a8e-5674-4c12-a8ef-ccd41a23b6d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eafacb55-0a1c-4890-94a7-c7d35f55dfc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 systemd-machined[215941]: New machine qemu-132-instance-00000069.
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.876 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4861efa8-e800-4fa2-8ae5-56ff0161eaa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:34 compute-0 ovn_controller[152872]: 2025-11-22T09:34:34Z|01057|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 ovn-installed in OVS
Nov 22 09:34:34 compute-0 ovn_controller[152872]: 2025-11-22T09:34:34Z|01058|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 up in Southbound
Nov 22 09:34:34 compute-0 nova_compute[253661]: 2025-11-22 09:34:34.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[86b8f433-6985-4fe2-92a8-ef8b5922ed6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 systemd-udevd[362556]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:34:34 compute-0 NetworkManager[48920]: <info>  [1763804074.9232] device (tap46a77e89-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:34:34 compute-0 NetworkManager[48920]: <info>  [1763804074.9240] device (tap46a77e89-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0d5dc4-ec71-4682-958e-26609f22c76c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 systemd-udevd[362564]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42303a38-f873-4f5f-bb83-a0952f62dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 NetworkManager[48920]: <info>  [1763804074.9394] manager: (tap15f88390-10): new Veth device (/org/freedesktop/NetworkManager/Devices/437)
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.977 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8045a02f-885a-49b4-87a6-9feddfbf7dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.981 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebe14f-6226-4f8d-ad5e-b416baaa0f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 NetworkManager[48920]: <info>  [1763804075.0057] device (tap15f88390-10): carrier: link connected
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.013 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[159b1de2-c0a8-4031-8b8b-f60761147be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c142bdf4-55af-48e1-9b48-85669fc86500]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15f88390-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:5d:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692574, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362585, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5650bfbe-3eff-483a-ab86-7a3b0b9e6265]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:5df2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692574, 'tstamp': 692574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362586, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.069 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29740764-b8b0-4893-8d2a-296af6ce7325]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15f88390-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:5d:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692574, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362587, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30c2ed43-13a3-4d86-8a7c-767147be75af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.160 253665 DEBUG nova.compute.manager [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG nova.compute.manager [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Processing event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10909999-ab70-468b-9c37-6bab030928da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f88390-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.189 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15f88390-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:35 compute-0 NetworkManager[48920]: <info>  [1763804075.1912] manager: (tap15f88390-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Nov 22 09:34:35 compute-0 kernel: tap15f88390-10: entered promiscuous mode
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15f88390-10, col_values=(('external_ids', {'iface-id': '06dfbc87-2377-412e-8b1f-e2e6f4be9f29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:35 compute-0 ovn_controller[152872]: 2025-11-22T09:34:35Z|01059|binding|INFO|Releasing lport 06dfbc87-2377-412e-8b1f-e2e6f4be9f29 from this chassis (sb_readonly=0)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.214 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d358950-5369-4321-b223-9e8abd2b20eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.216 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-15f88390-1071-41f9-b1a4-108f4f3845d0
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 15f88390-1071-41f9-b1a4-108f4f3845d0
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:34:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.217 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'env', 'PROCESS_TAG=haproxy-15f88390-1071-41f9-b1a4-108f4f3845d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15f88390-1071-41f9-b1a4-108f4f3845d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.282 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2818408, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.283 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Started (Lifecycle Event)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.288 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.296 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.303 253665 INFO nova.virt.libvirt.driver [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance spawned successfully.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.304 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.309 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.318 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.333 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.333 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.334 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.334 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.335 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.335 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.340 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.340 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2859046, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.341 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Paused (Lifecycle Event)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.382 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.387 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2908475, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.387 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Resumed (Lifecycle Event)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.406 253665 INFO nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 8.27 seconds to spawn the instance on the hypervisor.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.407 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.415 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.4547694, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Resumed (Lifecycle Event)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.458 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.458 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.461 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance spawned successfully.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.461 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.477 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.480 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.490 253665 INFO nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 9.24 seconds to build instance.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.455411, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Started (Lifecycle Event)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.519 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.557 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.566 253665 INFO nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 4.44 seconds to spawn the instance on the hypervisor.
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.566 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.625 253665 INFO nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 5.41 seconds to build instance.
Nov 22 09:34:35 compute-0 podman[362699]: 2025-11-22 09:34:35.632679383 +0000 UTC m=+0.060880199 container create 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.641 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:35 compute-0 systemd[1]: Started libpod-conmon-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope.
Nov 22 09:34:35 compute-0 podman[362699]: 2025-11-22 09:34:35.600612003 +0000 UTC m=+0.028812849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dbd92e328427ae01a91a6e788883f396c5e4580a3c0d724680d342b6d537fb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:35 compute-0 podman[362699]: 2025-11-22 09:34:35.734083629 +0000 UTC m=+0.162284475 container init 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 09:34:35 compute-0 podman[362699]: 2025-11-22 09:34:35.742386573 +0000 UTC m=+0.170587389 container start 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:34:35 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : New worker (362721) forked
Nov 22 09:34:35 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : Loading success.
Nov 22 09:34:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 09:34:35 compute-0 nova_compute[253661]: 2025-11-22 09:34:35.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.391 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.405 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.418 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.419 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.419 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.444 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.444 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:36 compute-0 nova_compute[253661]: 2025-11-22 09:34:36.446 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:36 compute-0 ceph-mon[75021]: pgmap v2175: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.386 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.843 253665 DEBUG nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:34:37 compute-0 nova_compute[253661]: 2025-11-22 09:34:37.845 253665 WARNING nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state active and task_state None.
Nov 22 09:34:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 173 op/s
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.325 253665 INFO nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Rebuilding instance
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.628 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804063.6279562, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.629 253665 INFO nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Stopped (Lifecycle Event)
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.648 253665 DEBUG nova.compute.manager [None req-37acd4bd-cfe2-4001-8fc4-0e9e410c89d3 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:38 compute-0 ceph-mon[75021]: pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 173 op/s
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.937 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:38 compute-0 nova_compute[253661]: 2025-11-22 09:34:38.955 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.017 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_requests' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.025 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_devices' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.036 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'resources' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.043 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'migration_context' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.053 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.056 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:34:39 compute-0 nova_compute[253661]: 2025-11-22 09:34:39.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.4 MiB/s wr, 172 op/s
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911633361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.702 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.795 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.796 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.801 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.802 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.806 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.806 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:34:40 compute-0 nova_compute[253661]: 2025-11-22 09:34:40.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:40 compute-0 ceph-mon[75021]: pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.4 MiB/s wr, 172 op/s
Nov 22 09:34:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/911633361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.008 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.009 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.900909423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.010 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.010 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c5540f5a-8dfa-4b11-8452-c6fe99db1d64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance da98da35-5fb2-47cd-9d6b-a3bb2254bec9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 361d3f1d-84a4-4159-a69a-8a0254446ab6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.078 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.180 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897940132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.643 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.652 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.669 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.703 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:34:41 compute-0 nova_compute[253661]: 2025-11-22 09:34:41.704 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 186 op/s
Nov 22 09:34:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3897940132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:42 compute-0 nova_compute[253661]: 2025-11-22 09:34:42.705 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:42 compute-0 nova_compute[253661]: 2025-11-22 09:34:42.705 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:34:42 compute-0 ceph-mon[75021]: pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 186 op/s
Nov 22 09:34:43 compute-0 nova_compute[253661]: 2025-11-22 09:34:43.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:43 compute-0 nova_compute[253661]: 2025-11-22 09:34:43.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:43 compute-0 nova_compute[253661]: 2025-11-22 09:34:43.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Nov 22 09:34:45 compute-0 ceph-mon[75021]: pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Nov 22 09:34:45 compute-0 nova_compute[253661]: 2025-11-22 09:34:45.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:34:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 158 op/s
Nov 22 09:34:45 compute-0 nova_compute[253661]: 2025-11-22 09:34:45.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.356 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.356 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.383 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.464 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.465 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.470 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.470 253665 INFO nova.compute.claims [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:34:46 compute-0 nova_compute[253661]: 2025-11-22 09:34:46.608 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390561136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.147 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:47 compute-0 ceph-mon[75021]: pgmap v2180: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 158 op/s
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.156 253665 DEBUG nova.compute.provider_tree [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.180 253665 DEBUG nova.scheduler.client.report [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.206 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.207 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.250 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.250 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.265 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.285 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:34:47 compute-0 podman[362799]: 2025-11-22 09:34:47.378872844 +0000 UTC m=+0.069494789 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 09:34:47 compute-0 podman[362800]: 2025-11-22 09:34:47.387286733 +0000 UTC m=+0.075508049 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.411 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.412 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.413 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating image(s)
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.435 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.459 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.488 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.495 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.547 253665 DEBUG nova.policy [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.591 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.592 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.593 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.593 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.627 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:47 compute-0 nova_compute[253661]: 2025-11-22 09:34:47.632 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3f8530ae-f429-4807-81ca-84d8f964a38c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 222 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Nov 22 09:34:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2390561136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.505 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Successfully created port: 8da41f38-3812-4494-9cab-c4854772a569 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.532 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3f8530ae-f429-4807-81ca-84d8f964a38c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.900s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.617 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:34:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.750 253665 DEBUG nova.objects.instance [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.761 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Ensure instance console log exists: /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:48 compute-0 nova_compute[253661]: 2025-11-22 09:34:48.763 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 09:34:49 compute-0 ceph-mon[75021]: pgmap v2181: 305 pgs: 305 active+clean; 222 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.443 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Successfully updated port: 8da41f38-3812-4494-9cab-c4854772a569 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.469 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.469 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.470 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:34:49 compute-0 ovn_controller[152872]: 2025-11-22T09:34:49Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:2e:22 10.100.0.22
Nov 22 09:34:49 compute-0 ovn_controller[152872]: 2025-11-22T09:34:49Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:2e:22 10.100.0.22
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG nova.compute.manager [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG nova.compute.manager [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:34:49 compute-0 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 234 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 82 op/s
Nov 22 09:34:50 compute-0 nova_compute[253661]: 2025-11-22 09:34:50.017 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:50 compute-0 nova_compute[253661]: 2025-11-22 09:34:50.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:50 compute-0 nova_compute[253661]: 2025-11-22 09:34:50.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:50.999 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.024 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.025 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance network_info: |[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.027 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.027 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.035 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start _get_guest_xml network_info=[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.043 253665 WARNING nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.052 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.053 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.063 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.064 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.065 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.065 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.066 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.066 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.067 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.067 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.068 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.068 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.069 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.069 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.070 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.070 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.075 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:51 compute-0 ceph-mon[75021]: pgmap v2182: 305 pgs: 305 active+clean; 234 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 82 op/s
Nov 22 09:34:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784280840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.555 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.589 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:51 compute-0 nova_compute[253661]: 2025-11-22 09:34:51.597 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 270 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.7 MiB/s wr, 117 op/s
Nov 22 09:34:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1283272683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.048 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.052 253665 DEBUG nova.virt.libvirt.vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:47Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.052 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.054 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.056 253665 DEBUG nova.objects.instance [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.069 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <uuid>3f8530ae-f429-4807-81ca-84d8f964a38c</uuid>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <name>instance-0000006b</name>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1778115453</nova:name>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:34:51</nova:creationTime>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <nova:port uuid="8da41f38-3812-4494-9cab-c4854772a569">
Nov 22 09:34:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="serial">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="uuid">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk">
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config">
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:02:ea:ba"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <target dev="tap8da41f38-38"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log" append="off"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:34:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:34:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:34:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:34:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:34:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.071 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Preparing to wait for external event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.072 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.072 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.073 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.074 253665 DEBUG nova.virt.libvirt.vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:47Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.074 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.075 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.075 253665 DEBUG os_vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.076 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.083 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8da41f38-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8da41f38-38, col_values=(('external_ids', {'iface-id': '8da41f38-3812-4494-9cab-c4854772a569', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ea:ba', 'vm-uuid': '3f8530ae-f429-4807-81ca-84d8f964a38c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:52 compute-0 NetworkManager[48920]: <info>  [1763804092.0869] manager: (tap8da41f38-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.093 253665 INFO os_vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.140 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.141 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.142 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:02:ea:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.143 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Using config drive
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.175 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:34:52
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', '.rgw.root']
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:34:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2784280840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1283272683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:52 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 22 09:34:52 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Consumed 13.726s CPU time.
Nov 22 09:34:52 compute-0 systemd-machined[215941]: Machine qemu-131-instance-0000006a terminated.
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.601 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating config drive at /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.606 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx95ajfle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:34:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.754 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx95ajfle" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.788 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.792 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.992 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:52 compute-0 nova_compute[253661]: 2025-11-22 09:34:52.993 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deleting local config drive /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config because it was imported into RBD.
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.037 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated VIF entry in instance network info cache for port 8da41f38-3812-4494-9cab-c4854772a569. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.038 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:53 compute-0 kernel: tap8da41f38-38: entered promiscuous mode
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.0531] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Nov 22 09:34:53 compute-0 systemd-udevd[363083]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:34:53 compute-0 ovn_controller[152872]: 2025-11-22T09:34:53Z|01060|binding|INFO|Claiming lport 8da41f38-3812-4494-9cab-c4854772a569 for this chassis.
Nov 22 09:34:53 compute-0 ovn_controller[152872]: 2025-11-22T09:34:53Z|01061|binding|INFO|8da41f38-3812-4494-9cab-c4854772a569: Claiming fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.052 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.061 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.062 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc bound to our chassis
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.065 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.0660] device (tap8da41f38-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.0679] device (tap8da41f38-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:34:53 compute-0 ovn_controller[152872]: 2025-11-22T09:34:53Z|01062|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 ovn-installed in OVS
Nov 22 09:34:53 compute-0 ovn_controller[152872]: 2025-11-22T09:34:53Z|01063|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 up in Southbound
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e811a07-45b4-4a1d-b3f7-3c2a4fd5e635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.084 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20570e02-41 in ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.086 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20570e02-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.086 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2190e54e-a592-4251-9f51-5f8aefebdd21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26d22db1-7ef7-471a-9f6b-e779b2b23c9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 systemd-machined[215941]: New machine qemu-133-instance-0000006b.
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.103 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a49799ee-1fa2-4aac-83eb-a28c03aa1647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.131 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance shutdown successfully after 14 seconds.
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.132 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73abbd6c-e594-47ae-bc93-e02c1efd81c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.143 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.155 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.166 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6da58c-57b2-4000-a276-81b6dc6fd149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.1760] manager: (tap20570e02-40): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.174 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc75f5f-3ac1-45ad-891f-eb42f423824f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 systemd-udevd[363137]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.220 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac825cba-750f-4c19-b1b4-eccef9e92524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b69a68fa-a1c9-4578-a964-728edc722414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.2564] device (tap20570e02-40): carrier: link connected
Nov 22 09:34:53 compute-0 ceph-mon[75021]: pgmap v2183: 305 pgs: 305 active+clean; 270 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.7 MiB/s wr, 117 op/s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.264 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[db42c948-662d-4d34-ad5c-7fbf86877443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.290 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12ebc52b-b8ed-4509-a1d8-75d1ffb7390b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694399, 'reachable_time': 29289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363191, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.305 253665 DEBUG nova.compute.manager [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.307 253665 DEBUG nova.compute.manager [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Processing event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9126b73-7682-4987-9005-1fb3f1fd0d92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:a4f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694399, 'tstamp': 694399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363192, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.337 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c34727-9b79-4c6c-9bd6-d5b6c08076c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694399, 'reachable_time': 29289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363193, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.377 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6c5a06-283e-4d8e-bdf9-1ff33fc180f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.456 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b782b496-9a4d-4a16-a24f-5441159aac8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.457 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20570e02-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:53 compute-0 NetworkManager[48920]: <info>  [1763804093.4612] manager: (tap20570e02-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Nov 22 09:34:53 compute-0 kernel: tap20570e02-40: entered promiscuous mode
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20570e02-40, col_values=(('external_ids', {'iface-id': '4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_controller[152872]: 2025-11-22T09:34:53Z|01064|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 09:34:53 compute-0 nova_compute[253661]: 2025-11-22 09:34:53.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.493 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82c082d3-ce25-4adb-a999-fa0738cdb3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.495 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:34:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.496 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'env', 'PROCESS_TAG=haproxy-20570e02-4f3c-425d-9564-924b275d70dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20570e02-4f3c-425d-9564-924b275d70dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:34:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 09:34:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.227 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:a9:a9 10.100.0.2 2001:db8::f816:3eff:fe75:a9a9'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe75:a9a9/64', 'neutron:device_id': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff0f834b-9623-4226-98e1-741634e7eb05) old=Port_Binding(mac=['fa:16:3e:75:a9:a9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.408 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting instance files /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.410 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deletion of /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del complete
Nov 22 09:34:54 compute-0 podman[363205]: 2025-11-22 09:34:54.428098399 +0000 UTC m=+0.110207592 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 09:34:54 compute-0 podman[363244]: 2025-11-22 09:34:54.463348325 +0000 UTC m=+0.052332592 container create 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:34:54 compute-0 systemd[1]: Started libpod-conmon-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope.
Nov 22 09:34:54 compute-0 podman[363244]: 2025-11-22 09:34:54.43658619 +0000 UTC m=+0.025570467 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:34:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054c9b7e428385e12a38fc4d69601f1307ee63ce36fecad82d262095c96a25dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.537 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.538 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating image(s)
Nov 22 09:34:54 compute-0 podman[363244]: 2025-11-22 09:34:54.548612366 +0000 UTC m=+0.137596643 container init 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:34:54 compute-0 podman[363244]: 2025-11-22 09:34:54.555501237 +0000 UTC m=+0.144485494 container start 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.561 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:54 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : New worker (363305) forked
Nov 22 09:34:54 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : Loading success.
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.585 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.606 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.609 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.630 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff0f834b-9623-4226-98e1-741634e7eb05 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 updated
Nov 22 09:34:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.631 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3e4e01e-5e3e-4572-b404-ee47aaec1186, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:34:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.632 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b925d9a7-1459-4eca-a27f-18fbe77dca03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.681 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.682 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.682 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.683 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.712 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:54 compute-0 nova_compute[253661]: 2025-11-22 09:34:54.716 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.026 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.0260923, 3f8530ae-f429-4807-81ca-84d8f964a38c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.027 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Started (Lifecycle Event)
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.031 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.034 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.037 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance spawned successfully.
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.037 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.053 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.057 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.062 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.063 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.063 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.065 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.094 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.095 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.030547, 3f8530ae-f429-4807-81ca-84d8f964a38c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Paused (Lifecycle Event)
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.098 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.130 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.134 253665 INFO nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 7.72 seconds to spawn the instance on the hypervisor.
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.134 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.171 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.0335686, 3f8530ae-f429-4807-81ca-84d8f964a38c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.171 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Resumed (Lifecycle Event)
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.176 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] resizing rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.206 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.271 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.278 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.279 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ensure instance console log exists: /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.282 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.283 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.283 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.285 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.294 253665 INFO nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 8.86 seconds to build instance.
Nov 22 09:34:55 compute-0 ceph-mon[75021]: pgmap v2184: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.300 253665 WARNING nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.306 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.307 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.311 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.311 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.316 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.333 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.406 253665 DEBUG nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.408 253665 WARNING nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:34:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226086813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.898 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.926 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:55 compute-0 nova_compute[253661]: 2025-11-22 09:34:55.932 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3226086813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:34:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491713492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.492 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.496 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <uuid>361d3f1d-84a4-4159-a69a-8a0254446ab6</uuid>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <name>instance-0000006a</name>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:name>tempest-ServerShowV257Test-server-1169039346</nova:name>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:34:55</nova:creationTime>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:user uuid="c9e3213f01af435aab231356352dba1b">tempest-ServerShowV257Test-555892026-project-member</nova:user>
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <nova:project uuid="d7e64b9e1f5f4ed7a0a6326357a91223">tempest-ServerShowV257Test-555892026</nova:project>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <system>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="serial">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="uuid">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </system>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <os>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </os>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <features>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </features>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk">
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config">
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </source>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:34:56 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log" append="off"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <video>
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </video>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:34:56 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:34:56 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:34:56 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:34:56 compute-0 nova_compute[253661]: </domain>
Nov 22 09:34:56 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.558 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.559 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.560 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Using config drive
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.583 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.601 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:34:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:34:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:34:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:34:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.630 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'keypairs' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.977 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating config drive at /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config
Nov 22 09:34:56 compute-0 nova_compute[253661]: 2025-11-22 09:34:56.983 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdd1xkba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.163 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdd1xkba" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.190 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.194 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.307 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.308 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.311 253665 INFO nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Terminating instance
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.311 253665 DEBUG nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:34:57 compute-0 ceph-mon[75021]: pgmap v2185: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 09:34:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3491713492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:34:57 compute-0 kernel: tap46a77e89-60 (unregistering): left promiscuous mode
Nov 22 09:34:57 compute-0 NetworkManager[48920]: <info>  [1763804097.4114] device (tap46a77e89-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01065|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=0)
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01066|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down in Southbound
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01067|binding|INFO|Removing iface tap46a77e89-60 ovn-installed in OVS
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.447 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.450 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.451 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3c9e4b-ab79-4220-938f-1758bb756b2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.452 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 namespace which is not needed anymore
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.478 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.479 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting local config drive /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config because it was imported into RBD.
Nov 22 09:34:57 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 22 09:34:57 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 13.971s CPU time.
Nov 22 09:34:57 compute-0 systemd-machined[215941]: Machine qemu-132-instance-00000069 terminated.
Nov 22 09:34:57 compute-0 kernel: tap46a77e89-60: entered promiscuous mode
Nov 22 09:34:57 compute-0 NetworkManager[48920]: <info>  [1763804097.5444] manager: (tap46a77e89-60): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Nov 22 09:34:57 compute-0 systemd-udevd[363613]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:34:57 compute-0 kernel: tap46a77e89-60 (unregistering): left promiscuous mode
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01068|binding|INFO|Claiming lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 for this chassis.
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01069|binding|INFO|46a77e89-60ff-4609-9a5a-6e542d8343e1: Claiming fa:16:3e:75:2e:22 10.100.0.22
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: hostname: compute-0
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.582 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01070|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 ovn-installed in OVS
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01071|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 up in Southbound
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01072|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=1)
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01073|if_status|INFO|Dropped 3 log messages in last 555 seconds (most recently, 555 seconds ago) due to excessive rate
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01074|if_status|INFO|Not setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down as sb is readonly
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01075|binding|INFO|Removing iface tap46a77e89-60 ovn-installed in OVS
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01076|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=0)
Nov 22 09:34:57 compute-0 ovn_controller[152872]: 2025-11-22T09:34:57Z|01077|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down in Southbound
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.605 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.626 253665 INFO nova.virt.libvirt.driver [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance destroyed successfully.
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.627 253665 DEBUG nova.objects.instance [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.638 253665 DEBUG nova.virt.libvirt.vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:34:35Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.638 253665 DEBUG nova.network.os_vif_util [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.639 253665 DEBUG nova.network.os_vif_util [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.639 253665 DEBUG os_vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.642 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46a77e89-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.648 253665 INFO os_vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60')
Nov 22 09:34:57 compute-0 systemd-machined[215941]: New machine qemu-134-instance-0000006a.
Nov 22 09:34:57 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006a.
Nov 22 09:34:57 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : haproxy version is 2.8.14-c23fe91
Nov 22 09:34:57 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : path to executable is /usr/sbin/haproxy
Nov 22 09:34:57 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [ALERT]    (362719) : Current worker (362721) exited with code 143 (Terminated)
Nov 22 09:34:57 compute-0 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [WARNING]  (362719) : All workers exited. Exiting... (0)
Nov 22 09:34:57 compute-0 systemd[1]: libpod-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope: Deactivated successfully.
Nov 22 09:34:57 compute-0 podman[363645]: 2025-11-22 09:34:57.691727091 +0000 UTC m=+0.086359249 container died 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf-userdata-shm.mount: Deactivated successfully.
Nov 22 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dbd92e328427ae01a91a6e788883f396c5e4580a3c0d724680d342b6d537fb2-merged.mount: Deactivated successfully.
Nov 22 09:34:57 compute-0 podman[363645]: 2025-11-22 09:34:57.831056066 +0000 UTC m=+0.225688234 container cleanup 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:34:57 compute-0 systemd[1]: libpod-conmon-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope: Deactivated successfully.
Nov 22 09:34:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 295 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.0 MiB/s wr, 216 op/s
Nov 22 09:34:57 compute-0 podman[363715]: 2025-11-22 09:34:57.948533337 +0000 UTC m=+0.086988284 container remove 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7ae19c-a83d-4c73-afca-27f5df6c8673]: (4, ('Sat Nov 22 09:34:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 (3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf)\n3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf\nSat Nov 22 09:34:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 (3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf)\n3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.958 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[199f0955-3734-4bc8-96bd-9826553fbea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.959 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f88390-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 kernel: tap15f88390-10: left promiscuous mode
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28c2a5a8-67ad-4efd-aff3-e10f55258b5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.989 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f424b037-db19-450a-bb4e-bd4237492dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:57 compute-0 nova_compute[253661]: 2025-11-22 09:34:57.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:34:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa03396d-a7fe-47e9-9534-30ea6f1a204c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33739d93-70e6-4422-9e8a-cc2b7bbae122]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692565, 'reachable_time': 42158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363764, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d15f88390\x2d1071\x2d41f9\x2db1a4\x2d108f4f3845d0.mount: Deactivated successfully.
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.017 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.018 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8f482a38-f03d-4b95-805d-1e48bd9b562f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.022 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.023 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cd4617-0058-4707-a7b3-9db6d2a41e79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.025 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.026 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:34:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4a6f8c-d76a-4b21-87aa-e12f54a313ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 361d3f1d-84a4-4159-a69a-8a0254446ab6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804098.1327739, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Resumed (Lifecycle Event)
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.141 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.141 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.145 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance spawned successfully.
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.145 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.171 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.175 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.177 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.177 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.207 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.208 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804098.1390717, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.208 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Started (Lifecycle Event)
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.241 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.245 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.276 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.366 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.702 253665 DEBUG nova.compute.manager [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.703 253665 DEBUG nova.compute.manager [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.703 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.704 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.704 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.774 253665 INFO nova.virt.libvirt.driver [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deleting instance files /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_del
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.774 253665 INFO nova.virt.libvirt.driver [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deletion of /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_del complete
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.842 253665 INFO nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 1.53 seconds to destroy the instance on the hypervisor.
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.843 253665 DEBUG oslo.service.loopingcall [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.843 253665 DEBUG nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.844 253665 DEBUG nova.network.neutron [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:58 compute-0 nova_compute[253661]: 2025-11-22 09:34:58.950 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.044 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.045 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.054 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.055 253665 INFO nova.compute.claims [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.258 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:34:59 compute-0 ceph-mon[75021]: pgmap v2186: 305 pgs: 305 active+clean; 295 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.0 MiB/s wr, 216 op/s
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.552 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.553 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.553 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.554 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.554 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.555 253665 INFO nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Terminating instance
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquired lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.630 253665 DEBUG nova.network.neutron [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.651 253665 INFO nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 0.81 seconds to deallocate network for instance.
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.693 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.752 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:34:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:34:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326564271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.779 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.786 253665 DEBUG nova.compute.provider_tree [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.802 253665 DEBUG nova.scheduler.client.report [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.827 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.828 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.830 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:34:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.1 MiB/s wr, 294 op/s
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.879 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.879 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.892 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.906 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:34:59 compute-0 nova_compute[253661]: 2025-11-22 09:34:59.937 253665 DEBUG oslo_concurrency.processutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.261 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.262 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.263 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating image(s)
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.292 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.325 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.351 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.356 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3326564271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988217623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.415 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.418 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated VIF entry in instance network info cache for port 8da41f38-3812-4494-9cab-c4854772a569. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.419 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.421 253665 DEBUG nova.policy [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.427 253665 DEBUG oslo_concurrency.processutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.433 253665 DEBUG nova.compute.provider_tree [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.459 253665 DEBUG nova.scheduler.client.report [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.463 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Releasing lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.463 253665 DEBUG nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.465 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.466 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.466 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.467 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.468 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.489 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.493 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.555 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.580 253665 INFO nova.scheduler.client.report [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance da98da35-5fb2-47cd-9d6b-a3bb2254bec9
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.646 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 22 09:35:00 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Consumed 2.821s CPU time.
Nov 22 09:35:00 compute-0 systemd-machined[215941]: Machine qemu-134-instance-0000006a terminated.
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.777 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.785 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.786 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-deleted-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.799 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.800 253665 DEBUG nova.objects.instance [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'resources' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:00 compute-0 nova_compute[253661]: 2025-11-22 09:35:00.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.043 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.125 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.271 253665 DEBUG nova.objects.instance [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.285 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.286 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Ensure instance console log exists: /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.286 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.287 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.287 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:01 compute-0 ceph-mon[75021]: pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.1 MiB/s wr, 294 op/s
Nov 22 09:35:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1988217623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 239 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 307 op/s
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.947 253665 INFO nova.virt.libvirt.driver [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting instance files /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del
Nov 22 09:35:01 compute-0 nova_compute[253661]: 2025-11-22 09:35:01.948 253665 INFO nova.virt.libvirt.driver [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deletion of /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del complete
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.007 253665 INFO nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 1.54 seconds to destroy the instance on the hypervisor.
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.008 253665 DEBUG oslo.service.loopingcall [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.009 253665 DEBUG nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.009 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.287 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.301 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.316 253665 INFO nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 0.31 seconds to deallocate network for instance.
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.375 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.375 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.460 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Successfully created port: 0334ba91-f8b0-462b-a47b-b421e8796a21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.508 253665 DEBUG oslo_concurrency.processutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018322390644331574 of space, bias 1.0, pg target 0.5496717193299472 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:35:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:35:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097270545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.985 253665 DEBUG oslo_concurrency.processutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:02 compute-0 nova_compute[253661]: 2025-11-22 09:35:02.993 253665 DEBUG nova.compute.provider_tree [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:03 compute-0 nova_compute[253661]: 2025-11-22 09:35:03.016 253665 DEBUG nova.scheduler.client.report [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:03 compute-0 nova_compute[253661]: 2025-11-22 09:35:03.051 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:03 compute-0 nova_compute[253661]: 2025-11-22 09:35:03.094 253665 INFO nova.scheduler.client.report [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Deleted allocations for instance 361d3f1d-84a4-4159-a69a-8a0254446ab6
Nov 22 09:35:03 compute-0 nova_compute[253661]: 2025-11-22 09:35:03.191 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:03 compute-0 ceph-mon[75021]: pgmap v2188: 305 pgs: 305 active+clean; 239 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 307 op/s
Nov 22 09:35:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2097270545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 348 op/s
Nov 22 09:35:04 compute-0 ovn_controller[152872]: 2025-11-22T09:35:04Z|01078|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 09:35:04 compute-0 ovn_controller[152872]: 2025-11-22T09:35:04Z|01079|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.562 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Successfully updated port: 0334ba91-f8b0-462b-a47b-b421e8796a21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.582 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.583 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.583 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.640 253665 DEBUG nova.compute.manager [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.641 253665 DEBUG nova.compute.manager [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.641 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:04 compute-0 nova_compute[253661]: 2025-11-22 09:35:04.734 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.391 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.391 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.393 253665 INFO nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Terminating instance
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.395 253665 DEBUG nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:35:05 compute-0 ceph-mon[75021]: pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 348 op/s
Nov 22 09:35:05 compute-0 kernel: tap4d3de607-ad (unregistering): left promiscuous mode
Nov 22 09:35:05 compute-0 NetworkManager[48920]: <info>  [1763804105.4622] device (tap4d3de607-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 ovn_controller[152872]: 2025-11-22T09:35:05Z|01080|binding|INFO|Releasing lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a from this chassis (sb_readonly=0)
Nov 22 09:35:05 compute-0 ovn_controller[152872]: 2025-11-22T09:35:05Z|01081|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a down in Southbound
Nov 22 09:35:05 compute-0 ovn_controller[152872]: 2025-11-22T09:35:05Z|01082|binding|INFO|Removing iface tap4d3de607-ad ovn-installed in OVS
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.485 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:2e:8f 10.100.0.14'], port_security=['fa:16:3e:cf:2e:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ea999ce-3074-41ab-b630-d39c003b894a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1557c77-7174-4c01-8889-0c9609535e78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f542600-846f-418d-bf6a-c20db70e9dc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.491 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a in datapath 5ea999ce-3074-41ab-b630-d39c003b894a unbound from our chassis
Nov 22 09:35:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ea999ce-3074-41ab-b630-d39c003b894a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:35:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68f40c0e-4a24-495b-9cc1-0511c994a047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.496 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a namespace which is not needed anymore
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 22 09:35:05 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 17.630s CPU time.
Nov 22 09:35:05 compute-0 systemd-machined[215941]: Machine qemu-128-instance-00000068 terminated.
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.633 253665 INFO nova.virt.libvirt.driver [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance destroyed successfully.
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.634 253665 DEBUG nova.objects.instance [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.645 253665 DEBUG nova.virt.libvirt.vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.647 253665 DEBUG nova.network.os_vif_util [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.648 253665 DEBUG nova.network.os_vif_util [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.649 253665 DEBUG os_vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.653 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d3de607-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.665 253665 INFO os_vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad')
Nov 22 09:35:05 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : haproxy version is 2.8.14-c23fe91
Nov 22 09:35:05 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : path to executable is /usr/sbin/haproxy
Nov 22 09:35:05 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [ALERT]    (360040) : Current worker (360045) exited with code 143 (Terminated)
Nov 22 09:35:05 compute-0 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [WARNING]  (360040) : All workers exited. Exiting... (0)
Nov 22 09:35:05 compute-0 systemd[1]: libpod-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope: Deactivated successfully.
Nov 22 09:35:05 compute-0 conmon[360036]: conmon c99013bc2fb20721eb98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope/container/memory.events
Nov 22 09:35:05 compute-0 podman[364050]: 2025-11-22 09:35:05.783127653 +0000 UTC m=+0.168209254 container died c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:35:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 279 op/s
Nov 22 09:35:05 compute-0 nova_compute[253661]: 2025-11-22 09:35:05.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b-userdata-shm.mount: Deactivated successfully.
Nov 22 09:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f254f240f0f6ebbbb79d2bb4edf4897aae117a89bf2f466a7a1a47ed4367c5b-merged.mount: Deactivated successfully.
Nov 22 09:35:06 compute-0 podman[364050]: 2025-11-22 09:35:06.001552495 +0000 UTC m=+0.386634086 container cleanup c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:35:06 compute-0 systemd[1]: libpod-conmon-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope: Deactivated successfully.
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.513 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.531 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.532 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance network_info: |[{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.534 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.534 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.538 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start _get_guest_xml network_info=[{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.542 253665 WARNING nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.551 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.552 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.555 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.557 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.560 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.560 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.563 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.734 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.736 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.736 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.737 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.737 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:06 compute-0 podman[364104]: 2025-11-22 09:35:06.886478282 +0000 UTC m=+0.858519861 container remove c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[269367ee-c180-46bc-b923-0a5b2eb52d3a]: (4, ('Sat Nov 22 09:35:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a (c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b)\nc99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b\nSat Nov 22 09:35:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a (c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b)\nc99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa38ac67-9d3c-4582-9fd1-a30e3743cd5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea999ce-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:06 compute-0 kernel: tap5ea999ce-30: left promiscuous mode
Nov 22 09:35:06 compute-0 nova_compute[253661]: 2025-11-22 09:35:06.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.944 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6daf320d-29f0-49d1-868f-6a4e10d05745]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ffe19ce-fc0f-4e1f-8b5a-40a75b0d9c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdb69db-cda3-46a4-8883-17c330298c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba3d4c0-d03b-4545-877f-b90ba97f2c3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688459, 'reachable_time': 25486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364139, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d5ea999ce\x2d3074\x2d41ab\x2db630\x2dd39c003b894a.mount: Deactivated successfully.
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.997 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:35:06 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.997 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfaaeb9-d8fa-4ced-b71f-f24f8f93a98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616839578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.022 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.048 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.053 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845275590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.541 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.543 253665 DEBUG nova.virt.libvirt.vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:59Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.544 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.545 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.546 253665 DEBUG nova.objects.instance [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.562 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <uuid>2a866674-0c27-4cfc-89f2-dfe8e9768900</uuid>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <name>instance-0000006c</name>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-534004704</nova:name>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:06</nova:creationTime>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <nova:port uuid="0334ba91-f8b0-462b-a47b-b421e8796a21">
Nov 22 09:35:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb6:3376" ipVersion="6"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="serial">2a866674-0c27-4cfc-89f2-dfe8e9768900</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="uuid">2a866674-0c27-4cfc-89f2-dfe8e9768900</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2a866674-0c27-4cfc-89f2-dfe8e9768900_disk">
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config">
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b6:33:76"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <target dev="tap0334ba91-f8"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/console.log" append="off"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:07 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:07 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:07 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:07 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:07 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Preparing to wait for external event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG nova.virt.libvirt.vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:59Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.566 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.567 253665 DEBUG os_vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.568 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.568 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.571 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0334ba91-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.572 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0334ba91-f8, col_values=(('external_ids', {'iface-id': '0334ba91-f8b0-462b-a47b-b421e8796a21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:33:76', 'vm-uuid': '2a866674-0c27-4cfc-89f2-dfe8e9768900'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:07 compute-0 NetworkManager[48920]: <info>  [1763804107.5754] manager: (tap0334ba91-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.580 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.585 253665 INFO os_vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8')
Nov 22 09:35:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 281 op/s
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.916 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.917 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.917 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:b6:33:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.918 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Using config drive
Nov 22 09:35:07 compute-0 nova_compute[253661]: 2025-11-22 09:35:07.983 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:08 compute-0 ceph-mon[75021]: pgmap v2190: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 279 op/s
Nov 22 09:35:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2616839578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1845275590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.197 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.197 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.213 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.213 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 WARNING nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received unexpected event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with vm_state active and task_state deleting.
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.395 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.396 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.405 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating config drive at /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.415 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_wjotg0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.476 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.587 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_wjotg0z" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.624 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:08 compute-0 nova_compute[253661]: 2025-11-22 09:35:08.628 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:09 compute-0 ceph-mon[75021]: pgmap v2191: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 281 op/s
Nov 22 09:35:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 22 09:35:10 compute-0 nova_compute[253661]: 2025-11-22 09:35:10.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 ceph-mon[75021]: pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.359 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.360 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deleting local config drive /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config because it was imported into RBD.
Nov 22 09:35:11 compute-0 kernel: tap0334ba91-f8: entered promiscuous mode
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.4296] manager: (tap0334ba91-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Nov 22 09:35:11 compute-0 ovn_controller[152872]: 2025-11-22T09:35:11Z|01083|binding|INFO|Claiming lport 0334ba91-f8b0-462b-a47b-b421e8796a21 for this chassis.
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 ovn_controller[152872]: 2025-11-22T09:35:11Z|01084|binding|INFO|0334ba91-f8b0-462b-a47b-b421e8796a21: Claiming fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], port_security=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:feb6:3376/64', 'neutron:device_id': '2a866674-0c27-4cfc-89f2-dfe8e9768900', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0334ba91-f8b0-462b-a47b-b421e8796a21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.443 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0334ba91-f8b0-462b-a47b-b421e8796a21 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 bound to our chassis
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 09:35:11 compute-0 systemd-udevd[364255]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84322510-427e-43fc-bed1-505f93cf6146]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.512 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3e4e01e-51 in ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.513 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3e4e01e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ceae220-44d7-416a-8608-b38db41a1713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7af97014-de38-426a-a3e5-581c7e3e7f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_controller[152872]: 2025-11-22T09:35:11Z|01085|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 ovn-installed in OVS
Nov 22 09:35:11 compute-0 ovn_controller[152872]: 2025-11-22T09:35:11Z|01086|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 up in Southbound
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.5310] device (tap0334ba91-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.5322] device (tap0334ba91-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.532 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[644a19f5-a28a-4489-81dd-7a70045f4de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 systemd-machined[215941]: New machine qemu-135-instance-0000006c.
Nov 22 09:35:11 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b68f0396-dfda-4350-a38c-53789ea11f93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.594 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[64632633-f2ca-4c4b-bf6a-6b8867e96a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 systemd-udevd[364260]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.6018] manager: (tapd3e4e01e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f4078-92e2-47f6-8c41-df8259c2d40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.637 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[477d1edd-1461-4a45-8875-7dd61fe07367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28184a-e362-4648-9f90-cc1f92b6ee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.6709] device (tapd3e4e01e-50): carrier: link connected
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.680 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2199ce95-7a4b-43e5-8408-8435304f39bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[966aff60-6962-48ac-bffa-124951f4c476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364289, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdd3154-e080-496b-a122-573a97753d56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:a9a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696240, 'tstamp': 696240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364290, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3b9fc2-6dc2-43d3-8d9d-c22c63bb7a6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364291, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[941bbee7-0f4b-4792-8056-2085b4a936e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.848 253665 DEBUG nova.compute.manager [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.848 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG nova.compute.manager [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Processing event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:35:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 206 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.889 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c93d2d88-3520-4c4c-b783-46ccd350a144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.890 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 NetworkManager[48920]: <info>  [1763804111.8946] manager: (tapd3e4e01e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 22 09:35:11 compute-0 kernel: tapd3e4e01e-50: entered promiscuous mode
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.898 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.902 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.903 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c2db5-3a06-405c-b2e5-fcf64a825264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.904 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:35:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.906 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'env', 'PROCESS_TAG=haproxy-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3e4e01e-5e3e-4572-b404-ee47aaec1186.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:35:11 compute-0 ovn_controller[152872]: 2025-11-22T09:35:11Z|01087|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 09:35:11 compute-0 nova_compute[253661]: 2025-11-22 09:35:11.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:35:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:35:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:35:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.427 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.429 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4269822, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.429 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Started (Lifecycle Event)
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.434 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.440 253665 INFO nova.virt.libvirt.driver [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance spawned successfully.
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.440 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.455 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.462 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.468 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.470 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.470 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:12 compute-0 podman[364364]: 2025-11-22 09:35:12.378177813 +0000 UTC m=+0.057068620 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4285688, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Paused (Lifecycle Event)
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.522 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.526 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4323664, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.526 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Resumed (Lifecycle Event)
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.545 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.549 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.553 253665 INFO nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 12.29 seconds to spawn the instance on the hypervisor.
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.554 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.576 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:35:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.620 253665 INFO nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 13.62 seconds to build instance.
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.623 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804097.6225607, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.623 253665 INFO nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Stopped (Lifecycle Event)
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.641 253665 DEBUG nova.compute.manager [None req-fd156203-268a-470d-b710-e96f51238074 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.642 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:12 compute-0 podman[364364]: 2025-11-22 09:35:12.748551844 +0000 UTC m=+0.427442661 container create a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.772575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112772654, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1330, "num_deletes": 251, "total_data_size": 1917534, "memory_usage": 1944960, "flush_reason": "Manual Compaction"}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112795130, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1886781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43971, "largest_seqno": 45300, "table_properties": {"data_size": 1880601, "index_size": 3383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13805, "raw_average_key_size": 20, "raw_value_size": 1867953, "raw_average_value_size": 2726, "num_data_blocks": 151, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803988, "oldest_key_time": 1763803988, "file_creation_time": 1763804112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 22624 microseconds, and 10468 cpu microseconds.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.795197) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1886781 bytes OK
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.795230) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798865) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798894) EVENT_LOG_v1 {"time_micros": 1763804112798887, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798917) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1911501, prev total WAL file size 1911501, number of live WAL files 2.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.799933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1842KB)], [101(8544KB)]
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112800029, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10635854, "oldest_snapshot_seqno": -1}
Nov 22 09:35:12 compute-0 systemd[1]: Started libpod-conmon-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope.
Nov 22 09:35:12 compute-0 nova_compute[253661]: 2025-11-22 09:35:12.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c895ab309e207727e077cfd1951c34149c6808ab71f4a6a14342fc5743fd6a36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6659 keys, 8978774 bytes, temperature: kUnknown
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112890177, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8978774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8933739, "index_size": 27259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 173097, "raw_average_key_size": 25, "raw_value_size": 8813855, "raw_average_value_size": 1323, "num_data_blocks": 1065, "num_entries": 6659, "num_filter_entries": 6659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.890566) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8978774 bytes
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.894535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.8 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(10.4) write-amplify(4.8) OK, records in: 7173, records dropped: 514 output_compression: NoCompression
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.894585) EVENT_LOG_v1 {"time_micros": 1763804112894564, "job": 60, "event": "compaction_finished", "compaction_time_micros": 90288, "compaction_time_cpu_micros": 35399, "output_level": 6, "num_output_files": 1, "total_output_size": 8978774, "num_input_records": 7173, "num_output_records": 6659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112895080, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112896331, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.799690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:35:12 compute-0 podman[364364]: 2025-11-22 09:35:12.901211551 +0000 UTC m=+0.580102348 container init a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:35:12 compute-0 podman[364364]: 2025-11-22 09:35:12.907721452 +0000 UTC m=+0.586612259 container start a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:35:12 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : New worker (364387) forked
Nov 22 09:35:12 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : Loading success.
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.130 253665 INFO nova.virt.libvirt.driver [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deleting instance files /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_del
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.132 253665 INFO nova.virt.libvirt.driver [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deletion of /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_del complete
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.192 253665 INFO nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 7.80 seconds to destroy the instance on the hypervisor.
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG oslo.service.loopingcall [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG nova.network.neutron [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:35:13 compute-0 ceph-mon[75021]: pgmap v2193: 305 pgs: 305 active+clean; 206 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Nov 22 09:35:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 179 op/s
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.960 253665 DEBUG nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.963 253665 WARNING nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received unexpected event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with vm_state active and task_state None.
Nov 22 09:35:13 compute-0 nova_compute[253661]: 2025-11-22 09:35:13.989 253665 DEBUG nova.network.neutron [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.009 253665 INFO nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 0.82 seconds to deallocate network for instance.
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.062 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.063 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.082 253665 DEBUG nova.compute.manager [req-ae10ed67-13bf-4b5c-9d65-0033deccaf76 req-37da3148-2ba8-4706-b99e-59bd4ba5289b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-deleted-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.156 253665 DEBUG oslo_concurrency.processutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:14 compute-0 ovn_controller[152872]: 2025-11-22T09:35:14Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 09:35:14 compute-0 ovn_controller[152872]: 2025-11-22T09:35:14Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 09:35:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:14.484 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:14.486 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117078659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4117078659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.647 253665 DEBUG oslo_concurrency.processutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.654 253665 DEBUG nova.compute.provider_tree [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.672 253665 DEBUG nova.scheduler.client.report [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.707 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.752 253665 INFO nova.scheduler.client.report [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance c5540f5a-8dfa-4b11-8452-c6fe99db1d64
Nov 22 09:35:14 compute-0 nova_compute[253661]: 2025-11-22 09:35:14.845 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:15 compute-0 ceph-mon[75021]: pgmap v2194: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 179 op/s
Nov 22 09:35:15 compute-0 nova_compute[253661]: 2025-11-22 09:35:15.788 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804100.7845926, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:15 compute-0 nova_compute[253661]: 2025-11-22 09:35:15.788 253665 INFO nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Stopped (Lifecycle Event)
Nov 22 09:35:15 compute-0 nova_compute[253661]: 2025-11-22 09:35:15.820 253665 DEBUG nova.compute.manager [None req-93f6abdd-e98b-4036-8c8d-410ebf43a029 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 09:35:15 compute-0 nova_compute[253661]: 2025-11-22 09:35:15.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG nova.compute.manager [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG nova.compute.manager [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.211 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.211 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:16 compute-0 nova_compute[253661]: 2025-11-22 09:35:16.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:17 compute-0 nova_compute[253661]: 2025-11-22 09:35:17.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:17 compute-0 ceph-mon[75021]: pgmap v2195: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 09:35:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 164 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 22 09:35:17 compute-0 nova_compute[253661]: 2025-11-22 09:35:17.893 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:17 compute-0 nova_compute[253661]: 2025-11-22 09:35:17.894 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:17 compute-0 nova_compute[253661]: 2025-11-22 09:35:17.913 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:18 compute-0 podman[364419]: 2025-11-22 09:35:18.381207069 +0000 UTC m=+0.062470145 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:35:18 compute-0 podman[364418]: 2025-11-22 09:35:18.414474216 +0000 UTC m=+0.102738126 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:35:18 compute-0 ovn_controller[152872]: 2025-11-22T09:35:18Z|01088|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 09:35:18 compute-0 ovn_controller[152872]: 2025-11-22T09:35:18Z|01089|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 09:35:18 compute-0 nova_compute[253661]: 2025-11-22 09:35:18.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.452 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.453 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.472 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.550 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.550 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.558 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.559 253665 INFO nova.compute.claims [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:35:19 compute-0 nova_compute[253661]: 2025-11-22 09:35:19.739 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:19 compute-0 ceph-mon[75021]: pgmap v2196: 305 pgs: 305 active+clean; 164 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 22 09:35:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 22 09:35:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584020299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.228 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.235 253665 DEBUG nova.compute.provider_tree [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.251 253665 DEBUG nova.scheduler.client.report [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.272 253665 INFO nova.compute.manager [None req-d390af23-f0e9-472c-839e-a786035cdb81 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.281 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.282 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.281 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.344 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.345 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.368 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.394 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.487 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:35:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:20.488 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.489 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.489 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating image(s)
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.509 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.534 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.561 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.567 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.623 253665 DEBUG nova.policy [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31c7a4aa8fa340d2881ddc3ed426b6db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a31947dfacfc450ba028c42968f103b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.631 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804105.6253514, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.632 253665 INFO nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Stopped (Lifecycle Event)
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.650 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.651 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.651 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.652 253665 DEBUG nova.compute.manager [None req-d4086b6c-08a3-475c-8c1c-ff5e85914d49 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.657 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.658 253665 DEBUG nova.objects.instance [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.663 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.664 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.664 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.665 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.689 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.693 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.746 253665 DEBUG nova.virt.libvirt.driver [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:35:20 compute-0 nova_compute[253661]: 2025-11-22 09:35:20.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:20 compute-0 ceph-mon[75021]: pgmap v2197: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 22 09:35:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2584020299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:21 compute-0 nova_compute[253661]: 2025-11-22 09:35:21.240 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Successfully created port: c027d879-91b3-497d-9f51-8476006ea65c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:35:21 compute-0 nova_compute[253661]: 2025-11-22 09:35:21.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 178 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 22 09:35:22 compute-0 nova_compute[253661]: 2025-11-22 09:35:22.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:22 compute-0 nova_compute[253661]: 2025-11-22 09:35:22.825 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:22 compute-0 nova_compute[253661]: 2025-11-22 09:35:22.894 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] resizing rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.049 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Successfully updated port: c027d879-91b3-497d-9f51-8476006ea65c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.057 253665 DEBUG nova.compute.manager [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.058 253665 DEBUG nova.compute.manager [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.058 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.059 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.059 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.218 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:23 compute-0 ceph-mon[75021]: pgmap v2198: 305 pgs: 305 active+clean; 178 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.591 253665 DEBUG nova.objects.instance [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.611 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.613 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ensure instance console log exists: /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.613 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.614 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.614 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.644 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.669 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.671 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:23 compute-0 nova_compute[253661]: 2025-11-22 09:35:23.811 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:23 compute-0 sudo[364640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:23 compute-0 sudo[364640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:23 compute-0 sudo[364640]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 09:35:23 compute-0 sudo[364665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:35:23 compute-0 sudo[364665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:23 compute-0 sudo[364665]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:23 compute-0 sudo[364690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:23 compute-0 sudo[364690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:23 compute-0 sudo[364690]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:24 compute-0 sudo[364715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:35:24 compute-0 kernel: tap8da41f38-38 (unregistering): left promiscuous mode
Nov 22 09:35:24 compute-0 sudo[364715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:24 compute-0 NetworkManager[48920]: <info>  [1763804124.0739] device (tap8da41f38-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:35:24 compute-0 ovn_controller[152872]: 2025-11-22T09:35:24Z|01090|binding|INFO|Releasing lport 8da41f38-3812-4494-9cab-c4854772a569 from this chassis (sb_readonly=0)
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:24 compute-0 ovn_controller[152872]: 2025-11-22T09:35:24Z|01091|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 down in Southbound
Nov 22 09:35:24 compute-0 ovn_controller[152872]: 2025-11-22T09:35:24Z|01092|binding|INFO|Removing iface tap8da41f38-38 ovn-installed in OVS
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.090 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.092 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc unbound from our chassis
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.094 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20570e02-4f3c-425d-9564-924b275d70dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.095 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c0d617-a059-428f-a671-dfa8b62b79d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.096 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace which is not needed anymore
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:24 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 22 09:35:24 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 15.508s CPU time.
Nov 22 09:35:24 compute-0 systemd-machined[215941]: Machine qemu-133-instance-0000006b terminated.
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : haproxy version is 2.8.14-c23fe91
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : path to executable is /usr/sbin/haproxy
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : Exiting Master process...
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : Exiting Master process...
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [ALERT]    (363285) : Current worker (363305) exited with code 143 (Terminated)
Nov 22 09:35:24 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : All workers exited. Exiting... (0)
Nov 22 09:35:24 compute-0 systemd[1]: libpod-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope: Deactivated successfully.
Nov 22 09:35:24 compute-0 podman[364765]: 2025-11-22 09:35:24.28307829 +0000 UTC m=+0.072033692 container died 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:35:24 compute-0 NetworkManager[48920]: <info>  [1763804124.3103] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 22 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2-userdata-shm.mount: Deactivated successfully.
Nov 22 09:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-054c9b7e428385e12a38fc4d69601f1307ee63ce36fecad82d262095c96a25dd-merged.mount: Deactivated successfully.
Nov 22 09:35:24 compute-0 podman[364765]: 2025-11-22 09:35:24.349354038 +0000 UTC m=+0.138309430 container cleanup 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:35:24 compute-0 systemd[1]: libpod-conmon-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope: Deactivated successfully.
Nov 22 09:35:24 compute-0 podman[364813]: 2025-11-22 09:35:24.434453225 +0000 UTC m=+0.054650430 container remove 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[15181837-cadd-476b-a531-b44be9dde25d]: (4, ('Sat Nov 22 09:35:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2)\n83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2\nSat Nov 22 09:35:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2)\n83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.444 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23f51168-d4c1-48db-975a-81a590ee88e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.446 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:24 compute-0 kernel: tap20570e02-40: left promiscuous mode
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.473 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5cfe94-dde6-42dc-9b41-87c8c173b281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1db4219-f8b1-42aa-be46-40b75c04a40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcccaf52-807d-4752-8a76-35afbb7fdd62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.516 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85850246-2561-4878-8cba-98df1f1fe106]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694389, 'reachable_time': 23472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364842, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d20570e02\x2d4f3c\x2d425d\x2d9564\x2d924b275d70dc.mount: Deactivated successfully.
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.520 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:35:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.521 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c4d93b-8d5b-41e6-b79b-8f377776ea9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:24 compute-0 podman[364829]: 2025-11-22 09:35:24.598108295 +0000 UTC m=+0.107227708 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:35:24 compute-0 sudo[364715]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:24 compute-0 sudo[364874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:24 compute-0 sudo[364874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:24 compute-0 sudo[364874]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.773 253665 INFO nova.virt.libvirt.driver [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance shutdown successfully after 4 seconds.
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.781 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.781 253665 DEBUG nova.objects.instance [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.797 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:24 compute-0 sudo[364899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:35:24 compute-0 sudo[364899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:24 compute-0 sudo[364899]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:24 compute-0 nova_compute[253661]: 2025-11-22 09:35:24.865 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:24 compute-0 sudo[364924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:24 compute-0 sudo[364924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:24 compute-0 sudo[364924]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:24 compute-0 sudo[364949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- inventory --format=json-pretty --filter-for-batch
Nov 22 09:35:24 compute-0 sudo[364949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:25 compute-0 podman[365013]: 2025-11-22 09:35:25.324378556 +0000 UTC m=+0.047458731 container create 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:35:25 compute-0 systemd[1]: Started libpod-conmon-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope.
Nov 22 09:35:25 compute-0 podman[365013]: 2025-11-22 09:35:25.305140918 +0000 UTC m=+0.028221123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:25 compute-0 podman[365013]: 2025-11-22 09:35:25.424079006 +0000 UTC m=+0.147159201 container init 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:35:25 compute-0 podman[365013]: 2025-11-22 09:35:25.433711525 +0000 UTC m=+0.156791700 container start 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:35:25 compute-0 podman[365013]: 2025-11-22 09:35:25.438548945 +0000 UTC m=+0.161629120 container attach 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 09:35:25 compute-0 eloquent_margulis[365030]: 167 167
Nov 22 09:35:25 compute-0 systemd[1]: libpod-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope: Deactivated successfully.
Nov 22 09:35:25 compute-0 ceph-mon[75021]: pgmap v2199: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 09:35:25 compute-0 conmon[365030]: conmon 5c960927c72689ed587c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope/container/memory.events
Nov 22 09:35:25 compute-0 podman[365035]: 2025-11-22 09:35:25.4857918 +0000 UTC m=+0.023797082 container died 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae89b6c82d29cc8455cba0297577334154f152d0797137de100c71f6ecf2dec5-merged.mount: Deactivated successfully.
Nov 22 09:35:25 compute-0 podman[365035]: 2025-11-22 09:35:25.527857856 +0000 UTC m=+0.065863138 container remove 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:35:25 compute-0 systemd[1]: libpod-conmon-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope: Deactivated successfully.
Nov 22 09:35:25 compute-0 podman[365054]: 2025-11-22 09:35:25.714223281 +0000 UTC m=+0.042719974 container create c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:35:25 compute-0 systemd[1]: Started libpod-conmon-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope.
Nov 22 09:35:25 compute-0 podman[365054]: 2025-11-22 09:35:25.696730536 +0000 UTC m=+0.025227249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:25 compute-0 podman[365054]: 2025-11-22 09:35:25.811864769 +0000 UTC m=+0.140361492 container init c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:35:25 compute-0 podman[365054]: 2025-11-22 09:35:25.818790982 +0000 UTC m=+0.147287675 container start c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:35:25 compute-0 podman[365054]: 2025-11-22 09:35:25.82195324 +0000 UTC m=+0.150449933 container attach c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.860 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.897 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance network_info: |[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.901 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start _get_guest_xml network_info=[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.907 253665 WARNING nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.916 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.918 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.922 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.922 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.923 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:25 compute-0 nova_compute[253661]: 2025-11-22 09:35:25.929 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.016 253665 DEBUG nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.021 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.023 253665 WARNING nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state stopped and task_state None.
Nov 22 09:35:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804604923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.430 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1804604923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.479 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.486 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:26 compute-0 ovn_controller[152872]: 2025-11-22T09:35:26Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:33:76 10.100.0.5
Nov 22 09:35:26 compute-0 ovn_controller[152872]: 2025-11-22T09:35:26Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:33:76 10.100.0.5
Nov 22 09:35:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208672219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.990 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.994 253665 DEBUG nova.virt.libvirt.vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeT
estJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:20Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.994 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.996 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:26 compute-0 nova_compute[253661]: 2025-11-22 09:35:26.997 253665 DEBUG nova.objects.instance [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.021 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <uuid>cf5e117a-f203-4c8f-b795-01fb355ca5e8</uuid>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <name>instance-0000006d</name>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersNegativeTestJSON-server-627235813</nova:name>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:25</nova:creationTime>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <nova:port uuid="c027d879-91b3-497d-9f51-8476006ea65c">
Nov 22 09:35:27 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="serial">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="uuid">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk">
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config">
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:d9:42:5a"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <target dev="tapc027d879-91"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log" append="off"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:27 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:27 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:27 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:27 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:27 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.023 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Preparing to wait for external event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.023 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.024 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.024 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.025 253665 DEBUG nova.virt.libvirt.vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:20Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.025 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.026 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.026 253665 DEBUG os_vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.027 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.028 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.033 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.034 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:27 compute-0 NetworkManager[48920]: <info>  [1763804127.0846] manager: (tapc027d879-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.098 253665 INFO os_vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.173 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.174 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.174 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:d9:42:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.177 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Using config drive
Nov 22 09:35:27 compute-0 nova_compute[253661]: 2025-11-22 09:35:27.200 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:27 compute-0 zen_lalande[365070]: [
Nov 22 09:35:27 compute-0 zen_lalande[365070]:     {
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "available": false,
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "ceph_device": false,
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "lsm_data": {},
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "lvs": [],
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "path": "/dev/sr0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "rejected_reasons": [
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "Insufficient space (<5GB)",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "Has a FileSystem"
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         ],
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         "sys_api": {
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "actuators": null,
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "device_nodes": "sr0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "devname": "sr0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "human_readable_size": "482.00 KB",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "id_bus": "ata",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "model": "QEMU DVD-ROM",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "nr_requests": "2",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "parent": "/dev/sr0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "partitions": {},
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "path": "/dev/sr0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "removable": "1",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "rev": "2.5+",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "ro": "0",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "rotational": "1",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "sas_address": "",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "sas_device_handle": "",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "scheduler_mode": "mq-deadline",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "sectors": 0,
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "sectorsize": "2048",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "size": 493568.0,
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "support_discard": "2048",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "type": "disk",
Nov 22 09:35:27 compute-0 zen_lalande[365070]:             "vendor": "QEMU"
Nov 22 09:35:27 compute-0 zen_lalande[365070]:         }
Nov 22 09:35:27 compute-0 zen_lalande[365070]:     }
Nov 22 09:35:27 compute-0 zen_lalande[365070]: ]
Nov 22 09:35:27 compute-0 systemd[1]: libpod-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Deactivated successfully.
Nov 22 09:35:27 compute-0 systemd[1]: libpod-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Consumed 1.619s CPU time.
Nov 22 09:35:27 compute-0 podman[365054]: 2025-11-22 09:35:27.452982592 +0000 UTC m=+1.781479285 container died c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:35:27 compute-0 ceph-mon[75021]: pgmap v2200: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Nov 22 09:35:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/208672219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87-merged.mount: Deactivated successfully.
Nov 22 09:35:27 compute-0 podman[365054]: 2025-11-22 09:35:27.514961283 +0000 UTC m=+1.843457976 container remove c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:35:27 compute-0 systemd[1]: libpod-conmon-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Deactivated successfully.
Nov 22 09:35:27 compute-0 sudo[364949]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev cb3680f0-f703-4038-9259-c54cca930ac7 does not exist
Nov 22 09:35:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6dbc029d-ac53-41bf-b8b3-4f810945474c does not exist
Nov 22 09:35:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a8fbdc7b-e04f-4596-82b7-cb320a8ddd01 does not exist
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:35:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:35:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:35:27 compute-0 sudo[367074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:27 compute-0 sudo[367074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:27 compute-0 sudo[367074]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:27 compute-0 sudo[367099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:35:27 compute-0 sudo[367099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:27 compute-0 sudo[367099]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:27 compute-0 sudo[367124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:27 compute-0 sudo[367124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:27 compute-0 sudo[367124]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:27 compute-0 sudo[367149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:35:27 compute-0 sudo[367149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 231 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Nov 22 09:35:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.080 253665 DEBUG nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.081 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.081 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 DEBUG nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 WARNING nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state stopped and task_state None.
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.173 253665 INFO nova.compute.manager [None req-95b77ef9-31d5-442a-b11b-39366af88e40 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.24992782 +0000 UTC m=+0.081152649 container create 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.189970189 +0000 UTC m=+0.021195048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:28 compute-0 systemd[1]: Started libpod-conmon-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope.
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.330 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.352 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG nova.network.neutron [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'info_cache' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.376934769 +0000 UTC m=+0.208159618 container init 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.386667811 +0000 UTC m=+0.217892640 container start 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.392217539 +0000 UTC m=+0.223442358 container attach 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:28 compute-0 nostalgic_bassi[367230]: 167 167
Nov 22 09:35:28 compute-0 systemd[1]: libpod-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope: Deactivated successfully.
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.393892571 +0000 UTC m=+0.225117410 container died 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4a9e48dcfee7d0acd03821a050ba22c1d6f8bfeec43cfc58bceef71be364d1-merged.mount: Deactivated successfully.
Nov 22 09:35:28 compute-0 podman[367213]: 2025-11-22 09:35:28.527188506 +0000 UTC m=+0.358413335 container remove 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:35:28 compute-0 systemd[1]: libpod-conmon-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope: Deactivated successfully.
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:35:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:35:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:28 compute-0 podman[367253]: 2025-11-22 09:35:28.728152514 +0000 UTC m=+0.049970434 container create 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.771 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating config drive at /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config
Nov 22 09:35:28 compute-0 systemd[1]: Started libpod-conmon-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope.
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.777 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz_bb24t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:28 compute-0 podman[367253]: 2025-11-22 09:35:28.703599433 +0000 UTC m=+0.025417373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:28 compute-0 podman[367253]: 2025-11-22 09:35:28.896422498 +0000 UTC m=+0.218240438 container init 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:35:28 compute-0 podman[367253]: 2025-11-22 09:35:28.904968491 +0000 UTC m=+0.226786411 container start 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:35:28 compute-0 podman[367253]: 2025-11-22 09:35:28.912497447 +0000 UTC m=+0.234315387 container attach 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.927 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz_bb24t" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.957 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:28 compute-0 nova_compute[253661]: 2025-11-22 09:35:28.963 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.408 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.408 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting local config drive /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config because it was imported into RBD.
Nov 22 09:35:29 compute-0 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.4867] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Nov 22 09:35:29 compute-0 ovn_controller[152872]: 2025-11-22T09:35:29Z|01093|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 09:35:29 compute-0 ovn_controller[152872]: 2025-11-22T09:35:29Z|01094|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.499 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.501 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.502 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:35:29 compute-0 ovn_controller[152872]: 2025-11-22T09:35:29Z|01095|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 09:35:29 compute-0 ovn_controller[152872]: 2025-11-22T09:35:29Z|01096|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 systemd-udevd[367328]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.518 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5df16e79-4143-490d-b6df-2c9315328ea1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.519 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.522 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.522 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5ee243-207a-4e1a-beb5-1a4e3ca6c35d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f96ae01-b6ae-47ab-9b5a-4ed4139b8e33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.5387] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.5402] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.541 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1070ae38-cc20-4218-9490-ecb282e0606a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 systemd-machined[215941]: New machine qemu-136-instance-0000006d.
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.559 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5accc57f-3b98-4907-b03a-33afd2dc291c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.599 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf48f2ec-2bc2-4ae1-aa46-acacecd44b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.6108] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/451)
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a214bc1-b11f-4101-9c22-349faceda2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1382c4-d480-4884-bf9b-fb49d7d3fa82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.659 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[91e8e822-8740-4c04-b092-42b93978eb84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ceph-mon[75021]: pgmap v2201: 305 pgs: 305 active+clean; 231 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.6897] device (tapa990966c-00): carrier: link connected
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.701 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5007c79e-2c67-4494-ae8f-f36668fa80b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9108d56-548c-4928-82ce-8e15eb59a141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367371, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78524a29-d966-4942-9466-831dac9eee6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698042, 'tstamp': 698042}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367374, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.773 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d587404a-ef94-4827-8a9c-e7d0dd35f774]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367377, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af730cb1-a10d-4940-9939-dfdf19a80e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 244 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 207 op/s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.899 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168f2a7b-d3e1-4db7-9095-54d9fede97ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.902 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 NetworkManager[48920]: <info>  [1763804129.9068] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 22 09:35:29 compute-0 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 ovn_controller[152872]: 2025-11-22T09:35:29Z|01097|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.937 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 nova_compute[253661]: 2025-11-22 09:35:29.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.943 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.945 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68647839-8740-4e8c-8bf8-9f61ea190f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.945 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:35:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.946 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:35:30 compute-0 hardcore_austin[367270]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:35:30 compute-0 hardcore_austin[367270]: --> relative data size: 1.0
Nov 22 09:35:30 compute-0 hardcore_austin[367270]: --> All data devices are unavailable
Nov 22 09:35:30 compute-0 systemd[1]: libpod-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Deactivated successfully.
Nov 22 09:35:30 compute-0 systemd[1]: libpod-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Consumed 1.118s CPU time.
Nov 22 09:35:30 compute-0 podman[367253]: 2025-11-22 09:35:30.12095042 +0000 UTC m=+1.442768380 container died 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.1214938, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.141 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.147 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.1217077, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.147 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.194 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21-merged.mount: Deactivated successfully.
Nov 22 09:35:30 compute-0 podman[367253]: 2025-11-22 09:35:30.315063028 +0000 UTC m=+1.636880948 container remove 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:35:30 compute-0 systemd[1]: libpod-conmon-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Deactivated successfully.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.354 253665 DEBUG nova.compute.manager [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.356 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG nova.compute.manager [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Processing event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.358 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:30 compute-0 sudo[367149]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.362 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.3620164, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.362 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.365 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.369 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance spawned successfully.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.369 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.381 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.387 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.392 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.392 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.393 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.393 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.394 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.394 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:30 compute-0 sudo[367495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:30 compute-0 sudo[367495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:30 compute-0 sudo[367495]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:30 compute-0 podman[367482]: 2025-11-22 09:35:30.36059641 +0000 UTC m=+0.029995667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.456 253665 INFO nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 9.97 seconds to spawn the instance on the hypervisor.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.457 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:30 compute-0 sudo[367520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:35:30 compute-0 sudo[367520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:30 compute-0 sudo[367520]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.526 253665 INFO nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 11.00 seconds to build instance.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.544 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:30 compute-0 sudo[367545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:30 compute-0 sudo[367545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:30 compute-0 sudo[367545]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:30 compute-0 podman[367482]: 2025-11-22 09:35:30.589138644 +0000 UTC m=+0.258537891 container create cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:35:30 compute-0 sudo[367570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:35:30 compute-0 sudo[367570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:30 compute-0 systemd[1]: Started libpod-conmon-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope.
Nov 22 09:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2118a5a184395a8c9e4712c7d009d993e2f304960246506fc15d26ee155b6cdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:30 compute-0 podman[367482]: 2025-11-22 09:35:30.721185147 +0000 UTC m=+0.390584414 container init cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.727 253665 DEBUG nova.network.neutron [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:30 compute-0 podman[367482]: 2025-11-22 09:35:30.727789332 +0000 UTC m=+0.397188569 container start cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.751 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:30 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : New worker (367603) forked
Nov 22 09:35:30 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : Loading success.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.780 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.781 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.795 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.812 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.813 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.814 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.815 253665 DEBUG os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.818 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8da41f38-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.826 253665 INFO os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.835 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start _get_guest_xml network_info=[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.849 253665 WARNING nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.856 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.858 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.865 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.866 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.867 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.867 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.868 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.868 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.869 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.869 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.870 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.870 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.871 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.874 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.874 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.875 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.875 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:30 compute-0 nova_compute[253661]: 2025-11-22 09:35:30.895 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.032682474 +0000 UTC m=+0.050168099 container create 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.009983919 +0000 UTC m=+0.027469564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:31 compute-0 systemd[1]: Started libpod-conmon-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope.
Nov 22 09:35:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.287638584 +0000 UTC m=+0.305124219 container init 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.296668729 +0000 UTC m=+0.314154344 container start 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:35:31 compute-0 systemd[1]: libpod-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope: Deactivated successfully.
Nov 22 09:35:31 compute-0 relaxed_leavitt[367687]: 167 167
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.304478013 +0000 UTC m=+0.321963628 container attach 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:31 compute-0 conmon[367687]: conmon 52022af7067a038b36c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope/container/memory.events
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.305064437 +0000 UTC m=+0.322550072 container died 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1911e3537ea44e56930853f37934d7cbec4faab451acd250ff6ad5c9106bd98-merged.mount: Deactivated successfully.
Nov 22 09:35:31 compute-0 podman[367652]: 2025-11-22 09:35:31.413484733 +0000 UTC m=+0.430970338 container remove 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:35:31 compute-0 systemd[1]: libpod-conmon-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope: Deactivated successfully.
Nov 22 09:35:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248668810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:31 compute-0 nova_compute[253661]: 2025-11-22 09:35:31.481 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:31 compute-0 nova_compute[253661]: 2025-11-22 09:35:31.532 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:31 compute-0 podman[367732]: 2025-11-22 09:35:31.680840922 +0000 UTC m=+0.117647527 container create fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:35:31 compute-0 podman[367732]: 2025-11-22 09:35:31.589382158 +0000 UTC m=+0.026188783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:31 compute-0 ceph-mon[75021]: pgmap v2202: 305 pgs: 305 active+clean; 244 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 207 op/s
Nov 22 09:35:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2248668810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:31 compute-0 systemd[1]: Started libpod-conmon-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope.
Nov 22 09:35:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:31 compute-0 podman[367732]: 2025-11-22 09:35:31.867173167 +0000 UTC m=+0.303979802 container init fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:35:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Nov 22 09:35:31 compute-0 podman[367732]: 2025-11-22 09:35:31.877173025 +0000 UTC m=+0.313979630 container start fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:35:31 compute-0 podman[367732]: 2025-11-22 09:35:31.902426373 +0000 UTC m=+0.339232978 container attach fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:35:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673530446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.048 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.053 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.054 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.056 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.058 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.072 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <uuid>3f8530ae-f429-4807-81ca-84d8f964a38c</uuid>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <name>instance-0000006b</name>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1778115453</nova:name>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:30</nova:creationTime>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <nova:port uuid="8da41f38-3812-4494-9cab-c4854772a569">
Nov 22 09:35:32 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="serial">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="uuid">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk">
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config">
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:32 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:02:ea:ba"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <target dev="tap8da41f38-38"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log" append="off"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:32 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:32 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:32 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:32 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:32 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.075 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.075 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.076 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.077 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.077 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.078 253665 DEBUG os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.079 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.080 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8da41f38-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8da41f38-38, col_values=(('external_ids', {'iface-id': '8da41f38-3812-4494-9cab-c4854772a569', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ea:ba', 'vm-uuid': '3f8530ae-f429-4807-81ca-84d8f964a38c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.0883] manager: (tap8da41f38-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.097 253665 INFO os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.2311] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Nov 22 09:35:32 compute-0 kernel: tap8da41f38-38: entered promiscuous mode
Nov 22 09:35:32 compute-0 ovn_controller[152872]: 2025-11-22T09:35:32Z|01098|binding|INFO|Claiming lport 8da41f38-3812-4494-9cab-c4854772a569 for this chassis.
Nov 22 09:35:32 compute-0 ovn_controller[152872]: 2025-11-22T09:35:32Z|01099|binding|INFO|8da41f38-3812-4494-9cab-c4854772a569: Claiming fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 systemd-udevd[367356]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.243 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.245 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc bound to our chassis
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.247 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.2542] device (tap8da41f38-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.2555] device (tap8da41f38-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:32 compute-0 ovn_controller[152872]: 2025-11-22T09:35:32Z|01100|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 ovn-installed in OVS
Nov 22 09:35:32 compute-0 ovn_controller[152872]: 2025-11-22T09:35:32Z|01101|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 up in Southbound
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78c213f0-e2dc-4db7-99c5-53f13a187e32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.270 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20570e02-41 in ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.276 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20570e02-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[032030b2-cce8-4e10-b29b-6dfe92309163]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.278 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1faa629-6408-4250-ae97-89154e196d94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 systemd-machined[215941]: New machine qemu-137-instance-0000006b.
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.297 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df4b1e41-b1ae-45cf-91d4-8f433854748f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006b.
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48c80abe-e93f-4536-a58b-0d887b0420f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.365 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[42272870-0baa-44cc-acb3-39441f3e1257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.3729] manager: (tap20570e02-40): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f90880-f89c-4526-a9b8-0e5dca33a9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.413 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0f26985d-7ab8-439c-9338-8b0f4796ab30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.418 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a85c2103-71b0-4cd9-87f0-5ef104f5d3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.4467] device (tap20570e02-40): carrier: link connected
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.456 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd10f4b1-31c6-435e-a738-66cd04d36ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2005de86-e277-4240-8bdd-0f7ff9d86415]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698318, 'reachable_time': 40672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367809, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.535 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47df396e-80b7-40c0-9513-23a31d6a2868]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:a4f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698318, 'tstamp': 698318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367810, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3478c5e7-eedc-4860-93d1-de3ccd3a923b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698318, 'reachable_time': 40672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367811, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc862b2b-edac-49ca-bfdc-5406cee9acd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[277ac18a-0224-4f83-a653-72934d36fa3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.706 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.706 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.707 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20570e02-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 kernel: tap20570e02-40: entered promiscuous mode
Nov 22 09:35:32 compute-0 NetworkManager[48920]: <info>  [1763804132.7106] manager: (tap20570e02-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.712 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20570e02-40, col_values=(('external_ids', {'iface-id': '4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:32 compute-0 ovn_controller[152872]: 2025-11-22T09:35:32Z|01102|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.717 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 sweet_easley[367770]: {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     "0": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "devices": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "/dev/loop3"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             ],
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_name": "ceph_lv0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_size": "21470642176",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "name": "ceph_lv0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "tags": {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_name": "ceph",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.crush_device_class": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.encrypted": "0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_id": "0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.vdo": "0"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             },
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "vg_name": "ceph_vg0"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         }
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     ],
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     "1": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "devices": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "/dev/loop4"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             ],
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_name": "ceph_lv1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_size": "21470642176",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "name": "ceph_lv1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "tags": {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_name": "ceph",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.crush_device_class": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.encrypted": "0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_id": "1",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.vdo": "0"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             },
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "vg_name": "ceph_vg1"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         }
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     ],
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     "2": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "devices": [
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "/dev/loop5"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             ],
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_name": "ceph_lv2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_size": "21470642176",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "name": "ceph_lv2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "tags": {
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.cluster_name": "ceph",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.crush_device_class": "",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.encrypted": "0",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osd_id": "2",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:                 "ceph.vdo": "0"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             },
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "type": "block",
Nov 22 09:35:32 compute-0 sweet_easley[367770]:             "vg_name": "ceph_vg2"
Nov 22 09:35:32 compute-0 sweet_easley[367770]:         }
Nov 22 09:35:32 compute-0 sweet_easley[367770]:     ]
Nov 22 09:35:32 compute-0 sweet_easley[367770]: }
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a75ef93-9ce1-419c-aa03-226eea98bc36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.733 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:35:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.734 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'env', 'PROCESS_TAG=haproxy-20570e02-4f3c-425d-9564-924b275d70dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20570e02-4f3c-425d-9564-924b275d70dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.736 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:32 compute-0 systemd[1]: libpod-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope: Deactivated successfully.
Nov 22 09:35:32 compute-0 conmon[367770]: conmon fcadfc99b71401a58ecd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope/container/memory.events
Nov 22 09:35:32 compute-0 podman[367832]: 2025-11-22 09:35:32.845158257 +0000 UTC m=+0.041744099 container died fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:35:32 compute-0 ceph-mon[75021]: pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Nov 22 09:35:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/673530446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.879 253665 DEBUG nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.879 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:32 compute-0 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 WARNING nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.
Nov 22 09:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388-merged.mount: Deactivated successfully.
Nov 22 09:35:33 compute-0 podman[367832]: 2025-11-22 09:35:33.134450022 +0000 UTC m=+0.331035844 container remove fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:35:33 compute-0 systemd[1]: libpod-conmon-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope: Deactivated successfully.
Nov 22 09:35:33 compute-0 sudo[367570]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.211 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3f8530ae-f429-4807-81ca-84d8f964a38c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.213 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804133.2111423, 3f8530ae-f429-4807-81ca-84d8f964a38c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.213 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Resumed (Lifecycle Event)
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.216 253665 DEBUG nova.compute.manager [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.222 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance rebooted successfully.
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.223 253665 DEBUG nova.compute.manager [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.230 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:33 compute-0 podman[367903]: 2025-11-22 09:35:33.253943963 +0000 UTC m=+0.098331646 container create 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.260 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.261 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804133.2123375, 3f8530ae-f429-4807-81ca-84d8f964a38c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.261 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Started (Lifecycle Event)
Nov 22 09:35:33 compute-0 podman[367903]: 2025-11-22 09:35:33.197868449 +0000 UTC m=+0.042256162 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.297 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:33 compute-0 nova_compute[253661]: 2025-11-22 09:35:33.302 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:33 compute-0 systemd[1]: Started libpod-conmon-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope.
Nov 22 09:35:33 compute-0 sudo[367914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:33 compute-0 sudo[367914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:33 compute-0 sudo[367914]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc8b44b411584b888a60508b08d5f7368fa8670e8b86e9a637d46fbcd929032/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:33 compute-0 podman[367903]: 2025-11-22 09:35:33.362733879 +0000 UTC m=+0.207121592 container init 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:35:33 compute-0 podman[367903]: 2025-11-22 09:35:33.370183704 +0000 UTC m=+0.214571387 container start 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:35:33 compute-0 sudo[367947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:35:33 compute-0 sudo[367947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:33 compute-0 sudo[367947]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:33 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : New worker (367975) forked
Nov 22 09:35:33 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : Loading success.
Nov 22 09:35:33 compute-0 sudo[367983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:33 compute-0 sudo[367983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:33 compute-0 sudo[367983]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:33 compute-0 sudo[368009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:35:33 compute-0 sudo[368009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 241 op/s
Nov 22 09:35:33 compute-0 podman[368072]: 2025-11-22 09:35:33.906377239 +0000 UTC m=+0.048311303 container create e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:35:33 compute-0 systemd[1]: Started libpod-conmon-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope.
Nov 22 09:35:33 compute-0 podman[368072]: 2025-11-22 09:35:33.883744406 +0000 UTC m=+0.025678490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:34 compute-0 podman[368072]: 2025-11-22 09:35:34.006647203 +0000 UTC m=+0.148581287 container init e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:35:34 compute-0 podman[368072]: 2025-11-22 09:35:34.016842716 +0000 UTC m=+0.158776780 container start e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:35:34 compute-0 podman[368072]: 2025-11-22 09:35:34.02103649 +0000 UTC m=+0.162970574 container attach e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:35:34 compute-0 systemd[1]: libpod-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope: Deactivated successfully.
Nov 22 09:35:34 compute-0 affectionate_diffie[368088]: 167 167
Nov 22 09:35:34 compute-0 conmon[368088]: conmon e5dcb6caf73c0e8dc033 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope/container/memory.events
Nov 22 09:35:34 compute-0 podman[368093]: 2025-11-22 09:35:34.071387802 +0000 UTC m=+0.029849514 container died e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c83346662e8366c9474f0dd84214aba2ff6037e8963afcc1fbc8869e35a84a7-merged.mount: Deactivated successfully.
Nov 22 09:35:34 compute-0 podman[368093]: 2025-11-22 09:35:34.116533235 +0000 UTC m=+0.074994927 container remove e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:35:34 compute-0 systemd[1]: libpod-conmon-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope: Deactivated successfully.
Nov 22 09:35:34 compute-0 podman[368114]: 2025-11-22 09:35:34.333866359 +0000 UTC m=+0.056134927 container create 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:35:34 compute-0 systemd[1]: Started libpod-conmon-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope.
Nov 22 09:35:34 compute-0 podman[368114]: 2025-11-22 09:35:34.313576955 +0000 UTC m=+0.035845543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:35:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:34 compute-0 podman[368114]: 2025-11-22 09:35:34.43161432 +0000 UTC m=+0.153882908 container init 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:35:34 compute-0 podman[368114]: 2025-11-22 09:35:34.438907071 +0000 UTC m=+0.161175629 container start 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:35:34 compute-0 podman[368114]: 2025-11-22 09:35:34.442696976 +0000 UTC m=+0.164965654 container attach 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:35:34 compute-0 ceph-mon[75021]: pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 241 op/s
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.331 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.382 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.405 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.406 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.421 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.422 253665 INFO nova.compute.claims [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:35:35 compute-0 great_black[368131]: {
Nov 22 09:35:35 compute-0 great_black[368131]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:35:35 compute-0 great_black[368131]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:35 compute-0 great_black[368131]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_id": 1,
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:35:35 compute-0 great_black[368131]:         "type": "bluestore"
Nov 22 09:35:35 compute-0 great_black[368131]:     },
Nov 22 09:35:35 compute-0 great_black[368131]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:35:35 compute-0 great_black[368131]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:35 compute-0 great_black[368131]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_id": 0,
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:35:35 compute-0 great_black[368131]:         "type": "bluestore"
Nov 22 09:35:35 compute-0 great_black[368131]:     },
Nov 22 09:35:35 compute-0 great_black[368131]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:35:35 compute-0 great_black[368131]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:35:35 compute-0 great_black[368131]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_id": 2,
Nov 22 09:35:35 compute-0 great_black[368131]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:35:35 compute-0 great_black[368131]:         "type": "bluestore"
Nov 22 09:35:35 compute-0 great_black[368131]:     }
Nov 22 09:35:35 compute-0 great_black[368131]: }
Nov 22 09:35:35 compute-0 systemd[1]: libpod-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Deactivated successfully.
Nov 22 09:35:35 compute-0 systemd[1]: libpod-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Consumed 1.040s CPU time.
Nov 22 09:35:35 compute-0 podman[368114]: 2025-11-22 09:35:35.480956357 +0000 UTC m=+1.203224915 container died 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.590 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.643 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.643 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 WARNING nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.646 253665 WARNING nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.
Nov 22 09:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c-merged.mount: Deactivated successfully.
Nov 22 09:35:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Nov 22 09:35:35 compute-0 podman[368114]: 2025-11-22 09:35:35.876122764 +0000 UTC m=+1.598391322 container remove 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:35:35 compute-0 systemd[1]: libpod-conmon-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Deactivated successfully.
Nov 22 09:35:35 compute-0 nova_compute[253661]: 2025-11-22 09:35:35.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:35 compute-0 sudo[368009]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:35:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.041 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.042 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0fb32c88-9313-4476-a403-407572b81c10 does not exist
Nov 22 09:35:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 962c79d1-09ec-4450-a7c5-4018752dc4c7 does not exist
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.060 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:35:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.104 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.112 253665 DEBUG nova.compute.provider_tree [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:36 compute-0 sudo[368197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:35:36 compute-0 sudo[368197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:36 compute-0 sudo[368197]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.132 253665 DEBUG nova.scheduler.client.report [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.155 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.156 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.160 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.160 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.168 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.168 253665 INFO nova.compute.claims [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:35:36 compute-0 sudo[368224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:35:36 compute-0 sudo[368224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:35:36 compute-0 sudo[368224]: pam_unix(sudo:session): session closed for user root
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.235 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.236 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.254 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.281 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.377 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.431 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.433 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.434 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating image(s)
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.461 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.496 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.524 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.530 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.637 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.640 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.641 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.641 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.674 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.678 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7b3234ab-db15-43a8-8093-469f6e62db91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296429346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.893 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.904 253665 DEBUG nova.compute.provider_tree [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.923 253665 DEBUG nova.scheduler.client.report [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.963 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:36 compute-0 nova_compute[253661]: 2025-11-22 09:35:36.964 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.027 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.027 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.052 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:35:37 compute-0 ceph-mon[75021]: pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Nov 22 09:35:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:35:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/69129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3296429346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.068 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.099 253665 DEBUG nova.policy [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.195 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.199 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.199 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating image(s)
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.223 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.251 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.292 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.297 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.346 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.350 253665 DEBUG nova.policy [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31c7a4aa8fa340d2881ddc3ed426b6db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a31947dfacfc450ba028c42968f103b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.372 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.373 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.374 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.382 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.383 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.383 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.384 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.405 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:37 compute-0 nova_compute[253661]: 2025-11-22 09:35:37.410 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 250 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.263 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7b3234ab-db15-43a8-8093-469f6e62db91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.328 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.439 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.440 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.456 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.524 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.526 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.534 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.535 253665 INFO nova.compute.claims [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.641 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Successfully created port: a0713d25-85db-4bb0-9be1-0cb5253aa017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:35:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.737 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:38 compute-0 nova_compute[253661]: 2025-11-22 09:35:38.784 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Successfully created port: 735988ac-a658-458d-975f-872cfa132420 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:35:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590950912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.246 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.252 253665 DEBUG nova.compute.provider_tree [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.278 253665 DEBUG nova.scheduler.client.report [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:39 compute-0 ceph-mon[75021]: pgmap v2206: 305 pgs: 305 active+clean; 250 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.299 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.300 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.337 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Successfully updated port: a0713d25-85db-4bb0-9be1-0cb5253aa017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.408 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.409 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.409 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.411 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.414 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.422 253665 DEBUG nova.compute.manager [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-changed-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.422 253665 DEBUG nova.compute.manager [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Refreshing instance network info cache due to event network-changed-a0713d25-85db-4bb0-9be1-0cb5253aa017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.423 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.430 253665 DEBUG nova.objects.instance [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.441 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.446 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.447 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Ensure instance console log exists: /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.447 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.448 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.450 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.459 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.494 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.534 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] resizing rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.600 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.601 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.602 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating image(s)
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.624 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.654 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.691 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.696 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.781 253665 DEBUG nova.objects.instance [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.794 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.794 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Ensure instance console log exists: /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.799 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.799 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.823 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:39 compute-0 nova_compute[253661]: 2025-11-22 09:35:39.828 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 269 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 218 op/s
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.064 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.098 253665 DEBUG nova.policy [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.243 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1590950912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.326 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.359 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Successfully updated port: 735988ac-a658-458d-975f-872cfa132420 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.439 253665 DEBUG nova.objects.instance [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.450 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.450 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Ensure instance console log exists: /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.470 253665 DEBUG nova.compute.manager [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.470 253665 DEBUG nova.compute.manager [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.471 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.780 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:40 compute-0 nova_compute[253661]: 2025-11-22 09:35:40.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.132 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully created port: 21b54230-3ad3-4b65-b752-5a1b0472844e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:35:41 compute-0 ceph-mon[75021]: pgmap v2207: 305 pgs: 305 active+clean; 269 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 218 op/s
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.381 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.408 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.409 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance network_info: |[{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.410 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.410 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Refreshing network info cache for port a0713d25-85db-4bb0-9be1-0cb5253aa017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.413 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start _get_guest_xml network_info=[{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.424 253665 WARNING nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.429 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.430 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.438 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.439 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.439 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.446 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 314 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Nov 22 09:35:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218277084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.943 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.975 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:41 compute-0 nova_compute[253661]: 2025-11-22 09:35:41.980 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/218277084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527630677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.457 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.459 253665 DEBUG nova.virt.libvirt.vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:37Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.459 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.460 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.461 253665 DEBUG nova.objects.instance [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.474 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <uuid>4bcc50c8-3188-45f6-aa14-994c5ab8b966</uuid>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <name>instance-0000006f</name>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersNegativeTestJSON-server-1440739346</nova:name>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:41</nova:creationTime>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <nova:port uuid="a0713d25-85db-4bb0-9be1-0cb5253aa017">
Nov 22 09:35:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="serial">4bcc50c8-3188-45f6-aa14-994c5ab8b966</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="uuid">4bcc50c8-3188-45f6-aa14-994c5ab8b966</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk">
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config">
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:75:6e:95"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <target dev="tapa0713d25-85"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/console.log" append="off"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:42 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:42 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Preparing to wait for external event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.virt.libvirt.vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:37Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.477 253665 DEBUG os_vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.478 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.478 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.482 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0713d25-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.482 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0713d25-85, col_values=(('external_ids', {'iface-id': 'a0713d25-85db-4bb0-9be1-0cb5253aa017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:6e:95', 'vm-uuid': '4bcc50c8-3188-45f6-aa14-994c5ab8b966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:42 compute-0 NetworkManager[48920]: <info>  [1763804142.4851] manager: (tapa0713d25-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.494 253665 INFO os_vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85')
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.559 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.572 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.573 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.573 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:75:6e:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.574 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Using config drive
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.616 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.625 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance network_info: |[{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.632 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start _get_guest_xml network_info=[{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.653 253665 WARNING nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.660 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.663 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.667 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.667 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.670 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.670 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.677 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097962855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.794 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.872 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.873 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.888 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.893 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.893 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.897 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:42 compute-0 nova_compute[253661]: 2025-11-22 09:35:42.897 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.023 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully updated port: 21b54230-3ad3-4b65-b752-5a1b0472844e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.119 253665 DEBUG nova.compute.manager [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.120 253665 DEBUG nova.compute.manager [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.120 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.166 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3224MB free_disk=59.84364700317383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.215 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:35:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006528993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.246 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.277 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.284 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:43 compute-0 ceph-mon[75021]: pgmap v2208: 305 pgs: 305 active+clean; 314 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Nov 22 09:35:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2527630677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3097962855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2006528993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3f8530ae-f429-4807-81ca-84d8f964a38c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2a866674-0c27-4cfc-89f2-dfe8e9768900 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance cf5e117a-f203-4c8f-b795-01fb355ca5e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7b3234ab-db15-43a8-8093-469f6e62db91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4bcc50c8-3188-45f6-aa14-994c5ab8b966 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.348 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating config drive at /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.353 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyii9gv5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.508 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyii9gv5m" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.538 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.542 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.596 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.756 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.757 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deleting local config drive /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config because it was imported into RBD.
Nov 22 09:35:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326226583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.812 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.813 253665 DEBUG nova.virt.libvirt.vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.814 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.815 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.816 253665 DEBUG nova.objects.instance [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:43 compute-0 kernel: tapa0713d25-85: entered promiscuous mode
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 NetworkManager[48920]: <info>  [1763804143.8281] manager: (tapa0713d25-85): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Nov 22 09:35:43 compute-0 ovn_controller[152872]: 2025-11-22T09:35:43Z|01103|binding|INFO|Claiming lport a0713d25-85db-4bb0-9be1-0cb5253aa017 for this chassis.
Nov 22 09:35:43 compute-0 ovn_controller[152872]: 2025-11-22T09:35:43Z|01104|binding|INFO|a0713d25-85db-4bb0-9be1-0cb5253aa017: Claiming fa:16:3e:75:6e:95 10.100.0.9
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.832 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <uuid>7b3234ab-db15-43a8-8093-469f6e62db91</uuid>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <name>instance-0000006e</name>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-340448396</nova:name>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:42</nova:creationTime>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <nova:port uuid="735988ac-a658-458d-975f-872cfa132420">
Nov 22 09:35:43 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0e:5306" ipVersion="6"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="serial">7b3234ab-db15-43a8-8093-469f6e62db91</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="uuid">7b3234ab-db15-43a8-8093-469f6e62db91</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7b3234ab-db15-43a8-8093-469f6e62db91_disk">
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7b3234ab-db15-43a8-8093-469f6e62db91_disk.config">
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:43 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:0e:53:06"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <target dev="tap735988ac-a6"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/console.log" append="off"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:43 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:43 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:43 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:43 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:43 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.832 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Preparing to wait for external event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.836 253665 DEBUG nova.virt.libvirt.vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.836 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.838 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:6e:95 10.100.0.9'], port_security=['fa:16:3e:75:6e:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4bcc50c8-3188-45f6-aa14-994c5ab8b966', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a0713d25-85db-4bb0-9be1-0cb5253aa017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.838 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.839 253665 DEBUG os_vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.839 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a0713d25-85db-4bb0-9be1-0cb5253aa017 in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.841 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap735988ac-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap735988ac-a6, col_values=(('external_ids', {'iface-id': '735988ac-a658-458d-975f-872cfa132420', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:53:06', 'vm-uuid': '7b3234ab-db15-43a8-8093-469f6e62db91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:43 compute-0 NetworkManager[48920]: <info>  [1763804143.8511] manager: (tap735988ac-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Nov 22 09:35:43 compute-0 ovn_controller[152872]: 2025-11-22T09:35:43Z|01105|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 ovn-installed in OVS
Nov 22 09:35:43 compute-0 ovn_controller[152872]: 2025-11-22T09:35:43Z|01106|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 up in Southbound
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 systemd-udevd[369033]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.868 253665 INFO os_vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6')
Nov 22 09:35:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.1 MiB/s wr, 238 op/s
Nov 22 09:35:43 compute-0 systemd-machined[215941]: New machine qemu-138-instance-0000006f.
Nov 22 09:35:43 compute-0 NetworkManager[48920]: <info>  [1763804143.8806] device (tapa0713d25-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:43 compute-0 NetworkManager[48920]: <info>  [1763804143.8813] device (tapa0713d25-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.884 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[90d71db5-0af1-4f6b-9143-e46d7ce78e61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:43 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006f.
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.926 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa3e637-5f3a-42e9-bde6-6f12f075ccde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.932 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ffcab0-ef3b-423c-ac2f-7f6f30d53b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.949 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.949 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.950 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:0e:53:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.950 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Using config drive
Nov 22 09:35:43 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.982 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[35b7fd92-f2e9-4df3-be16-308965acf8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:43 compute-0 nova_compute[253661]: 2025-11-22 09:35:43.982 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ddbc5-9358-467d-964d-1a3da2f1d82e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369067, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.036 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc02c109-f1a2-463c-8160-336e51affda3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698058, 'tstamp': 698058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369068, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698062, 'tstamp': 698062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369068, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.039 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.041 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.048 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.102 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updated VIF entry in instance network info cache for port a0713d25-85db-4bb0-9be1-0cb5253aa017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.102 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.126 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672916499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.173 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.181 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.197 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.268 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.268 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/326226583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2672916499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.421 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating config drive at /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.426 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv3xz0qg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.587 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv3xz0qg" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.623 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.634 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.682 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804144.6624904, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.683 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Started (Lifecycle Event)
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.702 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.709 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804144.662754, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.709 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Paused (Lifecycle Event)
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.729 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.746 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.831 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.831 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deleting local config drive /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config because it was imported into RBD.
Nov 22 09:35:44 compute-0 kernel: tap735988ac-a6: entered promiscuous mode
Nov 22 09:35:44 compute-0 systemd-udevd[369038]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:44 compute-0 NetworkManager[48920]: <info>  [1763804144.8988] manager: (tap735988ac-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|01107|binding|INFO|Claiming lport 735988ac-a658-458d-975f-872cfa132420 for this chassis.
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|01108|binding|INFO|735988ac-a658-458d-975f-872cfa132420: Claiming fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.914 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], port_security=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe0e:5306/64', 'neutron:device_id': '7b3234ab-db15-43a8-8093-469f6e62db91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=735988ac-a658-458d-975f-872cfa132420) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.916 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 735988ac-a658-458d-975f-872cfa132420 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 bound to our chassis
Nov 22 09:35:44 compute-0 NetworkManager[48920]: <info>  [1763804144.9195] device (tap735988ac-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:44 compute-0 NetworkManager[48920]: <info>  [1763804144.9206] device (tap735988ac-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.920 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|01109|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 ovn-installed in OVS
Nov 22 09:35:44 compute-0 ovn_controller[152872]: 2025-11-22T09:35:44Z|01110|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 up in Southbound
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.941 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48a7068f-291a-4c01-9031-594db39e4164]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:44 compute-0 systemd-machined[215941]: New machine qemu-139-instance-0000006e.
Nov 22 09:35:44 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.964 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e801eb-0191-49b1-9ccd-fd38d9001943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.987 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.988 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance network_info: |[{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.988 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.989 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.989 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7382bfb6-6ccd-4a91-9684-f8d68a7c997c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:44 compute-0 nova_compute[253661]: 2025-11-22 09:35:44.992 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start _get_guest_xml network_info=[{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.016 253665 WARNING nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.022 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.023 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.027 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.027 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.030 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.032 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a8a370-cabc-4555-a662-6c8dfd3b6800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.058 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2de7c358-78c2-475f-9572-39bb8e02762c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369181, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d5a5e-65e1-4f02-91b4-c64f35e7d7cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696257, 'tstamp': 696257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369183, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696261, 'tstamp': 696261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369183, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.090 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.094 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:45 compute-0 ceph-mon[75021]: pgmap v2209: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.1 MiB/s wr, 238 op/s
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.381 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.3802052, 7b3234ab-db15-43a8-8093-469f6e62db91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.381 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Started (Lifecycle Event)
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.407 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.412 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.3804626, 7b3234ab-db15-43a8-8093-469f6e62db91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.412 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Paused (Lifecycle Event)
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.429 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.435 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Processing event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 WARNING nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received unexpected event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with vm_state building and task_state spawning.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.485 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.490 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.495 253665 INFO nova.virt.libvirt.driver [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance spawned successfully.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.495 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:35:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099570431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.533 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.563 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.569 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.628 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.629 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.4902775, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Resumed (Lifecycle Event)
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.636 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.637 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.649 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.649 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.650 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.651 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.651 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.652 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.661 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.662 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.666 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.711 253665 INFO nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 8.51 seconds to spawn the instance on the hypervisor.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.711 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.774 253665 INFO nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 9.65 seconds to build instance.
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.789 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Nov 22 09:35:45 compute-0 nova_compute[253661]: 2025-11-22 09:35:45.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:35:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412478433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.068 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.069 253665 DEBUG nova.virt.libvirt.vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.070 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.071 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.072 253665 DEBUG nova.objects.instance [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.086 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <name>instance-00000070</name>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:35:45</nova:creationTime>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:35:46 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="serial">c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="uuid">c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk">
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config">
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:35:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ad:ee:9e"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <target dev="tap21b54230-3a"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log" append="off"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:35:46 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:35:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:35:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:35:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:35:46 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Preparing to wait for external event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.088 253665 DEBUG nova.virt.libvirt.vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.088 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.089 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.089 253665 DEBUG os_vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.090 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.091 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21b54230-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.094 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21b54230-3a, col_values=(('external_ids', {'iface-id': '21b54230-3ad3-4b65-b752-5a1b0472844e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:ee:9e', 'vm-uuid': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 NetworkManager[48920]: <info>  [1763804146.0965] manager: (tap21b54230-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.103 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.103 253665 INFO os_vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a')
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.171 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.172 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.172 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ad:ee:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.173 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Using config drive
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.197 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.269 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.269 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1099570431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3412478433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.486 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating config drive at /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.490 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl9cwtdx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.539 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.541 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.554 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.644 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl9cwtdx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:46 compute-0 ovn_controller[152872]: 2025-11-22T09:35:46Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.682 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.687 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.878 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.879 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deleting local config drive /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config because it was imported into RBD.
Nov 22 09:35:46 compute-0 kernel: tap21b54230-3a: entered promiscuous mode
Nov 22 09:35:46 compute-0 NetworkManager[48920]: <info>  [1763804146.9465] manager: (tap21b54230-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Nov 22 09:35:46 compute-0 nova_compute[253661]: 2025-11-22 09:35:46.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:46 compute-0 ovn_controller[152872]: 2025-11-22T09:35:46Z|01111|binding|INFO|Claiming lport 21b54230-3ad3-4b65-b752-5a1b0472844e for this chassis.
Nov 22 09:35:46 compute-0 ovn_controller[152872]: 2025-11-22T09:35:46Z|01112|binding|INFO|21b54230-3ad3-4b65-b752-5a1b0472844e: Claiming fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.008 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ee:9e 10.100.0.5'], port_security=['fa:16:3e:ad:ee:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '624d1a5b-7d33-4814-8a02-c8e1e513249a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a19e22c3-d4f6-4134-81df-8e8895569f77, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=21b54230-3ad3-4b65-b752-5a1b0472844e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.010 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 21b54230-3ad3-4b65-b752-5a1b0472844e in datapath 5c1e456e-4030-4169-b20f-3aec7a20c24e bound to our chassis
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.012 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c1e456e-4030-4169-b20f-3aec7a20c24e
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01113|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e up in Southbound
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01114|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e ovn-installed in OVS
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 systemd-udevd[369360]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.037 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[278be5e0-0997-429f-9621-631a9131b610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.048 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c1e456e-41 in ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:35:47 compute-0 systemd-machined[215941]: New machine qemu-140-instance-00000070.
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.0511] device (tap21b54230-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.0528] device (tap21b54230-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.051 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c1e456e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5858e3c1-fe70-4fcd-ba15-670ce9bf0204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d1bc8b-24f3-4c98-8dcd-2a2e0354c9fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-00000070.
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.070 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[73f4cee3-ac33-4db5-90a3-ca6151fa274d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.101 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35e741d0-107e-439c-9ce8-bc6654c2d9ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.141 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d4ce21-6494-452c-b049-610418abd22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.1557] manager: (tap5c1e456e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/463)
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[840604b6-9b13-4198-8751-8107de0f777e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.197 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c007a-e6cd-432e-aa99-bcfdbf85b3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.202 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5ca214-60e4-496d-8c00-aa2ab20c32bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.2317] device (tap5c1e456e-40): carrier: link connected
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.240 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ae2e81-504c-44aa-9d29-37589f40222c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.262 253665 DEBUG nova.compute.manager [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG nova.compute.manager [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Processing event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7fb8bc-0216-472c-af77-0b4ea1ccfe93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c1e456e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c2:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699796, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369393, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81cde3f2-28db-4aed-8daa-cf6bcea0880b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:c2f5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699796, 'tstamp': 699796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369394, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.313 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[427ab25f-b205-4fd4-a850-16614fb49651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c1e456e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c2:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699796, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369395, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[764fbcf6-b20d-4b86-8354-50828edd6737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ceph-mon[75021]: pgmap v2210: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.454 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[373cc13f-5c09-4296-a83e-1c857c4e297f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c1e456e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.459 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.459 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c1e456e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.4624] manager: (tap5c1e456e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Nov 22 09:35:47 compute-0 kernel: tap5c1e456e-40: entered promiscuous mode
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c1e456e-40, col_values=(('external_ids', {'iface-id': '3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01115|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.499 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.501 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[234031e9-a937-49b8-9c99-53efc63cdb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.502 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-5c1e456e-4030-4169-b20f-3aec7a20c24e
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 5c1e456e-4030-4169-b20f-3aec7a20c24e
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.504 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'env', 'PROCESS_TAG=haproxy-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c1e456e-4030-4169-b20f-3aec7a20c24e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.568 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.570 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Processing event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 WARNING nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received unexpected event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 for instance with vm_state building and task_state spawning.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.573 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.578 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.5779262, 7b3234ab-db15-43a8-8093-469f6e62db91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.578 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Resumed (Lifecycle Event)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.581 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.585 253665 INFO nova.virt.libvirt.driver [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance spawned successfully.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.586 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.606 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.610 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.618 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.620 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.627 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.628 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.630 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.637 253665 INFO nova.virt.libvirt.driver [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance spawned successfully.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.637 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.652 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.653 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6116383, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.653 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Started (Lifecycle Event)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.662 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.662 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.664 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.695 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.699 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.729 253665 INFO nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 11.30 seconds to spawn the instance on the hypervisor.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.730 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6121569, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Paused (Lifecycle Event)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.746 253665 INFO nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 8.15 seconds to spawn the instance on the hypervisor.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.747 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.775 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.783 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6168694, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.784 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Resumed (Lifecycle Event)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.834 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.835 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.836 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.837 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.837 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.838 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.840 253665 INFO nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Terminating instance
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.843 253665 DEBUG nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.862 253665 INFO nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 12.49 seconds to build instance.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.866 253665 INFO nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 9.37 seconds to build instance.
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.872 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:35:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 412 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 275 op/s
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.898 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.901 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:47 compute-0 kernel: tapa0713d25-85 (unregistering): left promiscuous mode
Nov 22 09:35:47 compute-0 NetworkManager[48920]: <info>  [1763804147.9112] device (tapa0713d25-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01116|binding|INFO|Releasing lport a0713d25-85db-4bb0-9be1-0cb5253aa017 from this chassis (sb_readonly=0)
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01117|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 down in Southbound
Nov 22 09:35:47 compute-0 ovn_controller[152872]: 2025-11-22T09:35:47Z|01118|binding|INFO|Removing iface tapa0713d25-85 ovn-installed in OVS
Nov 22 09:35:47 compute-0 podman[369468]: 2025-11-22 09:35:47.9341241 +0000 UTC m=+0.071974812 container create 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.937 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:6e:95 10.100.0.9'], port_security=['fa:16:3e:75:6e:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4bcc50c8-3188-45f6-aa14-994c5ab8b966', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a0713d25-85db-4bb0-9be1-0cb5253aa017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:47 compute-0 nova_compute[253661]: 2025-11-22 09:35:47.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:47 compute-0 systemd[1]: Started libpod-conmon-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope.
Nov 22 09:35:47 compute-0 podman[369468]: 2025-11-22 09:35:47.897648043 +0000 UTC m=+0.035498785 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:35:47 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 22 09:35:47 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Consumed 3.099s CPU time.
Nov 22 09:35:47 compute-0 systemd-machined[215941]: Machine qemu-138-instance-0000006f terminated.
Nov 22 09:35:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f47abe13203d442e803e0c61cc2d5415b1d609f3c4c552ce2b7214c39c4e00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:35:48 compute-0 podman[369468]: 2025-11-22 09:35:48.047029537 +0000 UTC m=+0.184880259 container init 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:35:48 compute-0 podman[369468]: 2025-11-22 09:35:48.056018751 +0000 UTC m=+0.193869463 container start 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:35:48 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : New worker (369493) forked
Nov 22 09:35:48 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : Loading success.
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.150 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a0713d25-85db-4bb0-9be1-0cb5253aa017 in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.152 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.166 253665 INFO nova.virt.libvirt.driver [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance destroyed successfully.
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.168 253665 DEBUG nova.objects.instance [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82513854-c253-43a7-b7dd-5b3118895529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.178 253665 DEBUG nova.virt.libvirt.vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:45Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.180 253665 DEBUG nova.network.os_vif_util [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.181 253665 DEBUG nova.network.os_vif_util [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.181 253665 DEBUG os_vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.184 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.185 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0713d25-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.188 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.199 253665 INFO os_vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85')
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.218 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[260539db-4db0-440e-b019-cdfd833dbdc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[583b3de0-8174-4dab-bc18-0264c524b9ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.278 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a05ed243-2962-4dc7-bd15-f388757adbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.311 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83c14402-b2bd-4c12-92a7-7ab3fc18c821]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369534, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fd0b03-8e31-4a9c-9596-2b1aca177468]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698058, 'tstamp': 698058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369535, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698062, 'tstamp': 698062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369535, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:48 compute-0 nova_compute[253661]: 2025-11-22 09:35:48.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.339 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:35:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.010 253665 INFO nova.virt.libvirt.driver [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deleting instance files /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966_del
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.012 253665 INFO nova.virt.libvirt.driver [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deletion of /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966_del complete
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.076 253665 INFO nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 1.23 seconds to destroy the instance on the hypervisor.
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.077 253665 DEBUG oslo.service.loopingcall [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.078 253665 DEBUG nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.078 253665 DEBUG nova.network.neutron [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.352 253665 DEBUG nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.353 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 WARNING nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with vm_state active and task_state None.
Nov 22 09:35:49 compute-0 podman[369537]: 2025-11-22 09:35:49.378664913 +0000 UTC m=+0.068438052 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:35:49 compute-0 podman[369538]: 2025-11-22 09:35:49.391280997 +0000 UTC m=+0.080906402 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:35:49 compute-0 ceph-mon[75021]: pgmap v2211: 305 pgs: 305 active+clean; 412 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 275 op/s
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.688 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.689 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.690 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.690 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.691 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.692 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.693 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.693 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.694 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.695 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.696 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.697 253665 WARNING nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received unexpected event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with vm_state active and task_state deleting.
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.877 253665 DEBUG nova.network.neutron [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 419 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 328 op/s
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.906 253665 INFO nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 0.83 seconds to deallocate network for instance.
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.970 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:49 compute-0 nova_compute[253661]: 2025-11-22 09:35:49.971 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.117 253665 DEBUG oslo_concurrency.processutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:35:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141246286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.655 253665 DEBUG oslo_concurrency.processutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.661 253665 DEBUG nova.compute.provider_tree [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.677 253665 DEBUG nova.scheduler.client.report [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.697 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.743 253665 INFO nova.scheduler.client.report [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance 4bcc50c8-3188-45f6-aa14-994c5ab8b966
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.850 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:50 compute-0 nova_compute[253661]: 2025-11-22 09:35:50.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:51 compute-0 ceph-mon[75021]: pgmap v2212: 305 pgs: 305 active+clean; 419 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 328 op/s
Nov 22 09:35:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3141246286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.456 253665 DEBUG nova.compute.manager [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.457 253665 DEBUG nova.compute.manager [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.458 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.458 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.459 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.787 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-deleted-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.788 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.788 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:51 compute-0 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 406 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 318 op/s
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.094 253665 INFO nova.compute.manager [None req-1e6ff597-7c15-4aa2-9c00-57f066f5af60 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.101 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:35:52
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:35:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.993 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.996 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.997 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.998 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:52 compute-0 nova_compute[253661]: 2025-11-22 09:35:52.999 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.001 253665 INFO nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Terminating instance
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.004 253665 DEBUG nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:35:53 compute-0 kernel: tap8da41f38-38 (unregistering): left promiscuous mode
Nov 22 09:35:53 compute-0 NetworkManager[48920]: <info>  [1763804153.0626] device (tap8da41f38-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 ovn_controller[152872]: 2025-11-22T09:35:53Z|01119|binding|INFO|Releasing lport 8da41f38-3812-4494-9cab-c4854772a569 from this chassis (sb_readonly=0)
Nov 22 09:35:53 compute-0 ovn_controller[152872]: 2025-11-22T09:35:53Z|01120|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 down in Southbound
Nov 22 09:35:53 compute-0 ovn_controller[152872]: 2025-11-22T09:35:53Z|01121|binding|INFO|Removing iface tap8da41f38-38 ovn-installed in OVS
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.079 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc unbound from our chassis
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.082 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20570e02-4f3c-425d-9564-924b275d70dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2a7eb7-6b65-4e55-8a72-2bfa6bb908a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.084 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace which is not needed anymore
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.103 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 22 09:35:53 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Consumed 14.480s CPU time.
Nov 22 09:35:53 compute-0 systemd-machined[215941]: Machine qemu-137-instance-0000006b terminated.
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 ceph-mon[75021]: pgmap v2213: 305 pgs: 305 active+clean; 406 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 318 op/s
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.541 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.541 253665 DEBUG nova.objects.instance [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.558 253665 DEBUG nova.virt.libvirt.vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:33Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.558 253665 DEBUG nova.network.os_vif_util [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.559 253665 DEBUG nova.network.os_vif_util [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.560 253665 DEBUG os_vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.563 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8da41f38-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.573 253665 INFO os_vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : haproxy version is 2.8.14-c23fe91
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : path to executable is /usr/sbin/haproxy
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : Exiting Master process...
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : Exiting Master process...
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [ALERT]    (367971) : Current worker (367975) exited with code 143 (Terminated)
Nov 22 09:35:53 compute-0 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : All workers exited. Exiting... (0)
Nov 22 09:35:53 compute-0 systemd[1]: libpod-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope: Deactivated successfully.
Nov 22 09:35:53 compute-0 podman[369621]: 2025-11-22 09:35:53.587513581 +0000 UTC m=+0.054508335 container died 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d-userdata-shm.mount: Deactivated successfully.
Nov 22 09:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dc8b44b411584b888a60508b08d5f7368fa8670e8b86e9a637d46fbcd929032-merged.mount: Deactivated successfully.
Nov 22 09:35:53 compute-0 podman[369621]: 2025-11-22 09:35:53.641528905 +0000 UTC m=+0.108523649 container cleanup 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:35:53 compute-0 systemd[1]: libpod-conmon-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope: Deactivated successfully.
Nov 22 09:35:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:53 compute-0 podman[369676]: 2025-11-22 09:35:53.73382341 +0000 UTC m=+0.066553056 container remove 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.744 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0fa7a1-0814-45ee-8780-29aa2747cc0d]: (4, ('Sat Nov 22 09:35:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d)\n8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d\nSat Nov 22 09:35:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d)\n8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d524829-2f26-41fb-83d4-ce8ffb46c6d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.747 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.748 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 kernel: tap20570e02-40: left promiscuous mode
Nov 22 09:35:53 compute-0 nova_compute[253661]: 2025-11-22 09:35:53.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.782 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50427d23-7caf-41a0-ba8b-b2e115a84ed9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.797 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03153a1e-47e9-4bf6-9f4d-33297d53f29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1124b8a-9c84-421e-b8d1-c4d92a9f6af1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.817 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb16df4d-e3f0-4878-8b46-bc06305b9a52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698309, 'reachable_time': 29226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369689, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d20570e02\x2d4f3c\x2d425d\x2d9564\x2d924b275d70dc.mount: Deactivated successfully.
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.822 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:35:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.823 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e8492c93-3e6f-4cae-aa37-886220933560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:35:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.2 MiB/s wr, 397 op/s
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.038 253665 INFO nova.virt.libvirt.driver [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deleting instance files /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c_del
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.039 253665 INFO nova.virt.libvirt.driver [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deletion of /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c_del complete
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.116 253665 INFO nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 1.11 seconds to destroy the instance on the hypervisor.
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.117 253665 DEBUG oslo.service.loopingcall [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.118 253665 DEBUG nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:35:54 compute-0 nova_compute[253661]: 2025-11-22 09:35:54.118 253665 DEBUG nova.network.neutron [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.028 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.029 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.052 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.402 253665 DEBUG nova.compute.manager [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.402 253665 DEBUG nova.compute.manager [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:35:55 compute-0 podman[369691]: 2025-11-22 09:35:55.415650295 +0000 UTC m=+0.100196262 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:35:55 compute-0 ceph-mon[75021]: pgmap v2214: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.2 MiB/s wr, 397 op/s
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.541 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.542 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.554 253665 INFO nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Port 8da41f38-3812-4494-9cab-c4854772a569 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.555 253665 DEBUG nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.591 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.622 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.633 253665 DEBUG nova.network.neutron [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.646 253665 INFO nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 1.53 seconds to deallocate network for instance.
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.697 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.698 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.811 253665 DEBUG oslo_concurrency.processutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:35:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 329 op/s
Nov 22 09:35:55 compute-0 nova_compute[253661]: 2025-11-22 09:35:55.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:35:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981033853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.304 253665 DEBUG oslo_concurrency.processutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.312 253665 DEBUG nova.compute.provider_tree [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.351 253665 DEBUG nova.scheduler.client.report [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.380 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.423 253665 INFO nova.scheduler.client.report [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 3f8530ae-f429-4807-81ca-84d8f964a38c
Nov 22 09:35:56 compute-0 nova_compute[253661]: 2025-11-22 09:35:56.504 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1981033853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:35:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:35:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:35:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:35:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:35:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:35:57 compute-0 ceph-mon[75021]: pgmap v2215: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 329 op/s
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.543 253665 DEBUG nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.545 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.545 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 DEBUG nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 WARNING nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state deleted and task_state None.
Nov 22 09:35:57 compute-0 nova_compute[253661]: 2025-11-22 09:35:57.548 253665 DEBUG nova.compute.manager [req-4ade8b58-ed2d-46b6-9ac4-e482963d8c40 req-cd0433eb-2199-4efc-bf94-b2054b1f50fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-deleted-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:35:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 335 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 333 op/s
Nov 22 09:35:58 compute-0 nova_compute[253661]: 2025-11-22 09:35:58.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:35:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:35:59 compute-0 ceph-mon[75021]: pgmap v2216: 305 pgs: 305 active+clean; 335 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 333 op/s
Nov 22 09:35:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 293 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 95 KiB/s wr, 255 op/s
Nov 22 09:36:00 compute-0 ovn_controller[152872]: 2025-11-22T09:36:00Z|01122|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 09:36:00 compute-0 ovn_controller[152872]: 2025-11-22T09:36:00Z|01123|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:36:00 compute-0 ovn_controller[152872]: 2025-11-22T09:36:00Z|01124|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 09:36:00 compute-0 nova_compute[253661]: 2025-11-22 09:36:00.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:00 compute-0 nova_compute[253661]: 2025-11-22 09:36:00.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:01 compute-0 ceph-mon[75021]: pgmap v2217: 305 pgs: 305 active+clean; 293 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 95 KiB/s wr, 255 op/s
Nov 22 09:36:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 299 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 948 KiB/s wr, 202 op/s
Nov 22 09:36:02 compute-0 ovn_controller[152872]: 2025-11-22T09:36:02Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 09:36:02 compute-0 ovn_controller[152872]: 2025-11-22T09:36:02Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 09:36:02 compute-0 ovn_controller[152872]: 2025-11-22T09:36:02Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:53:06 10.100.0.13
Nov 22 09:36:02 compute-0 ovn_controller[152872]: 2025-11-22T09:36:02Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:53:06 10.100.0.13
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023904382976818388 of space, bias 1.0, pg target 0.7171314893045516 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:36:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:36:03 compute-0 nova_compute[253661]: 2025-11-22 09:36:03.162 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804148.161207, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:03 compute-0 nova_compute[253661]: 2025-11-22 09:36:03.163 253665 INFO nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Stopped (Lifecycle Event)
Nov 22 09:36:03 compute-0 nova_compute[253661]: 2025-11-22 09:36:03.183 253665 DEBUG nova.compute.manager [None req-fd152ee9-fec9-4aaa-98d4-e12aee3feb3e - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:03 compute-0 nova_compute[253661]: 2025-11-22 09:36:03.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:03 compute-0 ceph-mon[75021]: pgmap v2218: 305 pgs: 305 active+clean; 299 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 948 KiB/s wr, 202 op/s
Nov 22 09:36:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Nov 22 09:36:04 compute-0 nova_compute[253661]: 2025-11-22 09:36:04.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:05 compute-0 ceph-mon[75021]: pgmap v2219: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Nov 22 09:36:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 09:36:05 compute-0 nova_compute[253661]: 2025-11-22 09:36:05.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:07 compute-0 nova_compute[253661]: 2025-11-22 09:36:07.343 253665 INFO nova.compute.manager [None req-17593a9d-9395-471a-bfb6-36afded676aa 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Get console output
Nov 22 09:36:07 compute-0 nova_compute[253661]: 2025-11-22 09:36:07.354 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:36:07 compute-0 ceph-mon[75021]: pgmap v2220: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 09:36:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 4.3 MiB/s wr, 149 op/s
Nov 22 09:36:08 compute-0 nova_compute[253661]: 2025-11-22 09:36:08.538 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804153.5359373, 3f8530ae-f429-4807-81ca-84d8f964a38c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:08 compute-0 nova_compute[253661]: 2025-11-22 09:36:08.539 253665 INFO nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Stopped (Lifecycle Event)
Nov 22 09:36:08 compute-0 nova_compute[253661]: 2025-11-22 09:36:08.572 253665 DEBUG nova.compute.manager [None req-cb77386b-ce49-4bc6-8e3d-414260be3ed5 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:08 compute-0 nova_compute[253661]: 2025-11-22 09:36:08.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:09 compute-0 ceph-mon[75021]: pgmap v2221: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 4.3 MiB/s wr, 149 op/s
Nov 22 09:36:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 594 KiB/s rd, 4.3 MiB/s wr, 144 op/s
Nov 22 09:36:10 compute-0 nova_compute[253661]: 2025-11-22 09:36:10.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:10 compute-0 nova_compute[253661]: 2025-11-22 09:36:10.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:11 compute-0 ceph-mon[75021]: pgmap v2222: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 594 KiB/s rd, 4.3 MiB/s wr, 144 op/s
Nov 22 09:36:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 22 09:36:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:36:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:36:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:36:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:36:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:36:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:36:13 compute-0 nova_compute[253661]: 2025-11-22 09:36:13.148 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:13 compute-0 nova_compute[253661]: 2025-11-22 09:36:13.149 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:13 compute-0 nova_compute[253661]: 2025-11-22 09:36:13.150 253665 DEBUG nova.objects.instance [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:13 compute-0 nova_compute[253661]: 2025-11-22 09:36:13.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:13 compute-0 ceph-mon[75021]: pgmap v2223: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 22 09:36:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.476 253665 DEBUG nova.objects.instance [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.650 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.815 253665 DEBUG nova.policy [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.922 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.923 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:14 compute-0 nova_compute[253661]: 2025-11-22 09:36:14.943 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.043 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.044 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.055 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.056 253665 INFO nova.compute.claims [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.422 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.560 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully created port: ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:36:15 compute-0 ceph-mon[75021]: pgmap v2224: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Nov 22 09:36:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 362 KiB/s wr, 25 op/s
Nov 22 09:36:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232426750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.970 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:15 compute-0 nova_compute[253661]: 2025-11-22 09:36:15.981 253665 DEBUG nova.compute.provider_tree [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.001 253665 DEBUG nova.scheduler.client.report [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.029 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.031 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.093 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.094 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.109 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.126 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.225 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.227 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.228 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating image(s)
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.263 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.298 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.328 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.334 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.382 253665 DEBUG nova.policy [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.386 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully updated port: ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.398 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.399 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.399 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.420 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.421 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.422 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.422 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.446 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:16 compute-0 nova_compute[253661]: 2025-11-22 09:36:16.451 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1232426750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.246 253665 DEBUG nova.compute.manager [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.247 253665 DEBUG nova.compute.manager [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.247 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.751 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Successfully created port: f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:36:17 compute-0 ceph-mon[75021]: pgmap v2225: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 362 KiB/s wr, 25 op/s
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.879 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 385 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Nov 22 09:36:17 compute-0 nova_compute[253661]: 2025-11-22 09:36:17.945 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.271 253665 DEBUG nova.objects.instance [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.290 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.291 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Ensure instance console log exists: /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:18 compute-0 ceph-mon[75021]: pgmap v2226: 305 pgs: 305 active+clean; 385 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.969 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Successfully updated port: f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.986 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.987 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:18 compute-0 nova_compute[253661]: 2025-11-22 09:36:18.988 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.042 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.073 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.075 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.075 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.080 253665 DEBUG nova.virt.libvirt.vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.081 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.082 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.082 253665 DEBUG os_vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.085 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.094 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.094 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef638c6f-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.095 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef638c6f-1e, col_values=(('external_ids', {'iface-id': 'ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:7d:ea', 'vm-uuid': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.0988] manager: (tapef638c6f-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.107 253665 INFO os_vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.108 253665 DEBUG nova.virt.libvirt.vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.108 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.109 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.113 253665 DEBUG nova.virt.libvirt.guest [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <target dev="tapef638c6f-1e"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]: </interface>
Nov 22 09:36:19 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.1270] manager: (tapef638c6f-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Nov 22 09:36:19 compute-0 kernel: tapef638c6f-1e: entered promiscuous mode
Nov 22 09:36:19 compute-0 ovn_controller[152872]: 2025-11-22T09:36:19Z|01125|binding|INFO|Claiming lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for this chassis.
Nov 22 09:36:19 compute-0 ovn_controller[152872]: 2025-11-22T09:36:19Z|01126|binding|INFO|ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3: Claiming fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.159 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7d:ea 10.100.0.18'], port_security=['fa:16:3e:ec:7d:ea 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1ca4427-d611-4bfe-814f-5bb5cca8ded7, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:19 compute-0 systemd-udevd[369934]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 in datapath f987bf48-fed4-4a9a-a268-76d80e7b77fd bound to our chassis
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.163 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f987bf48-fed4-4a9a-a268-76d80e7b77fd
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.1770] device (tapef638c6f-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.1788] device (tapef638c6f-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:36:19 compute-0 ovn_controller[152872]: 2025-11-22T09:36:19Z|01127|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 ovn-installed in OVS
Nov 22 09:36:19 compute-0 ovn_controller[152872]: 2025-11-22T09:36:19Z|01128|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 up in Southbound
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f713388f-12f3-4fc5-9ee2-c7a41161ebb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.187 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf987bf48-f1 in ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf987bf48-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5af76da1-d0b6-4056-b378-acb2d56b830f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[110aefc9-f8f9-4e51-808b-6f54bfc3f29a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[50aae8da-17f3-44b1-acf9-ac459122e542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ad:ee:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ec:7d:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.230 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27fad550-6939-4731-b60b-1eb4983f09ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.249 253665 DEBUG nova.virt.libvirt.guest [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:19 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 09:36:19 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 09:36:19 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:19 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:19 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:19 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.273 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.285 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f51c01fe-aab1-4713-a048-a133ba204b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.2929] manager: (tapf987bf48-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/467)
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c554a74-c749-40b3-9a43-873cda1d2d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.333 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0df48496-2110-4552-a4a8-ee9d34d74064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.337 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcea7fc-53df-4f1a-8e83-e84cb3b95424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.3755] device (tapf987bf48-f0): carrier: link connected
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[12fd5796-cb7e-4895-83c0-b2e37de565a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.404 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8f5dcc-c363-4236-be82-f78c5a3f0cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf987bf48-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ec:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703011, 'reachable_time': 31820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369961, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.423 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a440c05-db82-4f56-bd4b-d13f383a4292]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:ec89'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703011, 'tstamp': 703011}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369962, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.442 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8347b66-1d46-49f2-b60f-fe1f9869a68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf987bf48-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ec:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703011, 'reachable_time': 31820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369963, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.470 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f0a675-af3c-42df-aa23-ddff57a44213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3497d8a0-a4cc-477b-a4d6-329caf7abbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf987bf48-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf987bf48-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 kernel: tapf987bf48-f0: entered promiscuous mode
Nov 22 09:36:19 compute-0 NetworkManager[48920]: <info>  [1763804179.5473] manager: (tapf987bf48-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf987bf48-f0, col_values=(('external_ids', {'iface-id': '832bb68b-158a-4026-bb7e-ec09f983eb31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 ovn_controller[152872]: 2025-11-22T09:36:19Z|01129|binding|INFO|Releasing lport 832bb68b-158a-4026-bb7e-ec09f983eb31 from this chassis (sb_readonly=0)
Nov 22 09:36:19 compute-0 nova_compute[253661]: 2025-11-22 09:36:19.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.566 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59fd2ef6-0b05-4ccc-9acb-73cd714b6f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.568 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-f987bf48-fed4-4a9a-a268-76d80e7b77fd
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID f987bf48-fed4-4a9a-a268-76d80e7b77fd
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:36:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.569 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'env', 'PROCESS_TAG=haproxy-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f987bf48-fed4-4a9a-a268-76d80e7b77fd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:36:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 393 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 22 09:36:19 compute-0 podman[369996]: 2025-11-22 09:36:19.958546611 +0000 UTC m=+0.058584768 container create d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:36:20 compute-0 systemd[1]: Started libpod-conmon-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope.
Nov 22 09:36:20 compute-0 podman[369996]: 2025-11-22 09:36:19.929254772 +0000 UTC m=+0.029292959 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:36:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4f634e39e3768979e4e15c5856180a4869bf3f069714fa2e2fa318a4a1ad83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:20 compute-0 podman[369996]: 2025-11-22 09:36:20.049069082 +0000 UTC m=+0.149107269 container init d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:36:20 compute-0 podman[369996]: 2025-11-22 09:36:20.054492127 +0000 UTC m=+0.154530284 container start d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:36:20 compute-0 podman[370010]: 2025-11-22 09:36:20.067241264 +0000 UTC m=+0.064745902 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 09:36:20 compute-0 podman[370013]: 2025-11-22 09:36:20.073088639 +0000 UTC m=+0.066780502 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Nov 22 09:36:20 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : New worker (370055) forked
Nov 22 09:36:20 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : Loading success.
Nov 22 09:36:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:20.217 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.218 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:20 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:20.222 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.351 253665 DEBUG nova.compute.manager [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.351 253665 DEBUG nova.compute.manager [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.352 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 DEBUG nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 WARNING nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.
Nov 22 09:36:20 compute-0 ceph-mon[75021]: pgmap v2227: 305 pgs: 305 active+clean; 393 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 22 09:36:20 compute-0 nova_compute[253661]: 2025-11-22 09:36:20.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.145 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.226 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.298 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.299 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance network_info: |[{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.299 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.300 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.304 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start _get_guest_xml network_info=[{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.310 253665 WARNING nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.322 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.323 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.337 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.337 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.340 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.340 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.343 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.448 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.448 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.462 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.511 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.511 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.549 253665 DEBUG nova.objects.instance [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:21 compute-0 ovn_controller[152872]: 2025-11-22T09:36:21Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 09:36:21 compute-0 ovn_controller[152872]: 2025-11-22T09:36:21Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.592 253665 DEBUG nova.virt.libvirt.vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.593 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.594 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.599 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.603 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Attempting to detach device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.606 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <target dev="tapef638c6f-1e"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </interface>
Nov 22 09:36:21 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.614 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <name>instance-00000070</name>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='tap21b54230-3a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:ec:7d:ea'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='tapef638c6f-1e'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </target>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/5'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </console>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:21 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 INFO nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the persistent domain config.
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] (1/8): Attempting to detach device tapef638c6f-1e with device alias net1 from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.619 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <target dev="tapef638c6f-1e"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </interface>
Nov 22 09:36:21 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:36:21 compute-0 kernel: tapef638c6f-1e (unregistering): left promiscuous mode
Nov 22 09:36:21 compute-0 NetworkManager[48920]: <info>  [1763804181.7286] device (tapef638c6f-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:21 compute-0 ovn_controller[152872]: 2025-11-22T09:36:21Z|01130|binding|INFO|Releasing lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 from this chassis (sb_readonly=0)
Nov 22 09:36:21 compute-0 ovn_controller[152872]: 2025-11-22T09:36:21Z|01131|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 down in Southbound
Nov 22 09:36:21 compute-0 ovn_controller[152872]: 2025-11-22T09:36:21Z|01132|binding|INFO|Removing iface tapef638c6f-1e ovn-installed in OVS
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.750 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763804181.7432754, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.751 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Start waiting for the detach event from libvirt for device tapef638c6f-1e with device alias net1 for instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.752 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.766 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <name>instance-00000070</name>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target dev='tap21b54230-3a'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       </target>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/5'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </console>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:21 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.766 253665 INFO nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the live domain config.
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.767 253665 DEBUG nova.virt.libvirt.vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.768 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.769 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.769 253665 DEBUG os_vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.772 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef638c6f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:36:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:36:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520301668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.804 253665 INFO os_vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.805 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:21 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:21 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:21 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:21 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:21 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.811 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.838 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:21 compute-0 nova_compute[253661]: 2025-11-22 09:36:21.843 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.895 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7d:ea 10.100.0.18'], port_security=['fa:16:3e:ec:7d:ea 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1ca4427-d611-4bfe-814f-5bb5cca8ded7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.897 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 in datapath f987bf48-fed4-4a9a-a268-76d80e7b77fd unbound from our chassis
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.900 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f987bf48-fed4-4a9a-a268-76d80e7b77fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.901 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc1565c-0f0b-4dcb-9b02-88441b9b5041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.902 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd namespace which is not needed anymore
Nov 22 09:36:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2520301668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:22 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:22 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:22 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [WARNING]  (370052) : Exiting Master process...
Nov 22 09:36:22 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [ALERT]    (370052) : Current worker (370055) exited with code 143 (Terminated)
Nov 22 09:36:22 compute-0 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [WARNING]  (370052) : All workers exited. Exiting... (0)
Nov 22 09:36:22 compute-0 systemd[1]: libpod-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope: Deactivated successfully.
Nov 22 09:36:22 compute-0 podman[370147]: 2025-11-22 09:36:22.087174587 +0000 UTC m=+0.051940303 container died d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-db4f634e39e3768979e4e15c5856180a4869bf3f069714fa2e2fa318a4a1ad83-merged.mount: Deactivated successfully.
Nov 22 09:36:22 compute-0 podman[370147]: 2025-11-22 09:36:22.138090304 +0000 UTC m=+0.102856000 container cleanup d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:36:22 compute-0 systemd[1]: libpod-conmon-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope: Deactivated successfully.
Nov 22 09:36:22 compute-0 podman[370177]: 2025-11-22 09:36:22.216793221 +0000 UTC m=+0.053878112 container remove d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[120bb615-ee49-42e1-9999-6b9e483f7ca8]: (4, ('Sat Nov 22 09:36:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd (d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff)\nd61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff\nSat Nov 22 09:36:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd (d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff)\nd61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc542b-21af-4827-afb9-65c58f3bbb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.236 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf987bf48-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 kernel: tapf987bf48-f0: left promiscuous mode
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.265 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[37fe4ba1-bb57-4e2f-80e9-5acdb0fc0dfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.280 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d01a186-2c69-4c9c-a546-66d0b92a30f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.284 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b731c45-f6bd-4293-a1e0-dcf3a343ee66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[990fc356-5e26-4a7b-b4be-8dadaa8f1c7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703001, 'reachable_time': 20673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370190, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.313 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[756711e8-53ce-49fc-8be1-8d04b44f9b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:22 compute-0 systemd[1]: run-netns-ovnmeta\x2df987bf48\x2dfed4\x2d4a9a\x2da268\x2d76d80e7b77fd.mount: Deactivated successfully.
Nov 22 09:36:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:36:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011403817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.360 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.362 253665 DEBUG nova.virt.libvirt.vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:16Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.362 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.364 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.365 253665 DEBUG nova.objects.instance [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.385 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <uuid>750659ed-67e0-44d4-a5b3-b8d0029ffa7e</uuid>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <name>instance-00000071</name>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-635689639</nova:name>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <nova:port uuid="f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b">
Nov 22 09:36:22 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="serial">750659ed-67e0-44d4-a5b3-b8d0029ffa7e</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="uuid">750659ed-67e0-44d4-a5b3-b8d0029ffa7e</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk">
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config">
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:36:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ac:c2:c9"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <target dev="tapf4a3cf1b-5c"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/console.log" append="off"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:36:22 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:36:22 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:22 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:22 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:22 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.386 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Preparing to wait for external event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.386 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.387 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.387 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.389 253665 DEBUG nova.virt.libvirt.vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:16Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.389 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.392 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.392 253665 DEBUG os_vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4a3cf1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4a3cf1b-5c, col_values=(('external_ids', {'iface-id': 'f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:c2:c9', 'vm-uuid': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 NetworkManager[48920]: <info>  [1763804182.4028] manager: (tapf4a3cf1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.412 253665 INFO os_vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.462 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.463 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.464 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ac:c2:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.465 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Using config drive
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.488 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.764 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.765 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:22 compute-0 nova_compute[253661]: 2025-11-22 09:36:22.765 253665 DEBUG nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:22 compute-0 ceph-mon[75021]: pgmap v2228: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:36:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2011403817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.214 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.215 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.215 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.216 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.216 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.220 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.220 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.222 253665 DEBUG nova.compute.manager [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-deleted-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.223 253665 INFO nova.compute.manager [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Neutron deleted interface ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3; detaching it from the instance and deleting it from the info cache
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.223 253665 DEBUG nova.network.neutron [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.249 253665 DEBUG nova.objects.instance [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.272 253665 DEBUG nova.objects.instance [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.289 253665 DEBUG nova.virt.libvirt.vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.290 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.291 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.294 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.298 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <name>instance-00000070</name>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='tap21b54230-3a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </target>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/5'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </console>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:23 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.299 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.304 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <name>instance-00000070</name>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target dev='tap21b54230-3a'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       </target>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/5'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <source path='/dev/pts/5'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </console>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </input>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:36:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:23 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.304 253665 WARNING nova.virt.libvirt.driver [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Detaching interface fa:16:3e:ec:7d:ea failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapef638c6f-1e' not found.
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.305 253665 DEBUG nova.virt.libvirt.vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.306 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.307 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.307 253665 DEBUG os_vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef638c6f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.313 253665 INFO os_vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.314 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:36:23</nova:creationTime>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 09:36:23 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:36:23 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:36:23 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:36:23 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:36:23 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.344 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating config drive at /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.349 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ufgn1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.509 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ufgn1t" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.556 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.562 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.670 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updated VIF entry in instance network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.672 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.696 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.735 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.736 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deleting local config drive /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config because it was imported into RBD.
Nov 22 09:36:23 compute-0 kernel: tapf4a3cf1b-5c: entered promiscuous mode
Nov 22 09:36:23 compute-0 NetworkManager[48920]: <info>  [1763804183.8036] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Nov 22 09:36:23 compute-0 systemd-udevd[370192]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01133|binding|INFO|Claiming lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for this chassis.
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01134|binding|INFO|f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b: Claiming fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.817 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:23 compute-0 NetworkManager[48920]: <info>  [1763804183.8192] device (tapf4a3cf1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:36:23 compute-0 NetworkManager[48920]: <info>  [1763804183.8200] device (tapf4a3cf1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.819 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab bound to our chassis
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.823 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01135|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b ovn-installed in OVS
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01136|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b up in Southbound
Nov 22 09:36:23 compute-0 nova_compute[253661]: 2025-11-22 09:36:23.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089c81ec-a927-41f7-b591-ac697b075e3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.840 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37020e16-b1 in ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.842 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37020e16-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94963a80-b472-4a01-9d78-e81045b38aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3841dd58-67a1-482d-8543-66c3bc5cebb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 systemd-machined[215941]: New machine qemu-141-instance-00000071.
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.858 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d15d52-f1b1-40db-90cc-3baaf8845e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000071.
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d52f325-a442-44af-8b45-edc137270675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.919 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[85538713-6723-4855-baae-43549b45bcf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 NetworkManager[48920]: <info>  [1763804183.9281] manager: (tap37020e16-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/471)
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.927 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59ba4ed9-bf47-44cf-a79f-067b8d1c11a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.969 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e4ad8a-3007-445f-8042-74b115e7994f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.972 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee0c1e3-ea4d-45ac-96ea-6bf9d7dfea4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01137|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01138|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:36:23 compute-0 ovn_controller[152872]: 2025-11-22T09:36:23Z|01139|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 09:36:24 compute-0 NetworkManager[48920]: <info>  [1763804184.0033] device (tap37020e16-b0): carrier: link connected
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.008 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92cfb2a9-ec6e-4582-8c77-6a04a9e796cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[465a8f64-b57a-4169-a2cb-0be084edb204]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703474, 'reachable_time': 42367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370301, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.050 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d532ece5-16e8-42ee-97fa-dbb88f892507]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:87fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703474, 'tstamp': 703474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370302, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.074 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32880b3d-d220-4677-ae7b-48f3f483561c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703474, 'reachable_time': 42367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370303, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.109 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.111 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.111 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.112 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.112 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.115 253665 INFO nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Terminating instance
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.117 253665 DEBUG nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5551a2-c0f0-4694-9954-e595c7d95335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62c7b17d-2e3d-44b8-b073-98c6d76690b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.193 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.193 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37020e16-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 kernel: tap37020e16-b0: entered promiscuous mode
Nov 22 09:36:24 compute-0 NetworkManager[48920]: <info>  [1763804184.1973] manager: (tap37020e16-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37020e16-b0, col_values=(('external_ids', {'iface-id': 'fc048c06-919a-46ba-ac90-0356d56c12a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:24 compute-0 ovn_controller[152872]: 2025-11-22T09:36:24Z|01140|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.226 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.229 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2e3804-61f4-41be-9aa1-03e418d81e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.234 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.235 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'env', 'PROCESS_TAG=haproxy-37020e16-bbf7-4d46-a463-62f41acbbdab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37020e16-bbf7-4d46-a463-62f41acbbdab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:36:24 compute-0 kernel: tap735988ac-a6 (unregistering): left promiscuous mode
Nov 22 09:36:24 compute-0 NetworkManager[48920]: <info>  [1763804184.4211] device (tap735988ac-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 ovn_controller[152872]: 2025-11-22T09:36:24Z|01141|binding|INFO|Releasing lport 735988ac-a658-458d-975f-872cfa132420 from this chassis (sb_readonly=0)
Nov 22 09:36:24 compute-0 ovn_controller[152872]: 2025-11-22T09:36:24Z|01142|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 down in Southbound
Nov 22 09:36:24 compute-0 ovn_controller[152872]: 2025-11-22T09:36:24Z|01143|binding|INFO|Removing iface tap735988ac-a6 ovn-installed in OVS
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.446 253665 INFO nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.446 253665 DEBUG nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.447 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], port_security=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe0e:5306/64', 'neutron:device_id': '7b3234ab-db15-43a8-8093-469f6e62db91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=735988ac-a658-458d-975f-872cfa132420) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.508 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 22 09:36:24 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 15.514s CPU time.
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.532 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:24 compute-0 systemd-machined[215941]: Machine qemu-139-instance-0000006e terminated.
Nov 22 09:36:24 compute-0 podman[370338]: 2025-11-22 09:36:24.628738463 +0000 UTC m=+0.026676716 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.765 253665 INFO nova.virt.libvirt.driver [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance destroyed successfully.
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.765 253665 DEBUG nova.objects.instance [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.780 253665 DEBUG nova.virt.libvirt.vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.781 253665 DEBUG nova.network.os_vif_util [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.782 253665 DEBUG nova.network.os_vif_util [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.783 253665 DEBUG os_vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap735988ac-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.793 253665 INFO os_vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6')
Nov 22 09:36:24 compute-0 podman[370338]: 2025-11-22 09:36:24.834855298 +0000 UTC m=+0.232793531 container create 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.859 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.861 253665 INFO nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Terminating instance
Nov 22 09:36:24 compute-0 nova_compute[253661]: 2025-11-22 09:36:24.862 253665 DEBUG nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:36:24 compute-0 systemd[1]: Started libpod-conmon-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope.
Nov 22 09:36:25 compute-0 ceph-mon[75021]: pgmap v2229: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 09:36:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:25 compute-0 kernel: tap21b54230-3a (unregistering): left promiscuous mode
Nov 22 09:36:25 compute-0 NetworkManager[48920]: <info>  [1763804185.0340] device (tap21b54230-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 ovn_controller[152872]: 2025-11-22T09:36:25Z|01144|binding|INFO|Releasing lport 21b54230-3ad3-4b65-b752-5a1b0472844e from this chassis (sb_readonly=0)
Nov 22 09:36:25 compute-0 ovn_controller[152872]: 2025-11-22T09:36:25Z|01145|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e down in Southbound
Nov 22 09:36:25 compute-0 ovn_controller[152872]: 2025-11-22T09:36:25Z|01146|binding|INFO|Removing iface tap21b54230-3a ovn-installed in OVS
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bd18a679f16bc779df4f96decf1dfaf90586c9dd2d247ab24ea9be9b34ce71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ee:9e 10.100.0.5'], port_security=['fa:16:3e:ad:ee:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '624d1a5b-7d33-4814-8a02-c8e1e513249a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a19e22c3-d4f6-4134-81df-8e8895569f77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=21b54230-3ad3-4b65-b752-5a1b0472844e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:25 compute-0 podman[370338]: 2025-11-22 09:36:25.072009026 +0000 UTC m=+0.469947279 container init 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 podman[370338]: 2025-11-22 09:36:25.083655545 +0000 UTC m=+0.481593778 container start 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:36:25 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 22 09:36:25 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000070.scope: Consumed 16.626s CPU time.
Nov 22 09:36:25 compute-0 systemd-machined[215941]: Machine qemu-140-instance-00000070 terminated.
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : New worker (370390) forked
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : Loading success.
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.174 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 735988ac-a658-458d-975f-872cfa132420 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 unbound from our chassis
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.175 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.193 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95ac292d-0295-4fcb-8728-04bf0251f715]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.234 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[33d3c718-b805-4ab7-8821-225ddf0c2541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.238 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9fed38-f957-4d5b-a2ae-f82a8532af56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 NetworkManager[48920]: <info>  [1763804185.2839] manager: (tap21b54230-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.284 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c4653-823e-49d5-96b1-c2c22d9ef979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.304 253665 INFO nova.virt.libvirt.driver [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance destroyed successfully.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.305 253665 DEBUG nova.objects.instance [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.319 253665 DEBUG nova.virt.libvirt.vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.319 253665 DEBUG nova.network.os_vif_util [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.320 253665 DEBUG nova.network.os_vif_util [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.320 253665 DEBUG os_vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21b54230-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d58076-cf73-4f06-af35-1ea2a7259688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370408, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.330 253665 INFO os_vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a')
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84b2225c-e29c-4679-a6a3-d681c22672fe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696257, 'tstamp': 696257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370420, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696261, 'tstamp': 696261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370420, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.352 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.357 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.358 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 21b54230-3ad3-4b65-b752-5a1b0472844e in datapath 5c1e456e-4030-4169-b20f-3aec7a20c24e unbound from our chassis
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.359 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c1e456e-4030-4169-b20f-3aec7a20c24e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.360 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[416c0c90-2270-43b7-97a3-ade7929a76d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e namespace which is not needed anymore
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.399 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Processing event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 WARNING nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state building and task_state spawning.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [WARNING]  (369490) : Exiting Master process...
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [ALERT]    (369490) : Current worker (369493) exited with code 143 (Terminated)
Nov 22 09:36:25 compute-0 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [WARNING]  (369490) : All workers exited. Exiting... (0)
Nov 22 09:36:25 compute-0 systemd[1]: libpod-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope: Deactivated successfully.
Nov 22 09:36:25 compute-0 podman[370454]: 2025-11-22 09:36:25.542583628 +0000 UTC m=+0.077035797 container died 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.576 253665 INFO nova.virt.libvirt.driver [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deleting instance files /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91_del
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.578 253665 INFO nova.virt.libvirt.driver [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deletion of /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91_del complete
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.623 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.623 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f47abe13203d442e803e0c61cc2d5415b1d609f3c4c552ce2b7214c39c4e00-merged.mount: Deactivated successfully.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.633 253665 INFO nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 1.52 seconds to destroy the instance on the hypervisor.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.634 253665 DEBUG oslo.service.loopingcall [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.635 253665 DEBUG nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.635 253665 DEBUG nova.network.neutron [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:36:25 compute-0 podman[370454]: 2025-11-22 09:36:25.638795451 +0000 UTC m=+0.173247610 container cleanup 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:36:25 compute-0 systemd[1]: libpod-conmon-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope: Deactivated successfully.
Nov 22 09:36:25 compute-0 podman[370470]: 2025-11-22 09:36:25.688490376 +0000 UTC m=+0.117820211 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 22 09:36:25 compute-0 podman[370541]: 2025-11-22 09:36:25.738031269 +0000 UTC m=+0.071819617 container remove 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4f76e3-7066-4f97-8d3c-41c88e83d115]: (4, ('Sat Nov 22 09:36:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e (43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230)\n43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230\nSat Nov 22 09:36:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e (43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230)\n43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.748 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b0a77a-681d-4701-826e-60348527f223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.749 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c1e456e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 kernel: tap5c1e456e-40: left promiscuous mode
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.768 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.771 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74c70c6e-ed5a-4d3b-b7ee-00ef2aec74c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.772 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7712848, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.772 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Started (Lifecycle Event)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.775 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.780 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.782 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[830cdee7-f179-4819-9cf1-5dc4d9562de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba1fc88-e05c-4aed-a3a0-b3a0931404c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.787 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance spawned successfully.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.788 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.793 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.797 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa121216-aefb-4baf-96d9-c7ed70efd660]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699787, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370569, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.806 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.806 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6691828f-9623-4697-b2d5-339b92af21d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c1e456e\x2d4030\x2d4169\x2db20f\x2d3aec7a20c24e.mount: Deactivated successfully.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.806 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.807 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.807 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.813 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.814 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7715914, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.814 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Paused (Lifecycle Event)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.838 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.842 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7813919, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.843 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Resumed (Lifecycle Event)
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.861 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.875 253665 INFO nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 9.65 seconds to spawn the instance on the hypervisor.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.876 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.884 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:36:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.946 253665 INFO nova.virt.libvirt.driver [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deleting instance files /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_del
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.947 253665 INFO nova.virt.libvirt.driver [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deletion of /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_del complete
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.955 253665 INFO nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 10.94 seconds to build instance.
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.980 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:25 compute-0 nova_compute[253661]: 2025-11-22 09:36:25.986 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.012 253665 INFO nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 1.15 seconds to destroy the instance on the hypervisor.
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.012 253665 DEBUG oslo.service.loopingcall [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.013 253665 DEBUG nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.013 253665 DEBUG nova.network.neutron [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.726 253665 DEBUG nova.network.neutron [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.751 253665 INFO nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 1.12 seconds to deallocate network for instance.
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.800 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.801 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:26 compute-0 nova_compute[253661]: 2025-11-22 09:36:26.917 253665 DEBUG oslo_concurrency.processutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:27 compute-0 ceph-mon[75021]: pgmap v2230: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.089 253665 DEBUG nova.network.neutron [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.105 253665 INFO nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 1.09 seconds to deallocate network for instance.
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.157 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.230 253665 INFO nova.compute.manager [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Pausing
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.231 253665 DEBUG nova.objects.instance [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.261 253665 DEBUG nova.compute.manager [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.262 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804187.2611864, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.262 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.285 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.292 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.337 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 22 09:36:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330044043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.416 253665 DEBUG oslo_concurrency.processutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.423 253665 DEBUG nova.compute.provider_tree [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.438 253665 DEBUG nova.scheduler.client.report [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.463 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.467 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.537 253665 INFO nova.scheduler.client.report [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 7b3234ab-db15-43a8-8093-469f6e62db91
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.642 253665 DEBUG oslo_concurrency.processutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.692 253665 DEBUG nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.693 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.693 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 DEBUG nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 WARNING nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received unexpected event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 for instance with vm_state deleted and task_state None.
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.698 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.709 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.710 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.850 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.852 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.852 253665 WARNING nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with vm_state deleted and task_state None.
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.854 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-deleted-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:27 compute-0 nova_compute[253661]: 2025-11-22 09:36:27.855 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-deleted-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 301 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Nov 22 09:36:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/330044043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.048 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:36:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327648505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.121 253665 DEBUG oslo_concurrency.processutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.127 253665 DEBUG nova.compute.provider_tree [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.160 253665 DEBUG nova.scheduler.client.report [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.183 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.217 253665 INFO nova.scheduler.client.report [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.279 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.394 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.413 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.413 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.665 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.668 253665 INFO nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Terminating instance
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.669 253665 DEBUG nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:36:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:28 compute-0 kernel: tap0334ba91-f8 (unregistering): left promiscuous mode
Nov 22 09:36:28 compute-0 NetworkManager[48920]: <info>  [1763804188.7429] device (tap0334ba91-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:28 compute-0 ovn_controller[152872]: 2025-11-22T09:36:28Z|01147|binding|INFO|Releasing lport 0334ba91-f8b0-462b-a47b-b421e8796a21 from this chassis (sb_readonly=0)
Nov 22 09:36:28 compute-0 ovn_controller[152872]: 2025-11-22T09:36:28Z|01148|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 down in Southbound
Nov 22 09:36:28 compute-0 ovn_controller[152872]: 2025-11-22T09:36:28Z|01149|binding|INFO|Removing iface tap0334ba91-f8 ovn-installed in OVS
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.779 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], port_security=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:feb6:3376/64', 'neutron:device_id': '2a866674-0c27-4cfc-89f2-dfe8e9768900', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0334ba91-f8b0-462b-a47b-b421e8796a21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.780 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0334ba91-f8b0-462b-a47b-b421e8796a21 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 unbound from our chassis
Nov 22 09:36:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.782 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3e4e01e-5e3e-4572-b404-ee47aaec1186, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60d127bc-0527-4f11-a940-3770aab72ff7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.784 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 namespace which is not needed anymore
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 22 09:36:28 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 17.217s CPU time.
Nov 22 09:36:28 compute-0 systemd-machined[215941]: Machine qemu-135-instance-0000006c terminated.
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.919 253665 INFO nova.virt.libvirt.driver [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance destroyed successfully.
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.919 253665 DEBUG nova.objects.instance [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.936 253665 DEBUG nova.virt.libvirt.vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:12Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.937 253665 DEBUG nova.network.os_vif_util [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.938 253665 DEBUG nova.network.os_vif_util [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.939 253665 DEBUG os_vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.942 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0334ba91-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [WARNING]  (364385) : Exiting Master process...
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [WARNING]  (364385) : Exiting Master process...
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.943 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [ALERT]    (364385) : Current worker (364387) exited with code 143 (Terminated)
Nov 22 09:36:28 compute-0 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [WARNING]  (364385) : All workers exited. Exiting... (0)
Nov 22 09:36:28 compute-0 nova_compute[253661]: 2025-11-22 09:36:28.948 253665 INFO os_vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8')
Nov 22 09:36:28 compute-0 systemd[1]: libpod-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope: Deactivated successfully.
Nov 22 09:36:28 compute-0 podman[370635]: 2025-11-22 09:36:28.953482362 +0000 UTC m=+0.047737918 container died a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:36:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c895ab309e207727e077cfd1951c34149c6808ab71f4a6a14342fc5743fd6a36-merged.mount: Deactivated successfully.
Nov 22 09:36:29 compute-0 podman[370635]: 2025-11-22 09:36:29.023383561 +0000 UTC m=+0.117639117 container cleanup a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:36:29 compute-0 systemd[1]: libpod-conmon-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope: Deactivated successfully.
Nov 22 09:36:29 compute-0 ceph-mon[75021]: pgmap v2231: 305 pgs: 305 active+clean; 301 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Nov 22 09:36:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3327648505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:29 compute-0 podman[370689]: 2025-11-22 09:36:29.100037237 +0000 UTC m=+0.051520342 container remove a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88167098-c059-428b-b7ff-b1f332a62c40]: (4, ('Sat Nov 22 09:36:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 (a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a)\na5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a\nSat Nov 22 09:36:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 (a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a)\na5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f907416-609d-4010-9862-226630b65b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.114 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.116 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:29 compute-0 kernel: tapd3e4e01e-50: left promiscuous mode
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.134 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18bfc178-afcf-4af0-9cab-b14e12cee02b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf24145-a157-4783-bdf1-1088533c3cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20ad9f7e-ad90-44c1-aac9-8c330a1f0773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd2d2c1-8eb7-4934-b5b5-7532095499e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696232, 'reachable_time': 43043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370705, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.190 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.190 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e8f876-d86a-480c-bebe-d5844d338e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dd3e4e01e\x2d5e3e\x2d4572\x2db404\x2dee47aaec1186.mount: Deactivated successfully.
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.278 253665 INFO nova.compute.manager [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Unpausing
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.279 253665 DEBUG nova.objects.instance [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.304 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804189.3041005, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.304 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)
Nov 22 09:36:29 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.309 253665 DEBUG nova.virt.libvirt.guest [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.310 253665 DEBUG nova.compute.manager [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.317 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.319 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.338 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.435 253665 INFO nova.virt.libvirt.driver [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deleting instance files /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900_del
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.436 253665 INFO nova.virt.libvirt.driver [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deletion of /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900_del complete
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.480 253665 INFO nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.483 253665 DEBUG oslo.service.loopingcall [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.484 253665 DEBUG nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.484 253665 DEBUG nova.network.neutron [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:36:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 644 KiB/s wr, 149 op/s
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.925 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.925 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:29 compute-0 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG nova.compute.manager [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG nova.compute.manager [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.235 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.235 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.526 253665 DEBUG nova.network.neutron [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.546 253665 INFO nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 1.06 seconds to deallocate network for instance.
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.605 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.605 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.695 253665 DEBUG oslo_concurrency.processutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:30 compute-0 nova_compute[253661]: 2025-11-22 09:36:30.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:31 compute-0 ceph-mon[75021]: pgmap v2232: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 644 KiB/s wr, 149 op/s
Nov 22 09:36:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239708323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.152 253665 DEBUG oslo_concurrency.processutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.157 253665 DEBUG nova.compute.provider_tree [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.174 253665 DEBUG nova.scheduler.client.report [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.204 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.237 253665 INFO nova.scheduler.client.report [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 2a866674-0c27-4cfc-89f2-dfe8e9768900
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.576 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updated VIF entry in instance network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.577 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.605 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.704 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.704 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.721 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.722 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.722 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 WARNING nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received unexpected event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with vm_state active and task_state deleting.
Nov 22 09:36:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 223 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 295 KiB/s wr, 135 op/s
Nov 22 09:36:31 compute-0 nova_compute[253661]: 2025-11-22 09:36:31.991 253665 DEBUG nova.compute.manager [req-0a474a9a-ed33-49ae-aae1-7e4cfa512080 req-8faf0496-139f-4eb3-83ac-de76125491fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-deleted-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1239708323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:32 compute-0 ovn_controller[152872]: 2025-11-22T09:36:32Z|01150|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 09:36:32 compute-0 ovn_controller[152872]: 2025-11-22T09:36:32Z|01151|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:36:32 compute-0 nova_compute[253661]: 2025-11-22 09:36:32.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:33 compute-0 ceph-mon[75021]: pgmap v2233: 305 pgs: 305 active+clean; 223 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 295 KiB/s wr, 135 op/s
Nov 22 09:36:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 161 op/s
Nov 22 09:36:33 compute-0 nova_compute[253661]: 2025-11-22 09:36:33.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:35 compute-0 ceph-mon[75021]: pgmap v2234: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 161 op/s
Nov 22 09:36:35 compute-0 ovn_controller[152872]: 2025-11-22T09:36:35Z|01152|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 09:36:35 compute-0 ovn_controller[152872]: 2025-11-22T09:36:35Z|01153|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:36:35 compute-0 nova_compute[253661]: 2025-11-22 09:36:35.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:35 compute-0 nova_compute[253661]: 2025-11-22 09:36:35.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 158 op/s
Nov 22 09:36:35 compute-0 nova_compute[253661]: 2025-11-22 09:36:35.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:36:36 compute-0 sudo[370729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:36 compute-0 sudo[370729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370729]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 sudo[370754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:36:36 compute-0 sudo[370754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370754]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 sudo[370779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:36 compute-0 sudo[370779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:36 compute-0 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:36:36 compute-0 sudo[370779]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 sudo[370804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:36:36 compute-0 sudo[370804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370804]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:36:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:36:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:36 compute-0 sudo[370849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:36 compute-0 sudo[370849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370849]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 sudo[370874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:36:36 compute-0 sudo[370874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370874]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:36 compute-0 sudo[370899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:36 compute-0 sudo[370899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:36 compute-0 sudo[370899]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:37 compute-0 sudo[370924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:36:37 compute-0 sudo[370924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:37 compute-0 ceph-mon[75021]: pgmap v2235: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 158 op/s
Nov 22 09:36:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:37 compute-0 sudo[370924]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 66bfe3a4-b9a9-4a7a-9fe9-ce120f573c3e does not exist
Nov 22 09:36:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9928045e-b008-4389-a919-e7b811aff2c5 does not exist
Nov 22 09:36:37 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ab2bd940-0c14-46ae-856a-a8fbbbe966fb does not exist
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:36:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:36:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:36:37 compute-0 sudo[370981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:37 compute-0 sudo[370981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:37 compute-0 sudo[370981]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:37 compute-0 sudo[371006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:36:37 compute-0 sudo[371006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:37 compute-0 sudo[371006]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:37 compute-0 sudo[371031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:37 compute-0 sudo[371031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:37 compute-0 sudo[371031]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:37 compute-0 sudo[371056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:36:37 compute-0 sudo[371056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 159 op/s
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:36:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:36:38 compute-0 nova_compute[253661]: 2025-11-22 09:36:38.188 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:38 compute-0 nova_compute[253661]: 2025-11-22 09:36:38.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:38 compute-0 nova_compute[253661]: 2025-11-22 09:36:38.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:36:38 compute-0 nova_compute[253661]: 2025-11-22 09:36:38.233 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.237654957 +0000 UTC m=+0.045718077 container create 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:36:38 compute-0 systemd[1]: Started libpod-conmon-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope.
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.216787578 +0000 UTC m=+0.024850738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.337914491 +0000 UTC m=+0.145977651 container init 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.346587677 +0000 UTC m=+0.154650807 container start 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.351013276 +0000 UTC m=+0.159076406 container attach 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:36:38 compute-0 gracious_curie[371135]: 167 167
Nov 22 09:36:38 compute-0 systemd[1]: libpod-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope: Deactivated successfully.
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.354146234 +0000 UTC m=+0.162209374 container died 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-808da85b293a5e94177fdd56ae4917782efd24694d7667d4e33b7f6a2e601737-merged.mount: Deactivated successfully.
Nov 22 09:36:38 compute-0 podman[371119]: 2025-11-22 09:36:38.402140358 +0000 UTC m=+0.210203498 container remove 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:36:38 compute-0 systemd[1]: libpod-conmon-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope: Deactivated successfully.
Nov 22 09:36:38 compute-0 podman[371161]: 2025-11-22 09:36:38.593888837 +0000 UTC m=+0.045298778 container create edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 09:36:38 compute-0 systemd[1]: Started libpod-conmon-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope.
Nov 22 09:36:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:38 compute-0 podman[371161]: 2025-11-22 09:36:38.574534515 +0000 UTC m=+0.025944466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:38 compute-0 podman[371161]: 2025-11-22 09:36:38.695996056 +0000 UTC m=+0.147406027 container init edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:36:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:38 compute-0 podman[371161]: 2025-11-22 09:36:38.704839356 +0000 UTC m=+0.156249297 container start edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:36:38 compute-0 podman[371161]: 2025-11-22 09:36:38.708548328 +0000 UTC m=+0.159958299 container attach edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:36:38 compute-0 nova_compute[253661]: 2025-11-22 09:36:38.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:39 compute-0 ceph-mon[75021]: pgmap v2236: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 159 op/s
Nov 22 09:36:39 compute-0 nova_compute[253661]: 2025-11-22 09:36:39.762 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804184.7607296, 7b3234ab-db15-43a8-8093-469f6e62db91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:39 compute-0 nova_compute[253661]: 2025-11-22 09:36:39.763 253665 INFO nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Stopped (Lifecycle Event)
Nov 22 09:36:39 compute-0 nova_compute[253661]: 2025-11-22 09:36:39.799 253665 DEBUG nova.compute.manager [None req-41440bae-1652-46f1-a970-fd02a911f8a4 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:39 compute-0 nova_compute[253661]: 2025-11-22 09:36:39.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:39 compute-0 loving_kilby[371177]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:36:39 compute-0 loving_kilby[371177]: --> relative data size: 1.0
Nov 22 09:36:39 compute-0 loving_kilby[371177]: --> All data devices are unavailable
Nov 22 09:36:39 compute-0 systemd[1]: libpod-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Deactivated successfully.
Nov 22 09:36:39 compute-0 podman[371161]: 2025-11-22 09:36:39.861091149 +0000 UTC m=+1.312501090 container died edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 22 09:36:39 compute-0 systemd[1]: libpod-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Consumed 1.063s CPU time.
Nov 22 09:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0-merged.mount: Deactivated successfully.
Nov 22 09:36:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 175 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 674 KiB/s wr, 76 op/s
Nov 22 09:36:39 compute-0 podman[371161]: 2025-11-22 09:36:39.922084176 +0000 UTC m=+1.373494117 container remove edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:36:39 compute-0 systemd[1]: libpod-conmon-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Deactivated successfully.
Nov 22 09:36:39 compute-0 sudo[371056]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:40 compute-0 sudo[371219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:40 compute-0 sudo[371219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:40 compute-0 sudo[371219]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:40 compute-0 sudo[371244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:36:40 compute-0 sudo[371244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:40 compute-0 sudo[371244]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:40 compute-0 sudo[371269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:40 compute-0 sudo[371269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:40 compute-0 sudo[371269]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:40 compute-0 sudo[371294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:36:40 compute-0 sudo[371294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:40 compute-0 nova_compute[253661]: 2025-11-22 09:36:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:40 compute-0 nova_compute[253661]: 2025-11-22 09:36:40.302 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804185.3014693, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:40 compute-0 nova_compute[253661]: 2025-11-22 09:36:40.303 253665 INFO nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Stopped (Lifecycle Event)
Nov 22 09:36:40 compute-0 nova_compute[253661]: 2025-11-22 09:36:40.325 253665 DEBUG nova.compute.manager [None req-02e52f06-cf57-46ec-9819-1f1c61e44eb2 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:40 compute-0 ovn_controller[152872]: 2025-11-22T09:36:40Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 09:36:40 compute-0 ovn_controller[152872]: 2025-11-22T09:36:40Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.606009473 +0000 UTC m=+0.043362229 container create 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:36:40 compute-0 systemd[1]: Started libpod-conmon-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope.
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.587355469 +0000 UTC m=+0.024708235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.707254761 +0000 UTC m=+0.144607537 container init 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.7164841 +0000 UTC m=+0.153836856 container start 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.719906895 +0000 UTC m=+0.157259671 container attach 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:36:40 compute-0 zealous_brown[371375]: 167 167
Nov 22 09:36:40 compute-0 systemd[1]: libpod-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope: Deactivated successfully.
Nov 22 09:36:40 compute-0 conmon[371375]: conmon 0d157f3f0b828379924b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope/container/memory.events
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.726395027 +0000 UTC m=+0.163747783 container died 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef17f0b0cf6e33b0acd7da39c79722f97ba545b778635927179b270c71ce2561-merged.mount: Deactivated successfully.
Nov 22 09:36:40 compute-0 podman[371359]: 2025-11-22 09:36:40.764905444 +0000 UTC m=+0.202258200 container remove 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:36:40 compute-0 systemd[1]: libpod-conmon-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope: Deactivated successfully.
Nov 22 09:36:40 compute-0 podman[371399]: 2025-11-22 09:36:40.956244442 +0000 UTC m=+0.042737204 container create 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:36:40 compute-0 systemd[1]: Started libpod-conmon-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope.
Nov 22 09:36:41 compute-0 nova_compute[253661]: 2025-11-22 09:36:41.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:41 compute-0 podman[371399]: 2025-11-22 09:36:40.936735617 +0000 UTC m=+0.023228399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:41 compute-0 podman[371399]: 2025-11-22 09:36:41.050527547 +0000 UTC m=+0.137020329 container init 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:36:41 compute-0 podman[371399]: 2025-11-22 09:36:41.058405343 +0000 UTC m=+0.144898105 container start 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:36:41 compute-0 podman[371399]: 2025-11-22 09:36:41.062658199 +0000 UTC m=+0.149150981 container attach 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:36:41 compute-0 nova_compute[253661]: 2025-11-22 09:36:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:41 compute-0 ceph-mon[75021]: pgmap v2237: 305 pgs: 305 active+clean; 175 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 674 KiB/s wr, 76 op/s
Nov 22 09:36:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 185 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 22 09:36:41 compute-0 charming_turing[371416]: {
Nov 22 09:36:41 compute-0 charming_turing[371416]:     "0": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:         {
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "devices": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "/dev/loop3"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             ],
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_name": "ceph_lv0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_size": "21470642176",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "name": "ceph_lv0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "tags": {
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_name": "ceph",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.crush_device_class": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.encrypted": "0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_id": "0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.vdo": "0"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             },
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "vg_name": "ceph_vg0"
Nov 22 09:36:41 compute-0 charming_turing[371416]:         }
Nov 22 09:36:41 compute-0 charming_turing[371416]:     ],
Nov 22 09:36:41 compute-0 charming_turing[371416]:     "1": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:         {
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "devices": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "/dev/loop4"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             ],
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_name": "ceph_lv1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_size": "21470642176",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "name": "ceph_lv1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "tags": {
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_name": "ceph",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.crush_device_class": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.encrypted": "0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_id": "1",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.vdo": "0"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             },
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "vg_name": "ceph_vg1"
Nov 22 09:36:41 compute-0 charming_turing[371416]:         }
Nov 22 09:36:41 compute-0 charming_turing[371416]:     ],
Nov 22 09:36:41 compute-0 charming_turing[371416]:     "2": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:         {
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "devices": [
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "/dev/loop5"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             ],
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_name": "ceph_lv2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_size": "21470642176",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "name": "ceph_lv2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "tags": {
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.cluster_name": "ceph",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.crush_device_class": "",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.encrypted": "0",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osd_id": "2",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:                 "ceph.vdo": "0"
Nov 22 09:36:41 compute-0 charming_turing[371416]:             },
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "type": "block",
Nov 22 09:36:41 compute-0 charming_turing[371416]:             "vg_name": "ceph_vg2"
Nov 22 09:36:41 compute-0 charming_turing[371416]:         }
Nov 22 09:36:41 compute-0 charming_turing[371416]:     ]
Nov 22 09:36:41 compute-0 charming_turing[371416]: }
Nov 22 09:36:41 compute-0 systemd[1]: libpod-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope: Deactivated successfully.
Nov 22 09:36:42 compute-0 podman[371425]: 2025-11-22 09:36:42.015470403 +0000 UTC m=+0.025905525 container died 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 22 09:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96-merged.mount: Deactivated successfully.
Nov 22 09:36:42 compute-0 podman[371425]: 2025-11-22 09:36:42.073504747 +0000 UTC m=+0.083939829 container remove 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:36:42 compute-0 systemd[1]: libpod-conmon-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope: Deactivated successfully.
Nov 22 09:36:42 compute-0 sudo[371294]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:42 compute-0 sudo[371440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:42 compute-0 sudo[371440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:42 compute-0 sudo[371440]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:42 compute-0 sudo[371465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:36:42 compute-0 sudo[371465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:42 compute-0 sudo[371465]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:42 compute-0 sudo[371491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:42 compute-0 sudo[371491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:42 compute-0 sudo[371491]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:42 compute-0 sudo[371516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:36:42 compute-0 sudo[371516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033510652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.789654406 +0000 UTC m=+0.047100142 container create 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.816 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.816 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:36:42 compute-0 nova_compute[253661]: 2025-11-22 09:36:42.821 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:36:42 compute-0 systemd[1]: Started libpod-conmon-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope.
Nov 22 09:36:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.768927632 +0000 UTC m=+0.026373408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.876616849 +0000 UTC m=+0.134062605 container init 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.889371067 +0000 UTC m=+0.146816803 container start 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.892623487 +0000 UTC m=+0.150069223 container attach 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:36:42 compute-0 festive_nobel[371620]: 167 167
Nov 22 09:36:42 compute-0 systemd[1]: libpod-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope: Deactivated successfully.
Nov 22 09:36:42 compute-0 conmon[371620]: conmon 9ba2adaf5186aa070669 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope/container/memory.events
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.900038291 +0000 UTC m=+0.157484087 container died 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0309fd3b478628b11ce8d9cc175665e1e442dec44026ff654d54c7d636e79246-merged.mount: Deactivated successfully.
Nov 22 09:36:42 compute-0 podman[371603]: 2025-11-22 09:36:42.994769168 +0000 UTC m=+0.252214904 container remove 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 22 09:36:43 compute-0 systemd[1]: libpod-conmon-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope: Deactivated successfully.
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.068 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3321MB free_disk=59.905818939208984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.145 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance cf5e117a-f203-4c8f-b795-01fb355ca5e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 750659ed-67e0-44d4-a5b3-b8d0029ffa7e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.168 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:36:43 compute-0 podman[371644]: 2025-11-22 09:36:43.174289122 +0000 UTC m=+0.039054333 container create 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.187 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.188 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.205 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:36:43 compute-0 systemd[1]: Started libpod-conmon-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope.
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.237 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:36:43 compute-0 podman[371644]: 2025-11-22 09:36:43.158303415 +0000 UTC m=+0.023068646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:36:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:43 compute-0 podman[371644]: 2025-11-22 09:36:43.286170004 +0000 UTC m=+0.150935225 container init 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:36:43 compute-0 podman[371644]: 2025-11-22 09:36:43.293481476 +0000 UTC m=+0.158246687 container start 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:36:43 compute-0 podman[371644]: 2025-11-22 09:36:43.298998823 +0000 UTC m=+0.163764054 container attach 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.299 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:43 compute-0 ceph-mon[75021]: pgmap v2238: 305 pgs: 305 active+clean; 185 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 22 09:36:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3033510652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819653557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.745 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.753 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.768 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.897 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.898 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.916 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804188.915716, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.917 253665 INFO nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Stopped (Lifecycle Event)
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.936 253665 DEBUG nova.compute.manager [None req-5356d49e-5875-4915-bd03-aab039725fdd - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:43 compute-0 nova_compute[253661]: 2025-11-22 09:36:43.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]: {
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_id": 1,
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "type": "bluestore"
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     },
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_id": 0,
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "type": "bluestore"
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     },
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_id": 2,
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:         "type": "bluestore"
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]:     }
Nov 22 09:36:44 compute-0 amazing_kowalevski[371661]: }
Nov 22 09:36:44 compute-0 systemd[1]: libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Deactivated successfully.
Nov 22 09:36:44 compute-0 systemd[1]: libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Consumed 1.097s CPU time.
Nov 22 09:36:44 compute-0 conmon[371661]: conmon 7cb6456b74a9a62246bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope/container/memory.events
Nov 22 09:36:44 compute-0 podman[371644]: 2025-11-22 09:36:44.389094593 +0000 UTC m=+1.253859824 container died 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72-merged.mount: Deactivated successfully.
Nov 22 09:36:44 compute-0 podman[371644]: 2025-11-22 09:36:44.453399102 +0000 UTC m=+1.318164313 container remove 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:36:44 compute-0 systemd[1]: libpod-conmon-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Deactivated successfully.
Nov 22 09:36:44 compute-0 sudo[371516]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:36:44 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:36:44 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:44 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e46ed7d9-5dd7-4711-a634-5c3789efc3ca does not exist
Nov 22 09:36:44 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c64d6edc-a093-41dd-992c-232999fc859e does not exist
Nov 22 09:36:44 compute-0 sudo[371727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:36:44 compute-0 sudo[371727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:44 compute-0 sudo[371727]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1819653557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:36:44 compute-0 sudo[371752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:36:44 compute-0 sudo[371752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:36:44 compute-0 sudo[371752]: pam_unix(sudo:session): session closed for user root
Nov 22 09:36:44 compute-0 nova_compute[253661]: 2025-11-22 09:36:44.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:45 compute-0 ceph-mon[75021]: pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 22 09:36:45 compute-0 nova_compute[253661]: 2025-11-22 09:36:45.899 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:45 compute-0 nova_compute[253661]: 2025-11-22 09:36:45.899 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:36:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.019 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.020 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.020 253665 INFO nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shelving
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.043 253665 DEBUG nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:46 compute-0 nova_compute[253661]: 2025-11-22 09:36:46.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.337 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.338 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.352 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.413 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.414 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.422 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.422 253665 INFO nova.compute.claims [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.585 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.632 253665 INFO nova.compute.manager [None req-56dc8adc-669b-4794-aa49-5617ff07e522 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.640 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:36:47 compute-0 ceph-mon[75021]: pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:36:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:36:47 compute-0 nova_compute[253661]: 2025-11-22 09:36:47.992 253665 DEBUG nova.objects.instance [None req-30a0a7c3-42b8-4be8-91c9-fd345d2368f0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.027 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804208.0180218, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.027 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Paused (Lifecycle Event)
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.049 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.064 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.085 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:36:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842025287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.115 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.124 253665 DEBUG nova.compute.provider_tree [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.137 253665 DEBUG nova.scheduler.client.report [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.175 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.176 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.268 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.269 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.295 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.323 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.440 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.442 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.443 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating image(s)
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.477 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.504 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.531 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.535 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.629 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.631 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.632 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.632 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.656 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.660 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/842025287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.769 253665 DEBUG nova.policy [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:36:48 compute-0 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 09:36:48 compute-0 NetworkManager[48920]: <info>  [1763804208.8051] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01154|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01155|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01156|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.845 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.849 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.850 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.852 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.853 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d47f326-0cad-4304-9626-5d75981a7dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.854 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 09:36:48 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 16.862s CPU time.
Nov 22 09:36:48 compute-0 systemd-machined[215941]: Machine qemu-136-instance-0000006d terminated.
Nov 22 09:36:48 compute-0 kernel: tapf4a3cf1b-5c (unregistering): left promiscuous mode
Nov 22 09:36:48 compute-0 NetworkManager[48920]: <info>  [1763804208.9418] device (tapf4a3cf1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01157|binding|INFO|Releasing lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b from this chassis (sb_readonly=0)
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01158|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b down in Southbound
Nov 22 09:36:48 compute-0 ovn_controller[152872]: 2025-11-22T09:36:48Z|01159|binding|INFO|Removing iface tapf4a3cf1b-5c ovn-installed in OVS
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.967 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:48 compute-0 nova_compute[253661]: 2025-11-22 09:36:48.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:48 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 22 09:36:48 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000071.scope: Consumed 15.415s CPU time.
Nov 22 09:36:48 compute-0 systemd-machined[215941]: Machine qemu-141-instance-00000071 terminated.
Nov 22 09:36:49 compute-0 NetworkManager[48920]: <info>  [1763804209.0691] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [WARNING]  (367601) : Exiting Master process...
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [ALERT]    (367601) : Current worker (367603) exited with code 143 (Terminated)
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [WARNING]  (367601) : All workers exited. Exiting... (0)
Nov 22 09:36:49 compute-0 systemd[1]: libpod-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope: Deactivated successfully.
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.091 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance shutdown successfully after 3 seconds.
Nov 22 09:36:49 compute-0 podman[371921]: 2025-11-22 09:36:49.096930109 +0000 UTC m=+0.125862461 container died cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.100 253665 DEBUG nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.102 253665 WARNING nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state shelving.
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.117 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.118 253665 DEBUG nova.objects.instance [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:49 compute-0 NetworkManager[48920]: <info>  [1763804209.1434] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.161 253665 DEBUG nova.compute.manager [None req-30a0a7c3-42b8-4be8-91c9-fd345d2368f0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2118a5a184395a8c9e4712c7d009d993e2f304960246506fc15d26ee155b6cdb-merged.mount: Deactivated successfully.
Nov 22 09:36:49 compute-0 podman[371921]: 2025-11-22 09:36:49.241483974 +0000 UTC m=+0.270416326 container cleanup cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:36:49 compute-0 systemd[1]: libpod-conmon-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope: Deactivated successfully.
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.253 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:49 compute-0 podman[371979]: 2025-11-22 09:36:49.335724468 +0000 UTC m=+0.061738046 container remove cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.341 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[323b42f7-9fae-4314-a5f8-617c1dd3ea58]: (4, ('Sat Nov 22 09:36:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a)\ncad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a\nSat Nov 22 09:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a)\ncad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e4726-6301-41e0-af6d-4190340aeb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.344 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:49 compute-0 kernel: tapa990966c-00: left promiscuous mode
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.370 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7370711-bfd4-4af1-bf61-49ee5ecaf073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.386 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f780253b-d0ec-4ed4-b3a2-8f07c82b7903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.389 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c72746cb-3d4f-41d3-b703-7b6a3b991e3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.400 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Beginning cold snapshot process
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[113f773d-a7b7-4858-8414-da8198954c0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698032, 'reachable_time': 15836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372054, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.410 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.410 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fc8142-8eaa-40d6-aff1-3fa45548359a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.412 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab unbound from our chassis
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.413 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37020e16-bbf7-4d46-a463-62f41acbbdab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07ed794e-be31-4a19-9c48-908792e92686]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.415 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace which is not needed anymore
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.471 253665 DEBUG nova.objects.instance [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.484 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.484 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Ensure instance console log exists: /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [WARNING]  (370388) : Exiting Master process...
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.559 253665 DEBUG nova.virt.libvirt.imagebackend [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [ALERT]    (370388) : Current worker (370390) exited with code 143 (Terminated)
Nov 22 09:36:49 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [WARNING]  (370388) : All workers exited. Exiting... (0)
Nov 22 09:36:49 compute-0 systemd[1]: libpod-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope: Deactivated successfully.
Nov 22 09:36:49 compute-0 podman[372090]: 2025-11-22 09:36:49.570293601 +0000 UTC m=+0.053369678 container died 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-59bd18a679f16bc779df4f96decf1dfaf90586c9dd2d247ab24ea9be9b34ce71-merged.mount: Deactivated successfully.
Nov 22 09:36:49 compute-0 podman[372090]: 2025-11-22 09:36:49.656587097 +0000 UTC m=+0.139663164 container cleanup 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:36:49 compute-0 systemd[1]: libpod-conmon-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope: Deactivated successfully.
Nov 22 09:36:49 compute-0 podman[372154]: 2025-11-22 09:36:49.719680086 +0000 UTC m=+0.040249762 container remove 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bed86c-d8fa-4794-b7a3-3c6479e95c39]: (4, ('Sat Nov 22 09:36:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7)\n743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7\nSat Nov 22 09:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7)\n743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73132327-e4db-4912-b8b2-c4af99a47f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.728 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:49 compute-0 ceph-mon[75021]: pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:49 compute-0 kernel: tap37020e16-b0: left promiscuous mode
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.753 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ec9256-e58d-40e8-922b-33131a937ce4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.773 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f67af7fe-c1d7-4d30-9238-9ef31369f776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.775 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e16c0b6f-93bb-496e-885d-a4c45709ecdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f065cf78-127d-4828-ab07-a6fadb646a1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703465, 'reachable_time': 19808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372175, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.797 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:49 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.798 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6f44f8ee-41ff-4be8-8459-d3c64d4d2d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:49 compute-0 nova_compute[253661]: 2025-11-22 09:36:49.821 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] creating snapshot(35d14d76d37749a08ddb60dfc3439544) on rbd image(cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:36:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:36:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d37020e16\x2dbbf7\x2d4d46\x2da463\x2d62f41acbbdab.mount: Deactivated successfully.
Nov 22 09:36:50 compute-0 podman[372194]: 2025-11-22 09:36:50.326185529 +0000 UTC m=+0.063137571 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:36:50 compute-0 podman[372195]: 2025-11-22 09:36:50.330648759 +0000 UTC m=+0.071638772 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 09:36:50 compute-0 nova_compute[253661]: 2025-11-22 09:36:50.708 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Successfully created port: 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:36:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 22 09:36:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 22 09:36:50 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 22 09:36:50 compute-0 ceph-mon[75021]: pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:36:50 compute-0 ceph-mon[75021]: osdmap e304: 3 total, 3 up, 3 in
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.194 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.195 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.195 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.197 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.197 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state suspended and task_state None.
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state suspended and task_state None.
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.242 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] cloning vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk@35d14d76d37749a08ddb60dfc3439544 to images/fd10acf7-7116-43c7-8e62-b2aed4e8d629 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.386 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] flattening images/fd10acf7-7116-43c7-8e62-b2aed4e8d629 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.853 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] removing snapshot(35d14d76d37749a08ddb60dfc3439544) on rbd image(cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.871 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Successfully updated port: 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.887 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.888 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.888 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 219 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Nov 22 09:36:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 22 09:36:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 22 09:36:51 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 22 09:36:51 compute-0 nova_compute[253661]: 2025-11-22 09:36:51.981 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] creating snapshot(snap) on rbd image(fd10acf7-7116-43c7-8e62-b2aed4e8d629) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.054 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.062 253665 INFO nova.compute.manager [None req-18f7055b-300e-4c42-8a39-6367c1536af7 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.233 253665 INFO nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Resuming
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.235 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:36:52
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.266 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.267 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.267 253665 DEBUG nova.network.neutron [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:36:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.924 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 22 09:36:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 22 09:36:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.954 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.955 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance network_info: |[{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:36:52 compute-0 ceph-mon[75021]: pgmap v2244: 305 pgs: 305 active+clean; 219 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Nov 22 09:36:52 compute-0 ceph-mon[75021]: osdmap e305: 3 total, 3 up, 3 in
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.959 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start _get_guest_xml network_info=[{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.965 253665 WARNING nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.972 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.973 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.978 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.978 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:36:52 compute-0 nova_compute[253661]: 2025-11-22 09:36:52.983 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:36:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479172189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.492 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.520 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.524 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.810 253665 DEBUG nova.network.neutron [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.828 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.834 253665 DEBUG nova.virt.libvirt.vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:49Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.835 253665 DEBUG nova.network.os_vif_util [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.835 253665 DEBUG nova.network.os_vif_util [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.836 253665 DEBUG os_vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4a3cf1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4a3cf1b-5c, col_values=(('external_ids', {'iface-id': 'f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:c2:c9', 'vm-uuid': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.842 253665 INFO os_vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.862 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 195 op/s
Nov 22 09:36:53 compute-0 kernel: tapf4a3cf1b-5c: entered promiscuous mode
Nov 22 09:36:53 compute-0 ovn_controller[152872]: 2025-11-22T09:36:53Z|01160|binding|INFO|Claiming lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for this chassis.
Nov 22 09:36:53 compute-0 ovn_controller[152872]: 2025-11-22T09:36:53Z|01161|binding|INFO|f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b: Claiming fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:53 compute-0 NetworkManager[48920]: <info>  [1763804213.9408] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.947 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.948 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab bound to our chassis
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.950 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:53 compute-0 ovn_controller[152872]: 2025-11-22T09:36:53Z|01162|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b ovn-installed in OVS
Nov 22 09:36:53 compute-0 ovn_controller[152872]: 2025-11-22T09:36:53Z|01163|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b up in Southbound
Nov 22 09:36:53 compute-0 nova_compute[253661]: 2025-11-22 09:36:53.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5712c83-ee75-474b-b1d0-75dc93cd0893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.968 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37020e16-b1 in ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.971 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37020e16-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.971 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf66f9a-7dba-4f35-a1bb-89f35f9e598e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4b984b-39a8-469e-acc8-e259b322349d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:53 compute-0 ceph-mon[75021]: osdmap e306: 3 total, 3 up, 3 in
Nov 22 09:36:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1479172189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:53 compute-0 systemd-machined[215941]: New machine qemu-142-instance-00000071.
Nov 22 09:36:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.986 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ff37fc9f-20b9-4ec2-b825-36c51ca7f959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:36:53 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Nov 22 09:36:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841748321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6710001-1a42-4e07-ad02-fb837135808a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 systemd-udevd[372406]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.023 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.025 253665 DEBUG nova.virt.libvirt.vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:48Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.025 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.026 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.0281] device (tapf4a3cf1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.0289] device (tapf4a3cf1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.029 253665 DEBUG nova.objects.instance [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.036 253665 DEBUG nova.compute.manager [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.036 253665 DEBUG nova.compute.manager [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.042 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6746c887-aafd-427e-abb5-d44c00f351e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 systemd-udevd[372410]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.0488] manager: (tap37020e16-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/477)
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[044b8fab-0684-4e82-b4fa-4e3778ab1fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.058 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <uuid>1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</uuid>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <name>instance-00000072</name>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-628839971</nova:name>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:36:52</nova:creationTime>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <nova:port uuid="22b006cb-c06d-4ebb-9f02-ccbbdfc34f26">
Nov 22 09:36:54 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <system>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="serial">1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="uuid">1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </system>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <os>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </os>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <features>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </features>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk">
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config">
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </source>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:36:54 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c9:83:85"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <target dev="tap22b006cb-c0"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/console.log" append="off"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <video>
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </video>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:36:54 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:36:54 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:36:54 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:36:54 compute-0 nova_compute[253661]: </domain>
Nov 22 09:36:54 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.060 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Preparing to wait for external event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.062 253665 DEBUG nova.virt.libvirt.vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:48Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.062 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.063 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.063 253665 DEBUG os_vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.067 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.068 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b006cb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22b006cb-c0, col_values=(('external_ids', {'iface-id': '22b006cb-c06d-4ebb-9f02-ccbbdfc34f26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:83:85', 'vm-uuid': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.0776] manager: (tap22b006cb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.079 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.085 253665 INFO os_vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0')
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.085 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2f28cbed-a599-4c7d-84aa-e931b1aa6ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.089 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2c0935-9118-481f-985f-dba8a7277788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.1221] device (tap37020e16-b0): carrier: link connected
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.130 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ddb6a6-727e-45b9-a701-fc66d5374109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.135 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.136 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.136 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:c9:83:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.137 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Using config drive
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[547c0d9a-4110-4410-a8cc-d2b4bb8ab5fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706486, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372443, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.166 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.179 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[803bcdaa-3eb6-491e-9d1c-777e99deee7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:87fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706486, 'tstamp': 706486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372462, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.208 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be2921d1-32ee-49c4-a175-c4b2d8b644a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706486, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372463, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[995043fc-8e6a-4df1-9e8d-cb2579cc2c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b563278-c386-498b-87b0-7052dbb67f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.331 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37020e16-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 NetworkManager[48920]: <info>  [1763804214.3344] manager: (tap37020e16-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Nov 22 09:36:54 compute-0 kernel: tap37020e16-b0: entered promiscuous mode
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.343 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37020e16-b0, col_values=(('external_ids', {'iface-id': 'fc048c06-919a-46ba-ac90-0356d56c12a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 ovn_controller[152872]: 2025-11-22T09:36:54Z|01164|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.371 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fade5d8d-9796-43e9-b1dc-7994adb50b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.373 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:36:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.374 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'env', 'PROCESS_TAG=haproxy-37020e16-bbf7-4d46-a463-62f41acbbdab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37020e16-bbf7-4d46-a463-62f41acbbdab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.479 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Snapshot image upload complete
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.480 253665 DEBUG nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.521 253665 INFO nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shelve offloading
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.532 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.532 253665 DEBUG nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.534 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.535 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.535 253665 DEBUG nova.network.neutron [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.627 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 750659ed-67e0-44d4-a5b3-b8d0029ffa7e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.628 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804214.6266155, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.628 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Started (Lifecycle Event)
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.647 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.665 253665 DEBUG nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.666 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.670 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.680 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance running successfully.
Nov 22 09:36:54 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.683 253665 DEBUG nova.virt.libvirt.guest [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.684 253665 DEBUG nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804214.6316602, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Resumed (Lifecycle Event)
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.706 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.709 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:54 compute-0 nova_compute[253661]: 2025-11-22 09:36:54.737 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:36:54 compute-0 podman[372549]: 2025-11-22 09:36:54.827702855 +0000 UTC m=+0.066697690 container create 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:36:54 compute-0 systemd[1]: Started libpod-conmon-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope.
Nov 22 09:36:54 compute-0 podman[372549]: 2025-11-22 09:36:54.799294899 +0000 UTC m=+0.038289754 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:36:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba967c037ed38e7577ecc1bb77c57e590063a454084161df85b4b7d476e334b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:54 compute-0 podman[372549]: 2025-11-22 09:36:54.925802005 +0000 UTC m=+0.164796870 container init 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:36:54 compute-0 podman[372549]: 2025-11-22 09:36:54.931901496 +0000 UTC m=+0.170896331 container start 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:36:54 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : New worker (372570) forked
Nov 22 09:36:54 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : Loading success.
Nov 22 09:36:54 compute-0 ceph-mon[75021]: pgmap v2247: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 195 op/s
Nov 22 09:36:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/841748321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.673 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating config drive at /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.679 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps46b75wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.837 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps46b75wm" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.866 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.870 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 193 op/s
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.940 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.940 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:55 compute-0 nova_compute[253661]: 2025-11-22 09:36:55.952 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.070 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.070 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.078 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.078 253665 INFO nova.compute.claims [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.084 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.085 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deleting local config drive /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config because it was imported into RBD.
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.103 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.103 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.127 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:56 compute-0 kernel: tap22b006cb-c0: entered promiscuous mode
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.1478] manager: (tap22b006cb-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/480)
Nov 22 09:36:56 compute-0 systemd-udevd[372429]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:36:56 compute-0 ovn_controller[152872]: 2025-11-22T09:36:56Z|01165|binding|INFO|Claiming lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for this chassis.
Nov 22 09:36:56 compute-0 ovn_controller[152872]: 2025-11-22T09:36:56Z|01166|binding|INFO|22b006cb-c06d-4ebb-9f02-ccbbdfc34f26: Claiming fa:16:3e:c9:83:85 10.100.0.8
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.165 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:83:85 10.100.0.8'], port_security=['fa:16:3e:c9:83:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '714d001a-9857-4892-9e43-4add0015169f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.167 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 bound to our chassis
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.169 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.1729] device (tap22b006cb-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.1737] device (tap22b006cb-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:36:56 compute-0 ovn_controller[152872]: 2025-11-22T09:36:56Z|01167|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 ovn-installed in OVS
Nov 22 09:36:56 compute-0 ovn_controller[152872]: 2025-11-22T09:36:56Z|01168|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 up in Southbound
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea876067-6195-4a8d-8a85-1339488746c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.187 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap669fa85d-71 in ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap669fa85d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69350cbd-a8f9-4525-8aa7-0aa894488bfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a00b899-2a57-454f-a8db-9ffc67dd66c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 systemd-machined[215941]: New machine qemu-143-instance-00000072.
Nov 22 09:36:56 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.203 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9e75ba39-2988-4f62-b353-8fcc47711678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5faa5112-ed65-498d-908a-76a44a5719d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.240 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.260 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd787000-c16e-4fc3-9507-0b43a50c865c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.2685] manager: (tap669fa85d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/481)
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.267 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38814e-ec28-4407-a4e6-e18dcae5f47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 podman[372620]: 2025-11-22 09:36:56.284944285 +0000 UTC m=+0.159996010 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.300 253665 DEBUG nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.300 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.301 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.301 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.302 253665 DEBUG nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.302 253665 WARNING nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state None.
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.304 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1f4eaf-c62a-4114-b28f-87a33c852704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.308 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[66513390-df3c-46cb-9765-a234af838515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.3408] device (tap669fa85d-70): carrier: link connected
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.351 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[470d50f3-9c19-4009-90ab-dc665fec7f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.370 253665 DEBUG nova.compute.manager [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.370 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.371 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.372 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.372 253665 DEBUG nova.compute.manager [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Processing event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6238c5a1-09b8-4f48-92ac-4c7f1a54cbfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372680, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[447daf92-dff5-4784-9098-83b271f63087]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:cbce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706707, 'tstamp': 706707}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372691, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.419 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8963aec1-9759-4629-89b0-0233dd43abf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372701, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.459 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7523d0-51e8-4e1e-966b-e1fea58a3a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4581a881-1efb-4e1c-9eb4-4fca3fe01a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.544 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.544 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.545 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:56 compute-0 NetworkManager[48920]: <info>  [1763804216.5481] manager: (tap669fa85d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 kernel: tap669fa85d-70: entered promiscuous mode
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:56 compute-0 ovn_controller[152872]: 2025-11-22T09:36:56Z|01169|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.592 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.593 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd27346a-aeba-485c-9ce0-c1d3ac52e950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.594 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:36:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.594 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'env', 'PROCESS_TAG=haproxy-669fa85d-7478-40e5-958b-7300ef3552b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/669fa85d-7478-40e5-958b-7300ef3552b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:36:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:36:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:36:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:36:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:36:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:36:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:36:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462887379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.761 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.770 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.7696567, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.770 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Started (Lifecycle Event)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.773 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.778 253665 DEBUG nova.compute.provider_tree [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.782 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.786 253665 INFO nova.virt.libvirt.driver [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance spawned successfully.
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.786 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.804 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.806 253665 DEBUG nova.scheduler.client.report [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.816 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.820 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.822 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.822 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.851 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.852 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.855 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.770867, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Paused (Lifecycle Event)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.886 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.891 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.7811797, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.891 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Resumed (Lifecycle Event)
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.909 253665 INFO nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 8.47 seconds to spawn the instance on the hypervisor.
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.910 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.911 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.915 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.916 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.924 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.967 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.970 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:36:56 compute-0 nova_compute[253661]: 2025-11-22 09:36:56.993 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.026 253665 INFO nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 9.63 seconds to build instance.
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.049 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:57 compute-0 podman[372781]: 2025-11-22 09:36:56.979273442 +0000 UTC m=+0.030090920 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.082 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.084 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.084 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating image(s)
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.108 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:57 compute-0 ceph-mon[75021]: pgmap v2248: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 193 op/s
Nov 22 09:36:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1462887379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.331 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.363 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.368 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.418 253665 DEBUG nova.policy [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.422 253665 DEBUG nova.network.neutron [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.439 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.465 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.466 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.467 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.468 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.497 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:36:57 compute-0 nova_compute[253661]: 2025-11-22 09:36:57.503 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2837c740-6ce1-47d5-ad27-107211f74db7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:36:57 compute-0 podman[372781]: 2025-11-22 09:36:57.539540825 +0000 UTC m=+0.590358223 container create 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:36:57 compute-0 systemd[1]: Started libpod-conmon-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope.
Nov 22 09:36:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 325 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.6 MiB/s wr, 198 op/s
Nov 22 09:36:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d1ba9fa3eb3ce0aa01e1a76d7c9a2d999425c99d5548d6568348e14d15459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:36:57 compute-0 podman[372781]: 2025-11-22 09:36:57.97444656 +0000 UTC m=+1.025263978 container init 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:36:57 compute-0 podman[372781]: 2025-11-22 09:36:57.981924827 +0000 UTC m=+1.032742225 container start 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 09:36:58 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : New worker (372893) forked
Nov 22 09:36:58 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : Loading success.
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.153 253665 INFO nova.compute.manager [None req-7a9e7e64-a80c-419c-9955-05b15582d79f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.160 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.308 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2837c740-6ce1-47d5-ad27-107211f74db7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.386 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.425 253665 DEBUG nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.427 253665 WARNING nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state None.
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.518 253665 DEBUG nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.518 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.520 253665 WARNING nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received unexpected event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with vm_state active and task_state None.
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.525 253665 DEBUG nova.objects.instance [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.538 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.539 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Ensure instance console log exists: /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.539 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.540 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.540 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:36:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 22 09:36:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 22 09:36:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.780 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully created port: 18df29f5-368d-4b94-ac69-8541de164d02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.793 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.793 253665 DEBUG nova.objects.instance [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.813 253665 DEBUG nova.virt.libvirt.vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:49Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.814 253665 DEBUG nova.network.os_vif_util [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.815 253665 DEBUG nova.network.os_vif_util [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.815 253665 DEBUG os_vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc027d879-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:58 compute-0 nova_compute[253661]: 2025-11-22 09:36:58.824 253665 INFO os_vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.065 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.068 253665 INFO nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Terminating instance
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.069 253665 DEBUG nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:36:59 compute-0 kernel: tapf4a3cf1b-5c (unregistering): left promiscuous mode
Nov 22 09:36:59 compute-0 NetworkManager[48920]: <info>  [1763804219.1664] device (tapf4a3cf1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:36:59 compute-0 ovn_controller[152872]: 2025-11-22T09:36:59Z|01170|binding|INFO|Releasing lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b from this chassis (sb_readonly=0)
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 ovn_controller[152872]: 2025-11-22T09:36:59Z|01171|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b down in Southbound
Nov 22 09:36:59 compute-0 ovn_controller[152872]: 2025-11-22T09:36:59Z|01172|binding|INFO|Removing iface tapf4a3cf1b-5c ovn-installed in OVS
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.194 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.196 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab unbound from our chassis
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.198 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37020e16-bbf7-4d46-a463-62f41acbbdab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0cc854-0616-462b-a17b-4d523acf30ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace which is not needed anymore
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.238 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully created port: a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:36:59 compute-0 systemd-machined[215941]: Machine qemu-142-instance-00000071 terminated.
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.311 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance destroyed successfully.
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.312 253665 DEBUG nova.objects.instance [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.323 253665 DEBUG nova.virt.libvirt.vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.324 253665 DEBUG nova.network.os_vif_util [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.324 253665 DEBUG nova.network.os_vif_util [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.325 253665 DEBUG os_vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.327 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4a3cf1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.340 253665 INFO os_vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')
Nov 22 09:36:59 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : haproxy version is 2.8.14-c23fe91
Nov 22 09:36:59 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : path to executable is /usr/sbin/haproxy
Nov 22 09:36:59 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [WARNING]  (372568) : Exiting Master process...
Nov 22 09:36:59 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [ALERT]    (372568) : Current worker (372570) exited with code 143 (Terminated)
Nov 22 09:36:59 compute-0 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [WARNING]  (372568) : All workers exited. Exiting... (0)
Nov 22 09:36:59 compute-0 systemd[1]: libpod-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope: Deactivated successfully.
Nov 22 09:36:59 compute-0 podman[373024]: 2025-11-22 09:36:59.403836077 +0000 UTC m=+0.075692733 container died 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b-userdata-shm.mount: Deactivated successfully.
Nov 22 09:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bba967c037ed38e7577ecc1bb77c57e590063a454084161df85b4b7d476e334b-merged.mount: Deactivated successfully.
Nov 22 09:36:59 compute-0 podman[373024]: 2025-11-22 09:36:59.515561026 +0000 UTC m=+0.187417672 container cleanup 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:36:59 compute-0 systemd[1]: libpod-conmon-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope: Deactivated successfully.
Nov 22 09:36:59 compute-0 podman[373073]: 2025-11-22 09:36:59.613602384 +0000 UTC m=+0.068605067 container remove 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9516e90-c2db-44e6-9df5-f94785990259]: (4, ('Sat Nov 22 09:36:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b)\n7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b\nSat Nov 22 09:36:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b)\n7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.630 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[75e91f5d-b119-44ac-b642-a8e172294467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.631 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 kernel: tap37020e16-b0: left promiscuous mode
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.673 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2288b465-d72a-4446-9455-e9a1229ce91d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 nova_compute[253661]: 2025-11-22 09:36:59.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.689 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32b251a9-32cb-42eb-b7d6-1a56fb90f55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.690 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82615098-8fa4-4fec-955b-46b171dbc2e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08c5df62-70db-4fe9-9a88-df1ff1c20730]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706477, 'reachable_time': 37432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373088, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.708 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:36:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.708 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[87765525-730a-4db1-bb4a-ce1e1b444b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:36:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d37020e16\x2dbbf7\x2d4d46\x2da463\x2d62f41acbbdab.mount: Deactivated successfully.
Nov 22 09:36:59 compute-0 ceph-mon[75021]: pgmap v2249: 305 pgs: 305 active+clean; 325 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.6 MiB/s wr, 198 op/s
Nov 22 09:36:59 compute-0 ceph-mon[75021]: osdmap e307: 3 total, 3 up, 3 in
Nov 22 09:36:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 336 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.4 MiB/s wr, 270 op/s
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.061 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully updated port: 18df29f5-368d-4b94-ac69-8541de164d02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.162 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting instance files /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.164 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deletion of /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del complete
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.348 253665 INFO nova.scheduler.client.report [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance cf5e117a-f203-4c8f-b795-01fb355ca5e8
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.383 253665 INFO nova.virt.libvirt.driver [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deleting instance files /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_del
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.384 253665 INFO nova.virt.libvirt.driver [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deletion of /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_del complete
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.391 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.391 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.448 253665 INFO nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 1.38 seconds to destroy the instance on the hypervisor.
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.449 253665 DEBUG oslo.service.loopingcall [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.450 253665 DEBUG nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.450 253665 DEBUG nova.network.neutron [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.506 253665 DEBUG oslo_concurrency.processutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.563 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.564 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.564 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.565 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.565 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.606 253665 DEBUG nova.compute.manager [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.607 253665 DEBUG nova.compute.manager [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.607 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.608 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.608 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.787 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:00 compute-0 ceph-mon[75021]: pgmap v2251: 305 pgs: 305 active+clean; 336 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.4 MiB/s wr, 270 op/s
Nov 22 09:37:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694703152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:00 compute-0 nova_compute[253661]: 2025-11-22 09:37:00.994 253665 DEBUG oslo_concurrency.processutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.001 253665 DEBUG nova.compute.provider_tree [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.017 253665 DEBUG nova.scheduler.client.report [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.045 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.106 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.310 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully updated port: a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.329 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.332 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.565 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.800 253665 DEBUG nova.network.neutron [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.839 253665 INFO nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 1.39 seconds to deallocate network for instance.
Nov 22 09:37:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2694703152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.898 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:01 compute-0 nova_compute[253661]: 2025-11-22 09:37:01.898 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 316 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 286 op/s
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.049 253665 DEBUG oslo_concurrency.processutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.294 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated VIF entry in instance network info cache for port c027d879-91b3-497d-9f51-8476006ea65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.294 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapc027d879-91", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.315 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.475 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769277003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.522 253665 DEBUG oslo_concurrency.processutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.529 253665 DEBUG nova.compute.provider_tree [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.547 253665 DEBUG nova.scheduler.client.report [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.580 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.634 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.637 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.642 253665 INFO nova.scheduler.client.report [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 750659ed-67e0-44d4-a5b3-b8d0029ffa7e
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.724 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.752 253665 DEBUG nova.compute.manager [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.753 253665 DEBUG nova.compute.manager [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.753 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.842 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:02 compute-0 ceph-mon[75021]: pgmap v2252: 305 pgs: 305 active+clean; 316 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 286 op/s
Nov 22 09:37:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/769277003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017212732792863188 of space, bias 1.0, pg target 0.5163819837858956 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014231600410236375 of space, bias 1.0, pg target 0.4269480123070913 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.860 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.861 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.861 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:02 compute-0 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 WARNING nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state deleting.
Nov 22 09:37:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.066 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.086 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804209.0854855, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.087 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Stopped (Lifecycle Event)
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.089 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.089 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance network_info: |[{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.090 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.090 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.093 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start _get_guest_xml network_info=[{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.098 253665 WARNING nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.103 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.103 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.110 253665 DEBUG nova.compute.manager [None req-3c260157-268b-4753-ba7b-9e82e8ee55ba - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.116 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.116 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.123 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.486 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.488 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.514 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.515 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-deleted-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255243496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.600 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.622 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:04 compute-0 nova_compute[253661]: 2025-11-22 09:37:04.626 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:04 compute-0 ceph-mon[75021]: pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 09:37:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/255243496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602858101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.086 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.087 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.088 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.088 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.089 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.089 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.090 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.091 253665 DEBUG nova.objects.instance [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 INFO nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Unshelving
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.106 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <uuid>2837c740-6ce1-47d5-ad27-107211f74db7</uuid>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <name>instance-00000073</name>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1413808402</nova:name>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:37:04</nova:creationTime>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:port uuid="18df29f5-368d-4b94-ac69-8541de164d02">
Nov 22 09:37:05 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <nova:port uuid="a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de">
Nov 22 09:37:05 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:fd83" ipVersion="6"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <system>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="serial">2837c740-6ce1-47d5-ad27-107211f74db7</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="uuid">2837c740-6ce1-47d5-ad27-107211f74db7</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </system>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <os>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </os>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <features>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </features>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2837c740-6ce1-47d5-ad27-107211f74db7_disk">
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2837c740-6ce1-47d5-ad27-107211f74db7_disk.config">
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:05 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:90:34:a1"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <target dev="tap18df29f5-36"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:9d:fd:83"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <target dev="tapa8c9a54f-9f"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/console.log" append="off"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <video>
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </video>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:37:05 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:37:05 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:37:05 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:37:05 compute-0 nova_compute[253661]: </domain>
Nov 22 09:37:05 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Preparing to wait for external event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Preparing to wait for external event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.109 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.109 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.110 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.111 253665 DEBUG os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.112 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.113 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18df29f5-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18df29f5-36, col_values=(('external_ids', {'iface-id': '18df29f5-368d-4b94-ac69-8541de164d02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:34:a1', 'vm-uuid': '2837c740-6ce1-47d5-ad27-107211f74db7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 NetworkManager[48920]: <info>  [1763804225.1257] manager: (tap18df29f5-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.133 253665 INFO os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36')
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.134 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.134 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.135 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.135 253665 DEBUG os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.140 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c9a54f-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.140 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c9a54f-9f, col_values=(('external_ids', {'iface-id': 'a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:fd:83', 'vm-uuid': '2837c740-6ce1-47d5-ad27-107211f74db7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 NetworkManager[48920]: <info>  [1763804225.1425] manager: (tapa8c9a54f-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.152 253665 INFO os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f')
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.183 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.183 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.188 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_requests' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.200 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:90:34:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.207 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:9d:fd:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.207 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Using config drive
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.233 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.255 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.255 253665 INFO nova.compute.claims [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.378 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32284843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.903 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.910 253665 DEBUG nova.compute.provider_tree [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.924 253665 DEBUG nova.scheduler.client.report [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:05 compute-0 nova_compute[253661]: 2025-11-22 09:37:05.942 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1602858101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/32284843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.171 253665 INFO nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating port c027d879-91b3-497d-9f51-8476006ea65c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.194 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating config drive at /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.201 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa6ahe7y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.247 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updated VIF entry in instance network info cache for port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.249 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.271 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.352 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa6ahe7y" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.385 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:06 compute-0 nova_compute[253661]: 2025-11-22 09:37:06.391 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:07 compute-0 ceph-mon[75021]: pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.142 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.144 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deleting local config drive /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config because it was imported into RBD.
Nov 22 09:37:07 compute-0 kernel: tap18df29f5-36: entered promiscuous mode
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2062] manager: (tap18df29f5-36): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01173|binding|INFO|Claiming lport 18df29f5-368d-4b94-ac69-8541de164d02 for this chassis.
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01174|binding|INFO|18df29f5-368d-4b94-ac69-8541de164d02: Claiming fa:16:3e:90:34:a1 10.100.0.7
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.212 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.212 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.213 253665 DEBUG nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2219] manager: (tapa8c9a54f-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.223 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:34:a1 10.100.0.7'], port_security=['fa:16:3e:90:34:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=18df29f5-368d-4b94-ac69-8541de164d02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.224 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 18df29f5-368d-4b94-ac69-8541de164d02 in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 bound to our chassis
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.226 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 09:37:07 compute-0 kernel: tapa8c9a54f-9f: entered promiscuous mode
Nov 22 09:37:07 compute-0 systemd-udevd[373298]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01175|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 ovn-installed in OVS
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01176|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 up in Southbound
Nov 22 09:37:07 compute-0 systemd-udevd[373297]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b17d856-e687-464a-bf51-73ee7c012534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.248 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a3f352-91 in ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01177|if_status|INFO|Dropped 4 log messages in last 1292 seconds (most recently, 1292 seconds ago) due to excessive rate
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01178|if_status|INFO|Not updating pb chassis for a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de now as sb is readonly
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01179|binding|INFO|Claiming lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for this chassis.
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01180|binding|INFO|a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de: Claiming fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.251 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a3f352-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.251 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e609bd03-51f9-43fa-ac38-89cbeb395a5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abcc3ebd-9aa6-432d-ac1f-0cabacffac2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2593] device (tap18df29f5-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2602] device (tap18df29f5-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2623] device (tapa8c9a54f-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.2629] device (tapa8c9a54f-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.263 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], port_security=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:fd83/64', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01181|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de ovn-installed in OVS
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01182|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de up in Southbound
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.275 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f35898d5-a116-41b8-a255-4abb94eff7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 systemd-machined[215941]: New machine qemu-144-instance-00000073.
Nov 22 09:37:07 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.297 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c38523c-d527-436e-bf1c-53f72bc08db2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG nova.compute.manager [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG nova.compute.manager [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.336 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf4d9f-e99a-4818-89b8-09bb87127601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74fde671-d136-4309-a7aa-cf6dbcdf1383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.3447] manager: (tapa1a3f352-90): new Veth device (/org/freedesktop/NetworkManager/Devices/487)
Nov 22 09:37:07 compute-0 systemd-udevd[373303]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c393b1b0-a03d-4615-861c-1d705ac0b9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.393 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8e975170-9b63-4b2c-9bed-5ed18e426dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.4207] device (tapa1a3f352-90): carrier: link connected
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf4f569-e2b2-4b83-aa98-f3fa62e17273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.455 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5b5d13-59a9-40d2-9c98-ce720ff01313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373333, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c43f2c7c-203c-47b2-9fd8-ab60b1eac7cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:dc76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707815, 'tstamp': 707815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373334, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01183|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6a4805-91a6-4844-83fd-ca89804b8fde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373335, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.587 253665 DEBUG nova.compute.manager [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.589 253665 DEBUG nova.compute.manager [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Processing event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.607 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[377ef26b-4a5f-434e-bb4b-0f823009d119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.690 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97488eea-1b21-40a1-ac4b-76b65bcb6b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.692 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.693 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.693 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:07 compute-0 kernel: tapa1a3f352-90: entered promiscuous mode
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 NetworkManager[48920]: <info>  [1763804227.6961] manager: (tapa1a3f352-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.697 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:07 compute-0 ovn_controller[152872]: 2025-11-22T09:37:07Z|01184|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.715 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.716 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b6bcfe-33f9-4491-93c1-b68222541092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.717 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:37:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.718 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'env', 'PROCESS_TAG=haproxy-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a3f352-95a9-4122-aecd-94a4bbf79683.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.837 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804227.8365903, 2837c740-6ce1-47d5-ad27-107211f74db7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.838 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Started (Lifecycle Event)
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.852 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804227.8367171, 2837c740-6ce1-47d5-ad27-107211f74db7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Paused (Lifecycle Event)
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.870 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:07 compute-0 nova_compute[253661]: 2025-11-22 09:37:07.888 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 22 09:37:08 compute-0 podman[373410]: 2025-11-22 09:37:08.125030332 +0000 UTC m=+0.075301404 container create aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:37:08 compute-0 podman[373410]: 2025-11-22 09:37:08.076378922 +0000 UTC m=+0.026650014 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:37:08 compute-0 systemd[1]: Started libpod-conmon-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope.
Nov 22 09:37:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871bb839a73d656755f89d39a1db17cd9b58ff25f5a2a0710db15a0e02acd3f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:08 compute-0 podman[373410]: 2025-11-22 09:37:08.223148632 +0000 UTC m=+0.173419724 container init aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:37:08 compute-0 podman[373410]: 2025-11-22 09:37:08.229616993 +0000 UTC m=+0.179888095 container start aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:37:08 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : New worker (373431) forked
Nov 22 09:37:08 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : Loading success.
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.305 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.307 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38ec0988-b759-4afb-8da5-89c7233060ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.326 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc883e14c-a1 in ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.329 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc883e14c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cfbe8b-d48e-40d4-aec9-952a461dae9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[014944af-72d5-4995-89bf-fc9b10acb219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.345 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bd315248-7924-4956-9e8e-ae0705f3b1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.362 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20ce1cd1-d4ee-4640-a03b-bc06263baec9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d465991d-04b6-41b5-8573-35fa407c822a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 systemd-udevd[373321]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:08 compute-0 NetworkManager[48920]: <info>  [1763804228.4065] manager: (tapc883e14c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/489)
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.405 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25992086-cd0e-42a6-8652-8e12c606ff8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.447 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[369714c2-f60f-4c43-aaf2-6bcd3c0c25d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.450 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb7f12-6930-4750-8bcb-96b4aaed6894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 NetworkManager[48920]: <info>  [1763804228.4809] device (tapc883e14c-a0): carrier: link connected
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.488 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea02f3-9165-44e5-945d-533a409e7b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.516 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5c33a1-8d97-4b91-a8d0-66b0aabc137a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373450, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fab949b-c28b-490c-98b2-54b7d60d03e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d1f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707921, 'tstamp': 707921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373451, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[151be39c-670e-4fbf-812c-dd740dc4cc49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373452, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb807c2-02d6-4634-8688-2c60458800aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f6145-6201-43a5-bb6f-ec7dd478b3a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:08 compute-0 kernel: tapc883e14c-a0: entered promiscuous mode
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:08 compute-0 NetworkManager[48920]: <info>  [1763804228.7076] manager: (tapc883e14c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.712 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:08 compute-0 ovn_controller[152872]: 2025-11-22T09:37:08Z|01185|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 09:37:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.736 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f87c52cf-f83a-4d3c-9933-f51162d69c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.739 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:37:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.739 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'env', 'PROCESS_TAG=haproxy-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c883e14c-ad7e-49eb-b0c3-2571140d1e57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.919 253665 DEBUG nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.941 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.943 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.943 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating image(s)
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.968 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.973 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.975 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:08 compute-0 nova_compute[253661]: 2025-11-22 09:37:08.975 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.020 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.046 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.051 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "ae26e73adffe36046883b3f1778c799a83ef0b41" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.052 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "ae26e73adffe36046883b3f1778c799a83ef0b41" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:09 compute-0 ceph-mon[75021]: pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 22 09:37:09 compute-0 podman[373537]: 2025-11-22 09:37:09.162928823 +0000 UTC m=+0.068619268 container create e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:37:09 compute-0 systemd[1]: Started libpod-conmon-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope.
Nov 22 09:37:09 compute-0 podman[373537]: 2025-11-22 09:37:09.120105928 +0000 UTC m=+0.025796393 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:37:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8abda532727a5eec38dc91090bcebb9f55c2e54221da7f33b18490782350886/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:09 compute-0 podman[373537]: 2025-11-22 09:37:09.277770259 +0000 UTC m=+0.183460744 container init e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 09:37:09 compute-0 podman[373537]: 2025-11-22 09:37:09.286002843 +0000 UTC m=+0.191693288 container start e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:37:09 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : New worker (373559) forked
Nov 22 09:37:09 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : Loading success.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Processing event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.442 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.443 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.443 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.444 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.444 253665 WARNING nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with vm_state building and task_state spawning.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.446 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.449 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804229.4491794, 2837c740-6ce1-47d5-ad27-107211f74db7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.449 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Resumed (Lifecycle Event)
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.451 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.455 253665 INFO nova.virt.libvirt.driver [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance spawned successfully.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.455 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.482 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.486 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.488 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.488 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.492 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.501 253665 DEBUG nova.virt.libvirt.imagebackend [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.558 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.576 253665 INFO nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 12.49 seconds to spawn the instance on the hypervisor.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.576 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.584 253665 DEBUG nova.virt.libvirt.imagebackend [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.585 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] cloning images/fd10acf7-7116-43c7-8e62-b2aed4e8d629@snap to None/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.662 253665 DEBUG nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.662 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 WARNING nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with vm_state building and task_state spawning.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.707 253665 INFO nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 13.71 seconds to build instance.
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.726 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.753 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "ae26e73adffe36046883b3f1778c799a83ef0b41" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 1.4 MiB/s wr, 109 op/s
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.930 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:09 compute-0 nova_compute[253661]: 2025-11-22 09:37:09.994 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] flattening vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.388 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Image rbd:vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.389 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.389 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ensure instance console log exists: /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.392 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start _get_guest_xml network_info=[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:36:45Z,direct_url=<?>,disk_format='raw',id=fd10acf7-7116-43c7-8e62-b2aed4e8d629,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-627235813-shelved',owner='a31947dfacfc450ba028c42968f103b2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:36:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.399 253665 WARNING nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.405 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.406 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.410 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.412 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:36:45Z,direct_url=<?>,disk_format='raw',id=fd10acf7-7116-43c7-8e62-b2aed4e8d629,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-627235813-shelved',owner='a31947dfacfc450ba028c42968f103b2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:36:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.415 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.415 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.435 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.819 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated VIF entry in instance network info cache for port c027d879-91b3-497d-9f51-8476006ea65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.821 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.834 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249608731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:10 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.938 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.969 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:10 compute-0 nova_compute[253661]: 2025-11-22 09:37:10.978 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:11 compute-0 ceph-mon[75021]: pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 1.4 MiB/s wr, 109 op/s
Nov 22 09:37:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/249608731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62197803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.482 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.484 253665 DEBUG nova.virt.libvirt.vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='fd10acf7-7116-43c7-8e62-b2aed4e8d629',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:05Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.484 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.485 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.486 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.499 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <uuid>cf5e117a-f203-4c8f-b795-01fb355ca5e8</uuid>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <name>instance-0000006d</name>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersNegativeTestJSON-server-627235813</nova:name>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:37:10</nova:creationTime>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="fd10acf7-7116-43c7-8e62-b2aed4e8d629"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <nova:port uuid="c027d879-91b3-497d-9f51-8476006ea65c">
Nov 22 09:37:11 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <system>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="serial">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="uuid">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </system>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <os>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </os>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <features>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </features>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk">
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config">
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:d9:42:5a"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <target dev="tapc027d879-91"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log" append="off"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <video>
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </video>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:37:11 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:37:11 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:37:11 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:37:11 compute-0 nova_compute[253661]: </domain>
Nov 22 09:37:11 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.500 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Preparing to wait for external event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.500 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG nova.virt.libvirt.vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='fd10acf7-7116-43c7-8e62-b2aed4e8d629',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:05Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.502 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.502 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.503 253665 DEBUG os_vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.504 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.505 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:11 compute-0 NetworkManager[48920]: <info>  [1763804231.5111] manager: (tapc027d879-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.520 253665 INFO os_vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')
Nov 22 09:37:11 compute-0 ovn_controller[152872]: 2025-11-22T09:37:11Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:83:85 10.100.0.8
Nov 22 09:37:11 compute-0 ovn_controller[152872]: 2025-11-22T09:37:11Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:83:85 10.100.0.8
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:d9:42:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.589 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Using config drive
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.615 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.641 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:11 compute-0 nova_compute[253661]: 2025-11-22 09:37:11.699 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'keypairs' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 230 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Nov 22 09:37:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/62197803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.296 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating config drive at /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.304 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0djoiahi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:37:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:37:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:37:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.471 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0djoiahi" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.510 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.514 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.627 253665 DEBUG nova.compute.manager [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.629 253665 DEBUG nova.compute.manager [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.698 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.699 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting local config drive /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config because it was imported into RBD.
Nov 22 09:37:12 compute-0 NetworkManager[48920]: <info>  [1763804232.7550] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/492)
Nov 22 09:37:12 compute-0 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 09:37:12 compute-0 ovn_controller[152872]: 2025-11-22T09:37:12Z|01186|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 09:37:12 compute-0 ovn_controller[152872]: 2025-11-22T09:37:12Z|01187|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.764 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.766 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.767 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:12 compute-0 ovn_controller[152872]: 2025-11-22T09:37:12Z|01188|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 09:37:12 compute-0 ovn_controller[152872]: 2025-11-22T09:37:12Z|01189|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.781 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfecbed-0a29-43bf-85a0-f61386ec631d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.786 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:37:12 compute-0 nova_compute[253661]: 2025-11-22 09:37:12.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.788 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.788 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2212f283-8008-478b-9473-f2ee1a0cb942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.789 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe093ea9-6950-4530-a44c-6b4c8820876f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 systemd-machined[215941]: New machine qemu-145-instance-0000006d.
Nov 22 09:37:12 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-0000006d.
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.801 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[67cf49f5-230f-44d6-98df-680738155f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 systemd-udevd[373865]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:12 compute-0 NetworkManager[48920]: <info>  [1763804232.8211] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:12 compute-0 NetworkManager[48920]: <info>  [1763804232.8221] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.826 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fce5ba44-13c1-4e5e-a9d3-0e4fc72106aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.860 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d864bdf-1cfe-4740-a03b-e6da5355fd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 NetworkManager[48920]: <info>  [1763804232.8713] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/493)
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[638a7553-9b6e-413f-ba24-8065ada9fb62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 systemd-udevd[373868]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.914 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bde5e18c-f17e-49e1-9bc9-4da1d5f07e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.922 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84120098-1289-452a-817d-73450e471d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 NetworkManager[48920]: <info>  [1763804232.9485] device (tapa990966c-00): carrier: link connected
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.959 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a0311d-f0a9-4e94-b93c-fafdf42857e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8945e155-649c-4017-b2d8-727fde2885d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708368, 'reachable_time': 34879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373896, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.001 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e79a85a6-3bdc-4e10-9828-ff017c05c7c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708368, 'tstamp': 708368}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373897, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.022 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdac8aa-a687-4e86-bffa-8a62c8e602f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708368, 'reachable_time': 34879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373898, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.062 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fbe12f-91b1-4c16-9483-fcbc7626c274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f67e159-6e29-4023-8a72-e0daffffb899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:13 compute-0 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 09:37:13 compute-0 NetworkManager[48920]: <info>  [1763804233.1569] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:13 compute-0 ceph-mon[75021]: pgmap v2257: 305 pgs: 305 active+clean; 230 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Nov 22 09:37:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:37:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.165 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:13 compute-0 ovn_controller[152872]: 2025-11-22T09:37:13Z|01190|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.169 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dabf6043-e8eb-4932-97aa-3c5494a74dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.172 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:37:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.173 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.527 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804233.5259013, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.527 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.548 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.552 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804233.526357, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.567 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.572 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:13 compute-0 nova_compute[253661]: 2025-11-22 09:37:13.591 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:13 compute-0 podman[373972]: 2025-11-22 09:37:13.614523069 +0000 UTC m=+0.057236445 container create dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:37:13 compute-0 systemd[1]: Started libpod-conmon-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope.
Nov 22 09:37:13 compute-0 podman[373972]: 2025-11-22 09:37:13.585783353 +0000 UTC m=+0.028496759 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:37:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52479717ef3cd88457aeffef0253fbcadccac9e1b5c6c7b801a1ce1dcf90c28c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:13 compute-0 podman[373972]: 2025-11-22 09:37:13.727823576 +0000 UTC m=+0.170536962 container init dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:37:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:13 compute-0 podman[373972]: 2025-11-22 09:37:13.735796024 +0000 UTC m=+0.178509400 container start dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:37:13 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : New worker (373994) forked
Nov 22 09:37:13 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : Loading success.
Nov 22 09:37:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.4 MiB/s wr, 261 op/s
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.088 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updated VIF entry in instance network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.089 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.115 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.310 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804219.3091772, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.311 253665 INFO nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Stopped (Lifecycle Event)
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.329 253665 DEBUG nova.compute.manager [None req-ea3ec371-849e-43a4-98b9-972eba9a3aa7 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.851 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.851 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Processing event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 WARNING nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state shelved_offloaded and task_state spawning.
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.854 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.857 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804234.8574965, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.858 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.860 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.863 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance spawned successfully.
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.879 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:14 compute-0 nova_compute[253661]: 2025-11-22 09:37:14.896 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:15 compute-0 ceph-mon[75021]: pgmap v2258: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.4 MiB/s wr, 261 op/s
Nov 22 09:37:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 211 op/s
Nov 22 09:37:16 compute-0 nova_compute[253661]: 2025-11-22 09:37:16.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 22 09:37:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 22 09:37:16 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 22 09:37:16 compute-0 nova_compute[253661]: 2025-11-22 09:37:16.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:16 compute-0 nova_compute[253661]: 2025-11-22 09:37:16.605 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:16 compute-0 nova_compute[253661]: 2025-11-22 09:37:16.662 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:17 compute-0 ceph-mon[75021]: pgmap v2259: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 211 op/s
Nov 22 09:37:17 compute-0 ceph-mon[75021]: osdmap e308: 3 total, 3 up, 3 in
Nov 22 09:37:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 264 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 7.2 MiB/s wr, 315 op/s
Nov 22 09:37:18 compute-0 nova_compute[253661]: 2025-11-22 09:37:18.721 253665 INFO nova.compute.manager [None req-3abe7d23-ec69-4bd0-97f3-4aee927e6e7b 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Get console output
Nov 22 09:37:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:18 compute-0 nova_compute[253661]: 2025-11-22 09:37:18.728 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:37:18 compute-0 nova_compute[253661]: 2025-11-22 09:37:18.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:19 compute-0 ceph-mon[75021]: pgmap v2261: 305 pgs: 305 active+clean; 264 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 7.2 MiB/s wr, 315 op/s
Nov 22 09:37:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 7.3 MiB/s wr, 359 op/s
Nov 22 09:37:20 compute-0 nova_compute[253661]: 2025-11-22 09:37:20.635 253665 DEBUG nova.compute.manager [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:20 compute-0 nova_compute[253661]: 2025-11-22 09:37:20.637 253665 DEBUG nova.compute.manager [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:20 compute-0 nova_compute[253661]: 2025-11-22 09:37:20.638 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:20 compute-0 nova_compute[253661]: 2025-11-22 09:37:20.639 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:20 compute-0 nova_compute[253661]: 2025-11-22 09:37:20.640 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:21 compute-0 nova_compute[253661]: 2025-11-22 09:37:21.099 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:21 compute-0 ceph-mon[75021]: pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 7.3 MiB/s wr, 359 op/s
Nov 22 09:37:21 compute-0 podman[374003]: 2025-11-22 09:37:21.380437585 +0000 UTC m=+0.068818262 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 09:37:21 compute-0 podman[374004]: 2025-11-22 09:37:21.383132592 +0000 UTC m=+0.070015672 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 09:37:21 compute-0 nova_compute[253661]: 2025-11-22 09:37:21.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 5.7 MiB/s wr, 283 op/s
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.021 253665 DEBUG nova.objects.instance [None req-12b23e72-c768-4231-a75f-0cbfa3688f84 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.052 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804242.0520916, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.053 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.084 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.105 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.119 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.120 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.140 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.255 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.256 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 09:37:22 compute-0 NetworkManager[48920]: <info>  [1763804242.5960] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 ovn_controller[152872]: 2025-11-22T09:37:22Z|01191|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 09:37:22 compute-0 ovn_controller[152872]: 2025-11-22T09:37:22Z|01192|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 09:37:22 compute-0 ovn_controller[152872]: 2025-11-22T09:37:22Z|01193|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.612 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.615 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.617 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74921fff-9070-4e68-9676-c3a218a8b86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.620 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 09:37:22 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d0000006d.scope: Consumed 8.306s CPU time.
Nov 22 09:37:22 compute-0 systemd-machined[215941]: Machine qemu-145-instance-0000006d terminated.
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:22 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : haproxy version is 2.8.14-c23fe91
Nov 22 09:37:22 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : path to executable is /usr/sbin/haproxy
Nov 22 09:37:22 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [WARNING]  (373992) : Exiting Master process...
Nov 22 09:37:22 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [ALERT]    (373992) : Current worker (373994) exited with code 143 (Terminated)
Nov 22 09:37:22 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [WARNING]  (373992) : All workers exited. Exiting... (0)
Nov 22 09:37:22 compute-0 systemd[1]: libpod-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope: Deactivated successfully.
Nov 22 09:37:22 compute-0 podman[374067]: 2025-11-22 09:37:22.786146292 +0000 UTC m=+0.056511726 container died dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.790 253665 DEBUG nova.compute.manager [None req-12b23e72-c768-4231-a75f-0cbfa3688f84 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-52479717ef3cd88457aeffef0253fbcadccac9e1b5c6c7b801a1ce1dcf90c28c-merged.mount: Deactivated successfully.
Nov 22 09:37:22 compute-0 podman[374067]: 2025-11-22 09:37:22.843710743 +0000 UTC m=+0.114076167 container cleanup dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:37:22 compute-0 systemd[1]: libpod-conmon-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope: Deactivated successfully.
Nov 22 09:37:22 compute-0 podman[374107]: 2025-11-22 09:37:22.9163797 +0000 UTC m=+0.046010454 container remove dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.922 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5017013d-f6c0-470d-89b3-91b158ad28e9]: (4, ('Sat Nov 22 09:37:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e)\ndff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e\nSat Nov 22 09:37:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e)\ndff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3c5cfe-4f62-48bd-ab60-483138c3e20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.926 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:22 compute-0 kernel: tapa990966c-00: left promiscuous mode
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 nova_compute[253661]: 2025-11-22 09:37:22.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.950 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd44b11-aba0-473f-9812-86418ab575c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.968 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93a4568c-0e46-4380-87b2-59d56f210cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.978 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[191614e5-4844-4d4e-a6b9-19992ce650b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20746f5c-5c62-4406-a3df-06971e4f0e38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708359, 'reachable_time': 43102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374127, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:23 compute-0 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 09:37:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.005 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:37:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.005 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[843553cf-2170-482c-a65f-a42432f119e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.268 253665 DEBUG nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.270 253665 DEBUG nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:23 compute-0 nova_compute[253661]: 2025-11-22 09:37:23.270 253665 WARNING nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state suspended and task_state None.
Nov 22 09:37:23 compute-0 ceph-mon[75021]: pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 5.7 MiB/s wr, 283 op/s
Nov 22 09:37:23 compute-0 ovn_controller[152872]: 2025-11-22T09:37:23Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:34:a1 10.100.0.7
Nov 22 09:37:23 compute-0 ovn_controller[152872]: 2025-11-22T09:37:23Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:34:a1 10.100.0.7
Nov 22 09:37:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 22 09:37:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 22 09:37:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 22 09:37:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.537 253665 INFO nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Resuming
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.540 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.577 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.578 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:24 compute-0 nova_compute[253661]: 2025-11-22 09:37:24.578 253665 DEBUG nova.network.neutron [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:24 compute-0 ceph-mon[75021]: osdmap e309: 3 total, 3 up, 3 in
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.395 253665 DEBUG nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.396 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:25 compute-0 nova_compute[253661]: 2025-11-22 09:37:25.398 253665 WARNING nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state suspended and task_state resuming.
Nov 22 09:37:25 compute-0 ceph-mon[75021]: pgmap v2265: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Nov 22 09:37:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 151 op/s
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.099 253665 DEBUG nova.network.neutron [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.102 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.111 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.116 253665 DEBUG nova.virt.libvirt.vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:22Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.116 253665 DEBUG nova.network.os_vif_util [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.117 253665 DEBUG nova.network.os_vif_util [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.118 253665 DEBUG os_vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.123 253665 INFO os_vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.149 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:26 compute-0 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.2393] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 22 09:37:26 compute-0 ovn_controller[152872]: 2025-11-22T09:37:26Z|01194|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 09:37:26 compute-0 ovn_controller[152872]: 2025-11-22T09:37:26Z|01195|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.249 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.250 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.252 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:26 compute-0 ovn_controller[152872]: 2025-11-22T09:37:26Z|01196|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 09:37:26 compute-0 ovn_controller[152872]: 2025-11-22T09:37:26Z|01197|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.264 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4829b879-3011-4f84-83c3-742848381449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.269 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.271 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6aad753f-4a28-4d1d-9a5d-7e3bba278835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b29e005a-0000-4b9c-bdf5-043368979937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.290 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f3090ae3-28f6-4c8e-9e9f-47c403445df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 systemd-udevd[374144]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:26 compute-0 systemd-machined[215941]: New machine qemu-146-instance-0000006d.
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.3095] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.3102] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:26 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-0000006d.
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7219807-8426-409c-a03a-30e8bccd02e3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.358 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[250d3a81-dba1-4ec1-9b28-a00165bf79bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.3658] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.364 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10bd31c3-d871-4f4e-9e55-22fcbbe35515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 systemd-udevd[374147]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.402 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14b3837a-0e12-48bf-b5de-e6ec45990386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.405 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[80e87f61-0b13-4246-9143-f185246eb731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.4310] device (tapa990966c-00): carrier: link connected
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.436 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd9a486-3286-406e-a086-1ff4513e7da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 podman[374148]: 2025-11-22 09:37:26.438512942 +0000 UTC m=+0.103496765 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.460 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b829c7-7ec1-4503-be22-e7a11c3cb49d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709716, 'reachable_time': 16973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374200, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.488 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f9c7e-2beb-41fc-a187-98485fa72b6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709716, 'tstamp': 709716}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374201, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd378c5-98e8-4db8-ac89-487938ee26d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709716, 'reachable_time': 16973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374209, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d612c-f544-4c6c-a4d3-5976cb382164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84c31a5f-0881-46a3-ae59-3f96eecfd6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.622 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.623 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.623 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 09:37:26 compute-0 NetworkManager[48920]: <info>  [1763804246.6266] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_controller[152872]: 2025-11-22T09:37:26Z|01198|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.649 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.651 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38d9d1ef-bc1e-447c-8638-545dc2acd651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.651 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:37:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.653 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.805 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for cf5e117a-f203-4c8f-b795-01fb355ca5e8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.805 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804246.8042178, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.806 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.820 253665 DEBUG nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.821 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.841 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.848 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance running successfully.
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.850 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:26 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.854 253665 DEBUG nova.virt.libvirt.guest [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.854 253665 DEBUG nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.866 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.867 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804246.8191342, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.867 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.883 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:26 compute-0 nova_compute[253661]: 2025-11-22 09:37:26.893 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:27 compute-0 podman[374276]: 2025-11-22 09:37:27.074729943 +0000 UTC m=+0.049830970 container create 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:37:27 compute-0 systemd[1]: Started libpod-conmon-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope.
Nov 22 09:37:27 compute-0 podman[374276]: 2025-11-22 09:37:27.050093361 +0000 UTC m=+0.025194408 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:37:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03312f80a55e34f62a33dfaed471bc61dca49a6488e5599296b3a6b417800e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:27 compute-0 podman[374276]: 2025-11-22 09:37:27.17750147 +0000 UTC m=+0.152602517 container init 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:37:27 compute-0 podman[374276]: 2025-11-22 09:37:27.184526974 +0000 UTC m=+0.159628001 container start 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:37:27 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : New worker (374298) forked
Nov 22 09:37:27 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : Loading success.
Nov 22 09:37:27 compute-0 ceph-mon[75021]: pgmap v2266: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 151 op/s
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.872 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 WARNING nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:27 compute-0 nova_compute[253661]: 2025-11-22 09:37:27.877 253665 WARNING nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.
Nov 22 09:37:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 278 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 120 op/s
Nov 22 09:37:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.981 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.982 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.541 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.544 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.558 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.559 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.579 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.582 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.662 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.663 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.672 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.673 253665 INFO nova.compute.claims [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:37:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:28 compute-0 nova_compute[253661]: 2025-11-22 09:37:28.839 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535357608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.314 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.323 253665 DEBUG nova.compute.provider_tree [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.341 253665 DEBUG nova.scheduler.client.report [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.365 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.367 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.369 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.383 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.385 253665 INFO nova.compute.claims [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.462 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.463 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.497 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.521 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.614 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.616 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.617 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating image(s)
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.641 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.671 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.696 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.701 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.754 253665 DEBUG nova.policy [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:37:29 compute-0 ceph-mon[75021]: pgmap v2267: 305 pgs: 305 active+clean; 278 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 120 op/s
Nov 22 09:37:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/535357608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.775 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.819 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.821 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.822 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.822 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.848 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:29 compute-0 nova_compute[253661]: 2025-11-22 09:37:29.853 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d2f5b215-3a41-451c-8ad8-68b17c96a678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 279 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.243 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d2f5b215-3a41-451c-8ad8-68b17c96a678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712136166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.348 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.394 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.431 253665 DEBUG nova.compute.provider_tree [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.450 253665 DEBUG nova.scheduler.client.report [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.473 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.476 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.521 253665 DEBUG nova.objects.instance [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.526 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.527 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.537 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Ensure instance console log exists: /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.539 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.541 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.570 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.650 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.655 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.655 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating image(s)
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.689 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.721 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.753 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.760 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1712136166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.820 253665 DEBUG nova.policy [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.824 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Successfully created port: 1e21d7ad-a6e7-4649-91f2-612de75fe16f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.870 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.873 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.874 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.874 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.903 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:30 compute-0 nova_compute[253661]: 2025-11-22 09:37:30.909 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 117927df-3c9e-4609-b5ba-dc3937b9339d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.267 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 117927df-3c9e-4609-b5ba-dc3937b9339d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.354 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.506 253665 DEBUG nova.objects.instance [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.526 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Ensure instance console log exists: /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.528 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.529 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.529 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:31 compute-0 ceph-mon[75021]: pgmap v2268: 305 pgs: 305 active+clean; 279 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.866 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Successfully updated port: 1e21d7ad-a6e7-4649-91f2-612de75fe16f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 326 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.946 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Successfully created port: 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.989 253665 DEBUG nova.compute.manager [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.990 253665 DEBUG nova.compute.manager [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:31 compute-0 nova_compute[253661]: 2025-11-22 09:37:31.990 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:32 compute-0 nova_compute[253661]: 2025-11-22 09:37:32.045 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:32.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:32 compute-0 ceph-mon[75021]: pgmap v2269: 305 pgs: 305 active+clean; 326 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Nov 22 09:37:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 5.5 MiB/s wr, 123 op/s
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.158 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.168 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Successfully updated port: 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.198 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.199 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.199 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.210 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.210 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance network_info: |[{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.211 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.212 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.215 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start _get_guest_xml network_info=[{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.228 253665 WARNING nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.237 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.239 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.243 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.243 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.244 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.244 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.245 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.245 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.248 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.248 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.251 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:34 compute-0 ovn_controller[152872]: 2025-11-22T09:37:34Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 09:37:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266849916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.732 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.760 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:34 compute-0 nova_compute[253661]: 2025-11-22 09:37:34.765 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:34 compute-0 ceph-mon[75021]: pgmap v2270: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 5.5 MiB/s wr, 123 op/s
Nov 22 09:37:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4266849916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.002 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.129 253665 DEBUG nova.compute.manager [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.129 253665 DEBUG nova.compute.manager [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing instance network info cache due to event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.130 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720972578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.276 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.278 253665 DEBUG nova.virt.libvirt.vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.278 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.279 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.280 253665 DEBUG nova.objects.instance [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.294 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <uuid>d2f5b215-3a41-451c-8ad8-68b17c96a678</uuid>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <name>instance-00000075</name>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835</nova:name>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:37:34</nova:creationTime>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <nova:port uuid="1e21d7ad-a6e7-4649-91f2-612de75fe16f">
Nov 22 09:37:35 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <system>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="serial">d2f5b215-3a41-451c-8ad8-68b17c96a678</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="uuid">d2f5b215-3a41-451c-8ad8-68b17c96a678</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </system>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <os>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </os>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <features>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </features>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d2f5b215-3a41-451c-8ad8-68b17c96a678_disk">
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config">
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:35 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:1a:0f:ac"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <target dev="tap1e21d7ad-a6"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/console.log" append="off"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <video>
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </video>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:37:35 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:37:35 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:37:35 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:37:35 compute-0 nova_compute[253661]: </domain>
Nov 22 09:37:35 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.294 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Preparing to wait for external event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.296 253665 DEBUG nova.virt.libvirt.vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.296 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.297 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.297 253665 DEBUG os_vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.298 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.299 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e21d7ad-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e21d7ad-a6, col_values=(('external_ids', {'iface-id': '1e21d7ad-a6e7-4649-91f2-612de75fe16f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:0f:ac', 'vm-uuid': 'd2f5b215-3a41-451c-8ad8-68b17c96a678'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:35 compute-0 NetworkManager[48920]: <info>  [1763804255.3069] manager: (tap1e21d7ad-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.317 253665 INFO os_vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6')
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.373 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.374 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.375 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:1a:0f:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.375 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Using config drive
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.413 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 4.7 MiB/s wr, 104 op/s
Nov 22 09:37:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/720972578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:35 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.994 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating config drive at /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:35.999 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixjxwwlr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.149 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixjxwwlr" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.179 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.184 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.269 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.270 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.278 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.302 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.302 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance network_info: |[{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.303 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.303 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.306 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start _get_guest_xml network_info=[{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.313 253665 WARNING nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.318 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.320 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.334 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.384 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.385 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deleting local config drive /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config because it was imported into RBD.
Nov 22 09:37:36 compute-0 kernel: tap1e21d7ad-a6: entered promiscuous mode
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.4691] manager: (tap1e21d7ad-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:36 compute-0 ovn_controller[152872]: 2025-11-22T09:37:36Z|01199|binding|INFO|Claiming lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f for this chassis.
Nov 22 09:37:36 compute-0 ovn_controller[152872]: 2025-11-22T09:37:36Z|01200|binding|INFO|1e21d7ad-a6e7-4649-91f2-612de75fe16f: Claiming fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.480 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:0f:ac 10.100.0.14'], port_security=['fa:16:3e:1a:0f:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2f5b215-3a41-451c-8ad8-68b17c96a678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37126bdf-684b-42ae-b38f-88d563755df6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '443f7e2d-f0e9-45ab-9cf5-08268d38e115 d6d16faa-9388-499f-aa74-b3fccde9fbc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7f8c6c4-9648-452d-b35b-4ce3aef6c8f6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1e21d7ad-a6e7-4649-91f2-612de75fe16f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.482 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1e21d7ad-a6e7-4649-91f2-612de75fe16f in datapath 37126bdf-684b-42ae-b38f-88d563755df6 bound to our chassis
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.483 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37126bdf-684b-42ae-b38f-88d563755df6
Nov 22 09:37:36 compute-0 ovn_controller[152872]: 2025-11-22T09:37:36Z|01201|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f ovn-installed in OVS
Nov 22 09:37:36 compute-0 ovn_controller[152872]: 2025-11-22T09:37:36Z|01202|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f up in Southbound
Nov 22 09:37:36 compute-0 systemd-udevd[374837]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:36 compute-0 systemd-machined[215941]: New machine qemu-147-instance-00000075.
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.507 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d74cb8-5f2c-4867-aacb-00584483978f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.509 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37126bdf-61 in ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.511 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37126bdf-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efec8851-97cf-4927-9fc7-dff04253d7a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[307da282-8766-4528-a953-d4b93df32727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.5168] device (tap1e21d7ad-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:36 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.5194] device (tap1e21d7ad-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.525 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b211274a-14fb-4274-8b62-2360209ba811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2f2f15-6b7c-4698-9fe0-c53d663da38e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1228e953-a5d6-4d68-a108-0a2f0edaf047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46bbaa8c-ce9f-428f-b258-e1ff45e64bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.6009] manager: (tap37126bdf-60): new Veth device (/org/freedesktop/NetworkManager/Devices/500)
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa242b3-09c5-486c-a01c-64f0595fcb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.644 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9aa41a-42f2-4cc9-b844-8afe7dac746a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.6766] device (tap37126bdf-60): carrier: link connected
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.682 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bad2164e-c73f-4a0c-ba1f-1d7b1f394c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.702 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0aaba7-be7b-484c-b707-5d8a783bc971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37126bdf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:2e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710741, 'reachable_time': 19064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374872, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd685f1-a29e-4b50-a774-928983f5316d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:2e59'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710741, 'tstamp': 710741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374873, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c46c4225-3755-41e4-b1a4-e3557b3737e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37126bdf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:2e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710741, 'reachable_time': 19064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374874, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.809 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9484e85-aa5d-4c06-b64e-8ea39d5a334c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712071147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.841 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.877 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.884 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[258bff3a-f9a8-4f13-9bd1-2cb90f2c0099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37126bdf-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37126bdf-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:36 compute-0 NetworkManager[48920]: <info>  [1763804256.8969] manager: (tap37126bdf-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/501)
Nov 22 09:37:36 compute-0 kernel: tap37126bdf-60: entered promiscuous mode
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37126bdf-60, col_values=(('external_ids', {'iface-id': 'eea1332c-6e32-4e52-a7c7-645bf860d501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:36 compute-0 ovn_controller[152872]: 2025-11-22T09:37:36Z|01203|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.917 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa41080-0efb-4eca-9cd8-1ed00ef4e0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.920 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-37126bdf-684b-42ae-b38f-88d563755df6
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 37126bdf-684b-42ae-b38f-88d563755df6
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:37:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.920 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'env', 'PROCESS_TAG=haproxy-37126bdf-684b-42ae-b38f-88d563755df6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37126bdf-684b-42ae-b38f-88d563755df6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:37:36 compute-0 nova_compute[253661]: 2025-11-22 09:37:36.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:37 compute-0 ceph-mon[75021]: pgmap v2271: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 4.7 MiB/s wr, 104 op/s
Nov 22 09:37:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/712071147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.044 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804257.0441287, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.045 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Started (Lifecycle Event)
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.074 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804257.0443964, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.074 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Paused (Lifecycle Event)
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.099 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.102 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.136 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.137 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.151 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:37 compute-0 podman[374988]: 2025-11-22 09:37:37.375792127 +0000 UTC m=+0.063631283 container create 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:37:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167941660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.407 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.408 253665 DEBUG nova.virt.libvirt.vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:30Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.409 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.418 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.421 253665 DEBUG nova.objects.instance [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:37 compute-0 podman[374988]: 2025-11-22 09:37:37.343595586 +0000 UTC m=+0.031434772 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.437 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <uuid>117927df-3c9e-4609-b5ba-dc3937b9339d</uuid>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <name>instance-00000074</name>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-49006445</nova:name>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:37:36</nova:creationTime>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:37:37 compute-0 systemd[1]: Started libpod-conmon-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope.
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <nova:port uuid="01185f9f-cfa0-4eec-8adf-6b2c1516b5b1">
Nov 22 09:37:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="serial">117927df-3c9e-4609-b5ba-dc3937b9339d</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="uuid">117927df-3c9e-4609-b5ba-dc3937b9339d</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/117927df-3c9e-4609-b5ba-dc3937b9339d_disk">
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config">
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:37:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:9e:8d:06"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <target dev="tap01185f9f-cf"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/console.log" append="off"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:37:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:37:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:37:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:37:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:37:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.439 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Preparing to wait for external event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.440 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.441 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.441 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.442 253665 DEBUG nova.virt.libvirt.vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:30Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.443 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.444 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.444 253665 DEBUG os_vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.446 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.447 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.451 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01185f9f-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.452 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01185f9f-cf, col_values=(('external_ids', {'iface-id': '01185f9f-cfa0-4eec-8adf-6b2c1516b5b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:8d:06', 'vm-uuid': '117927df-3c9e-4609-b5ba-dc3937b9339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.454 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:37 compute-0 NetworkManager[48920]: <info>  [1763804257.4557] manager: (tap01185f9f-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.463 253665 INFO os_vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf')
Nov 22 09:37:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8e5aadf5ecb0314ed34aef4194ac1d915ee5e6be36aa131d63c38cd863668f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:37 compute-0 podman[374988]: 2025-11-22 09:37:37.487439564 +0000 UTC m=+0.175278750 container init 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:37:37 compute-0 podman[374988]: 2025-11-22 09:37:37.493099474 +0000 UTC m=+0.180938630 container start 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:37:37 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : New worker (375013) forked
Nov 22 09:37:37 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : Loading success.
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.526 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:9e:8d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.528 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Using config drive
Nov 22 09:37:37 compute-0 nova_compute[253661]: 2025-11-22 09:37:37.561 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 372 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Nov 22 09:37:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3167941660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.688 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating config drive at /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.695 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtuydwlv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.772 253665 DEBUG nova.compute.manager [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG nova.compute.manager [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Processing event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.774 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.779 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804258.779363, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.779 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Resumed (Lifecycle Event)
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.782 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.784 253665 INFO nova.virt.libvirt.driver [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance spawned successfully.
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.784 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.805 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.815 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.815 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.816 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.816 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.817 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.817 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.819 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.852 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtuydwlv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.889 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.894 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.956 253665 INFO nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 9.34 seconds to spawn the instance on the hypervisor.
Nov 22 09:37:38 compute-0 nova_compute[253661]: 2025-11-22 09:37:38.957 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:39 compute-0 ceph-mon[75021]: pgmap v2272: 305 pgs: 305 active+clean; 372 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.032 253665 INFO nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 10.40 seconds to build instance.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.045 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.074 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.075 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deleting local config drive /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config because it was imported into RBD.
Nov 22 09:37:39 compute-0 kernel: tap01185f9f-cf: entered promiscuous mode
Nov 22 09:37:39 compute-0 NetworkManager[48920]: <info>  [1763804259.1395] manager: (tap01185f9f-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/503)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01204|binding|INFO|Claiming lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for this chassis.
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01205|binding|INFO|01185f9f-cfa0-4eec-8adf-6b2c1516b5b1: Claiming fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.153 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:8d:06 10.100.0.3'], port_security=['fa:16:3e:9e:8d:06 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '117927df-3c9e-4609-b5ba-dc3937b9339d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7538651d-e44e-4a35-8243-e31c6426f6e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.155 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 bound to our chassis
Nov 22 09:37:39 compute-0 NetworkManager[48920]: <info>  [1763804259.1603] device (tap01185f9f-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.157 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 09:37:39 compute-0 NetworkManager[48920]: <info>  [1763804259.1619] device (tap01185f9f-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01206|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 ovn-installed in OVS
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01207|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 up in Southbound
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.165 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.180 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff4617-e76b-4b5f-b0d5-6e6a9b04cb79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 systemd-machined[215941]: New machine qemu-148-instance-00000074.
Nov 22 09:37:39 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.198 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updated VIF entry in instance network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.198 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.211 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[127d97e2-ce25-4188-8eaa-71236aee6c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.227 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d60a055-08e3-4c27-9edc-cd7e434a176c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.268 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[816cfbea-038c-4cdf-ab35-76d7c86ec81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f71f77-46fc-4b92-9f55-c00495c0afdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375105, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.315 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b668a629-0f8f-4cae-acaf-802fbd5fdd8f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706722, 'tstamp': 706722}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706727, 'tstamp': 706727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.317 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.322 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.447 253665 DEBUG nova.compute.manager [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.450 253665 DEBUG nova.compute.manager [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Processing event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.585 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.586 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.586 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.587 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.587 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.589 253665 INFO nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Terminating instance
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.591 253665 DEBUG nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.615085, 117927df-3c9e-4609-b5ba-dc3937b9339d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.616 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Started (Lifecycle Event)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.618 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.623 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.627 253665 INFO nova.virt.libvirt.driver [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance spawned successfully.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.628 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:37:39 compute-0 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 09:37:39 compute-0 NetworkManager[48920]: <info>  [1763804259.6425] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.645 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.652 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.658 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.659 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.659 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.660 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.660 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.661 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01208|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01209|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 ovn_controller[152872]: 2025-11-22T09:37:39Z|01210|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.677 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.679 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.680 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.683 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.684 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc60fd0-7cb1-4b32-8d4d-48036afd43ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.684 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 09:37:39 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000006d.scope: Consumed 7.815s CPU time.
Nov 22 09:37:39 compute-0 systemd-machined[215941]: Machine qemu-146-instance-0000006d terminated.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.711 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.6151934, 117927df-3c9e-4609-b5ba-dc3937b9339d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.714 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Paused (Lifecycle Event)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.746 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.747 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.748 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.749 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.752 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.6228628, 117927df-3c9e-4609-b5ba-dc3937b9339d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.753 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Resumed (Lifecycle Event)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.763 253665 INFO nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 9.11 seconds to spawn the instance on the hypervisor.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.763 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.776 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.800 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.831 253665 INFO nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 11.19 seconds to build instance.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.839 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.840 253665 DEBUG nova.objects.instance [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.849 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.856 253665 DEBUG nova.virt.libvirt.vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:26Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:37:39 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : haproxy version is 2.8.14-c23fe91
Nov 22 09:37:39 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : path to executable is /usr/sbin/haproxy
Nov 22 09:37:39 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [ALERT]    (374296) : Current worker (374298) exited with code 143 (Terminated)
Nov 22 09:37:39 compute-0 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [WARNING]  (374296) : All workers exited. Exiting... (0)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.857 253665 DEBUG nova.network.os_vif_util [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.858 253665 DEBUG nova.network.os_vif_util [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.859 253665 DEBUG os_vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:37:39 compute-0 systemd[1]: libpod-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope: Deactivated successfully.
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.864 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc027d879-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:39 compute-0 podman[375171]: 2025-11-22 09:37:39.869626706 +0000 UTC m=+0.071249843 container died 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:37:39 compute-0 nova_compute[253661]: 2025-11-22 09:37:39.873 253665 INFO os_vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')
Nov 22 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9-userdata-shm.mount: Deactivated successfully.
Nov 22 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-03312f80a55e34f62a33dfaed471bc61dca49a6488e5599296b3a6b417800e8a-merged.mount: Deactivated successfully.
Nov 22 09:37:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 372 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 09:37:39 compute-0 podman[375171]: 2025-11-22 09:37:39.949823311 +0000 UTC m=+0.151446448 container cleanup 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:37:39 compute-0 systemd[1]: libpod-conmon-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope: Deactivated successfully.
Nov 22 09:37:40 compute-0 podman[375228]: 2025-11-22 09:37:40.049449158 +0000 UTC m=+0.068670209 container remove 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.057 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[569b899d-65af-43e7-b815-eb381862b62a]: (4, ('Sat Nov 22 09:37:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9)\n8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9\nSat Nov 22 09:37:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9)\n8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae111dc-3f14-4e87-80a0-da638b566fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.060 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:40 compute-0 kernel: tapa990966c-00: left promiscuous mode
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.089 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4f9a04-b778-47a1-a9eb-b9b0c31809ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.106 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d868301a-eee2-4dd9-9674-d401eee1e29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9d7cac-5662-4e3a-b204-56eb74f3fa24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.133 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[106bd87a-2625-47a6-8d0e-d93dc6bc4aaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709708, 'reachable_time': 28379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375241, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.136 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:37:40 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.136 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c26cc2af-8dd4-4702-b8a9-3ca1a1a672de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:37:40 compute-0 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.330 253665 INFO nova.virt.libvirt.driver [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting instance files /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.331 253665 INFO nova.virt.libvirt.driver [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deletion of /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del complete
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.408 253665 INFO nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.408 253665 DEBUG oslo.service.loopingcall [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.409 253665 DEBUG nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.409 253665 DEBUG nova.network.neutron [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:37:40 compute-0 nova_compute[253661]: 2025-11-22 09:37:40.738 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.029 253665 DEBUG nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:41 compute-0 ceph-mon[75021]: pgmap v2273: 305 pgs: 305 active+clean; 372 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.030 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.032 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.033 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.040 253665 DEBUG nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] No waiting events found dispatching network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.042 253665 WARNING nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received unexpected event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f for instance with vm_state active and task_state None.
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 WARNING nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received unexpected event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with vm_state active and task_state None.
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:37:41 compute-0 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 WARNING nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state deleting.
Nov 22 09:37:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 359 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.133 253665 DEBUG nova.network.neutron [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.153 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 1.74 seconds to deallocate network for instance.
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.193 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.194 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.340 253665 DEBUG oslo_concurrency.processutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511800722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.885 253665 DEBUG oslo_concurrency.processutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.894 253665 DEBUG nova.compute.provider_tree [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.920 253665 DEBUG nova.scheduler.client.report [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:42 compute-0 nova_compute[253661]: 2025-11-22 09:37:42.961 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.037 253665 INFO nova.scheduler.client.report [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance cf5e117a-f203-4c8f-b795-01fb355ca5e8
Nov 22 09:37:43 compute-0 ceph-mon[75021]: pgmap v2274: 305 pgs: 305 active+clean; 359 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 09:37:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/511800722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.148 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.667 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-deleted-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing instance network info cache due to event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:43 compute-0 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 246 op/s
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.248 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.248 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458512515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.747 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:44 compute-0 sudo[375285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:44 compute-0 sudo[375285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:44 compute-0 sudo[375285]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.876 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.876 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 sudo[375313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:37:44 compute-0 sudo[375313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.882 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.882 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 sudo[375313]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.896 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 nova_compute[253661]: 2025-11-22 09:37:44.896 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:37:44 compute-0 sudo[375338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:44 compute-0 sudo[375338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:44 compute-0 sudo[375338]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:45 compute-0 sudo[375363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:37:45 compute-0 sudo[375363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:45 compute-0 ceph-mon[75021]: pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 246 op/s
Nov 22 09:37:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2458512515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.295 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.296 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3041MB free_disk=59.8553581237793GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.296 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.297 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.405 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.405 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2837c740-6ce1-47d5-ad27-107211f74db7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 117927df-3c9e-4609-b5ba-dc3937b9339d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d2f5b215-3a41-451c-8ad8-68b17c96a678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.490 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:45 compute-0 sudo[375363]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c4d9b9fa-be58-4147-8a04-d29753127dc5 does not exist
Nov 22 09:37:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d1ab02a8-3aa5-48bf-b97e-7af5eefc54fe does not exist
Nov 22 09:37:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev eb617310-f500-45e2-8b53-07e52ac5bb30 does not exist
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:37:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:37:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:37:45 compute-0 sudo[375439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:45 compute-0 sudo[375439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:45 compute-0 sudo[375439]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.833 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updated VIF entry in instance network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.834 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:45 compute-0 sudo[375464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:37:45 compute-0 sudo[375464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:45 compute-0 sudo[375464]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.858 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.861 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:45 compute-0 nova_compute[253661]: 2025-11-22 09:37:45.861 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:45 compute-0 sudo[375489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:45 compute-0 sudo[375489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:45 compute-0 sudo[375489]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 09:37:45 compute-0 sudo[375514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:37:45 compute-0 sudo[375514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301452609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:37:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1301452609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.055 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.067 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.081 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.107 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.108 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:46 compute-0 nova_compute[253661]: 2025-11-22 09:37:46.113 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.365547791 +0000 UTC m=+0.062125396 container create 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 09:37:46 compute-0 systemd[1]: Started libpod-conmon-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope.
Nov 22 09:37:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.334783856 +0000 UTC m=+0.031361481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.45033032 +0000 UTC m=+0.146907935 container init 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.458853451 +0000 UTC m=+0.155431056 container start 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.462602014 +0000 UTC m=+0.159179649 container attach 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 09:37:46 compute-0 systemd[1]: libpod-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope: Deactivated successfully.
Nov 22 09:37:46 compute-0 vigilant_hypatia[375597]: 167 167
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.465680061 +0000 UTC m=+0.162257666 container died 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:37:46 compute-0 conmon[375597]: conmon 4da64997c495b3d8e775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope/container/memory.events
Nov 22 09:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc88188b04c3cff94c4df5837104927987845c11fe06d5ebaa9d522214a5643-merged.mount: Deactivated successfully.
Nov 22 09:37:46 compute-0 podman[375581]: 2025-11-22 09:37:46.507506141 +0000 UTC m=+0.204083736 container remove 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:37:46 compute-0 systemd[1]: libpod-conmon-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope: Deactivated successfully.
Nov 22 09:37:46 compute-0 podman[375621]: 2025-11-22 09:37:46.706617673 +0000 UTC m=+0.046498728 container create c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:37:46 compute-0 systemd[1]: Started libpod-conmon-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope.
Nov 22 09:37:46 compute-0 podman[375621]: 2025-11-22 09:37:46.689479086 +0000 UTC m=+0.029360161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:46 compute-0 podman[375621]: 2025-11-22 09:37:46.80783072 +0000 UTC m=+0.147711795 container init c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:37:46 compute-0 podman[375621]: 2025-11-22 09:37:46.81705704 +0000 UTC m=+0.156938095 container start c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:37:46 compute-0 podman[375621]: 2025-11-22 09:37:46.820253419 +0000 UTC m=+0.160134504 container attach c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:37:47 compute-0 nova_compute[253661]: 2025-11-22 09:37:47.434 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:37:47 compute-0 nova_compute[253661]: 2025-11-22 09:37:47.436 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:47 compute-0 nova_compute[253661]: 2025-11-22 09:37:47.457 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:47 compute-0 ceph-mon[75021]: pgmap v2276: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 09:37:47 compute-0 ovn_controller[152872]: 2025-11-22T09:37:47Z|01211|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 09:37:47 compute-0 ovn_controller[152872]: 2025-11-22T09:37:47Z|01212|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 09:37:47 compute-0 ovn_controller[152872]: 2025-11-22T09:37:47Z|01213|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:37:47 compute-0 ovn_controller[152872]: 2025-11-22T09:37:47Z|01214|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 09:37:47 compute-0 xenodochial_haslett[375637]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:37:47 compute-0 xenodochial_haslett[375637]: --> relative data size: 1.0
Nov 22 09:37:47 compute-0 xenodochial_haslett[375637]: --> All data devices are unavailable
Nov 22 09:37:47 compute-0 nova_compute[253661]: 2025-11-22 09:37:47.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:47 compute-0 systemd[1]: libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Deactivated successfully.
Nov 22 09:37:47 compute-0 systemd[1]: libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Consumed 1.022s CPU time.
Nov 22 09:37:47 compute-0 conmon[375637]: conmon c43713471ddf2b09f006 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope/container/memory.events
Nov 22 09:37:47 compute-0 podman[375621]: 2025-11-22 09:37:47.93213829 +0000 UTC m=+1.272019345 container died c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:37:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 09:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d-merged.mount: Deactivated successfully.
Nov 22 09:37:48 compute-0 podman[375621]: 2025-11-22 09:37:48.005563455 +0000 UTC m=+1.345444510 container remove c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:37:48 compute-0 systemd[1]: libpod-conmon-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Deactivated successfully.
Nov 22 09:37:48 compute-0 sudo[375514]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:48 compute-0 sudo[375680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:48 compute-0 sudo[375680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:48 compute-0 sudo[375680]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:48 compute-0 sudo[375705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:37:48 compute-0 sudo[375705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:48 compute-0 sudo[375705]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:48 compute-0 sudo[375730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:48 compute-0 sudo[375730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:48 compute-0 sudo[375730]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:48 compute-0 sudo[375755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:37:48 compute-0 sudo[375755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.423 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.423 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.439 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.657 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.658 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.666 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.666 253665 INFO nova.compute.claims [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:37:48 compute-0 podman[375819]: 2025-11-22 09:37:48.722640749 +0000 UTC m=+0.057181543 container create 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:37:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:48 compute-0 systemd[1]: Started libpod-conmon-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope.
Nov 22 09:37:48 compute-0 podman[375819]: 2025-11-22 09:37:48.699307458 +0000 UTC m=+0.033848283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:48 compute-0 podman[375819]: 2025-11-22 09:37:48.815023617 +0000 UTC m=+0.149564431 container init 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:37:48 compute-0 nova_compute[253661]: 2025-11-22 09:37:48.820 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:48 compute-0 podman[375819]: 2025-11-22 09:37:48.823005294 +0000 UTC m=+0.157546088 container start 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:37:48 compute-0 podman[375819]: 2025-11-22 09:37:48.826765998 +0000 UTC m=+0.161306792 container attach 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:37:48 compute-0 frosty_northcutt[375835]: 167 167
Nov 22 09:37:48 compute-0 systemd[1]: libpod-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope: Deactivated successfully.
Nov 22 09:37:48 compute-0 conmon[375835]: conmon 007cb7ed197d62cd4201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope/container/memory.events
Nov 22 09:37:48 compute-0 podman[375840]: 2025-11-22 09:37:48.876948496 +0000 UTC m=+0.028811718 container died 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1356bd5089cd3ea77b5384a4efdb75bd702682f3cbc38e0a04e9155934fcab18-merged.mount: Deactivated successfully.
Nov 22 09:37:48 compute-0 podman[375840]: 2025-11-22 09:37:48.917855353 +0000 UTC m=+0.069718585 container remove 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:37:48 compute-0 systemd[1]: libpod-conmon-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope: Deactivated successfully.
Nov 22 09:37:49 compute-0 podman[375881]: 2025-11-22 09:37:49.170829795 +0000 UTC m=+0.050367664 container create 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:37:49 compute-0 systemd[1]: Started libpod-conmon-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope.
Nov 22 09:37:49 compute-0 podman[375881]: 2025-11-22 09:37:49.151923494 +0000 UTC m=+0.031461383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:49 compute-0 podman[375881]: 2025-11-22 09:37:49.281824545 +0000 UTC m=+0.161362444 container init 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:37:49 compute-0 podman[375881]: 2025-11-22 09:37:49.293056824 +0000 UTC m=+0.172594693 container start 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:37:49 compute-0 podman[375881]: 2025-11-22 09:37:49.297058934 +0000 UTC m=+0.176596823 container attach 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:37:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:37:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969552940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.369 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.379 253665 DEBUG nova.compute.provider_tree [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.403 253665 DEBUG nova.scheduler.client.report [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.427 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.429 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.485 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.486 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.513 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.537 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.652 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.653 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.654 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating image(s)
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.690 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.730 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:49 compute-0 ceph-mon[75021]: pgmap v2277: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 09:37:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2969552940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.764 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.770 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.831 253665 DEBUG nova.policy [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.877 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.878 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.879 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.879 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.908 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:49 compute-0 nova_compute[253661]: 2025-11-22 09:37:49.913 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 39 KiB/s wr, 182 op/s
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.108 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.112 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:50 compute-0 elegant_fermi[375898]: {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     "0": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "devices": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "/dev/loop3"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             ],
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_name": "ceph_lv0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_size": "21470642176",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "name": "ceph_lv0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "tags": {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_name": "ceph",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.crush_device_class": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.encrypted": "0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_id": "0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.vdo": "0"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             },
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "vg_name": "ceph_vg0"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         }
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     ],
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     "1": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "devices": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "/dev/loop4"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             ],
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_name": "ceph_lv1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_size": "21470642176",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "name": "ceph_lv1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "tags": {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_name": "ceph",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.crush_device_class": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.encrypted": "0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_id": "1",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.vdo": "0"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             },
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "vg_name": "ceph_vg1"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         }
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     ],
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     "2": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "devices": [
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "/dev/loop5"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             ],
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_name": "ceph_lv2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_size": "21470642176",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "name": "ceph_lv2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "tags": {
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.cluster_name": "ceph",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.crush_device_class": "",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.encrypted": "0",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osd_id": "2",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:                 "ceph.vdo": "0"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             },
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "type": "block",
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:             "vg_name": "ceph_vg2"
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:         }
Nov 22 09:37:50 compute-0 elegant_fermi[375898]:     ]
Nov 22 09:37:50 compute-0 elegant_fermi[375898]: }
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:37:50 compute-0 systemd[1]: libpod-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope: Deactivated successfully.
Nov 22 09:37:50 compute-0 podman[375881]: 2025-11-22 09:37:50.255541569 +0000 UTC m=+1.135079438 container died 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.258 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5-merged.mount: Deactivated successfully.
Nov 22 09:37:50 compute-0 podman[375881]: 2025-11-22 09:37:50.309960333 +0000 UTC m=+1.189498202 container remove 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:37:50 compute-0 systemd[1]: libpod-conmon-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope: Deactivated successfully.
Nov 22 09:37:50 compute-0 sudo[375755]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.398 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:37:50 compute-0 sudo[376035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:50 compute-0 sudo[376035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:50 compute-0 sudo[376035]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:50 compute-0 sudo[376096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:37:50 compute-0 sudo[376096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:50 compute-0 sudo[376096]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.535 253665 DEBUG nova.objects.instance [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.549 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.549 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Ensure instance console log exists: /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:37:50 compute-0 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:37:50 compute-0 sudo[376136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:50 compute-0 sudo[376136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:50 compute-0 sudo[376136]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:50 compute-0 sudo[376164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:37:50 compute-0 sudo[376164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:51 compute-0 podman[376225]: 2025-11-22 09:37:51.006743931 +0000 UTC m=+0.048761434 container create dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:37:51 compute-0 systemd[1]: Started libpod-conmon-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope.
Nov 22 09:37:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:51 compute-0 podman[376225]: 2025-11-22 09:37:50.987192085 +0000 UTC m=+0.029209618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:51 compute-0 podman[376225]: 2025-11-22 09:37:51.097384356 +0000 UTC m=+0.139401889 container init dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:37:51 compute-0 podman[376225]: 2025-11-22 09:37:51.105491386 +0000 UTC m=+0.147508889 container start dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:37:51 compute-0 podman[376225]: 2025-11-22 09:37:51.109412934 +0000 UTC m=+0.151430437 container attach dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:37:51 compute-0 sad_hofstadter[376241]: 167 167
Nov 22 09:37:51 compute-0 systemd[1]: libpod-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope: Deactivated successfully.
Nov 22 09:37:51 compute-0 nova_compute[253661]: 2025-11-22 09:37:51.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:51 compute-0 podman[376246]: 2025-11-22 09:37:51.168571075 +0000 UTC m=+0.036469438 container died dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a857dcafeee4ecc351a3bb3674ccadbb22c500c93dfca77112900696c357953-merged.mount: Deactivated successfully.
Nov 22 09:37:51 compute-0 podman[376246]: 2025-11-22 09:37:51.214356444 +0000 UTC m=+0.082254817 container remove dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:37:51 compute-0 systemd[1]: libpod-conmon-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope: Deactivated successfully.
Nov 22 09:37:51 compute-0 podman[376268]: 2025-11-22 09:37:51.431357971 +0000 UTC m=+0.050588879 container create 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:37:51 compute-0 systemd[1]: Started libpod-conmon-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope.
Nov 22 09:37:51 compute-0 podman[376268]: 2025-11-22 09:37:51.411348913 +0000 UTC m=+0.030579821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:37:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:37:51 compute-0 podman[376268]: 2025-11-22 09:37:51.55075824 +0000 UTC m=+0.169989158 container init 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:37:51 compute-0 podman[376282]: 2025-11-22 09:37:51.553316123 +0000 UTC m=+0.079835146 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:37:51 compute-0 podman[376268]: 2025-11-22 09:37:51.559907308 +0000 UTC m=+0.179138216 container start 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:37:51 compute-0 podman[376268]: 2025-11-22 09:37:51.564752538 +0000 UTC m=+0.183983446 container attach 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:37:51 compute-0 podman[376286]: 2025-11-22 09:37:51.566006869 +0000 UTC m=+0.091534477 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:37:51 compute-0 ceph-mon[75021]: pgmap v2278: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 39 KiB/s wr, 182 op/s
Nov 22 09:37:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 304 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 588 KiB/s wr, 194 op/s
Nov 22 09:37:52 compute-0 nova_compute[253661]: 2025-11-22 09:37:52.113 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully created port: ff0231eb-335b-4acd-98c8-d655d887e97a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:37:52
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'volumes', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:37:52 compute-0 sad_brattain[376305]: {
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_id": 1,
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "type": "bluestore"
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     },
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_id": 0,
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "type": "bluestore"
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     },
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_id": 2,
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:37:52 compute-0 sad_brattain[376305]:         "type": "bluestore"
Nov 22 09:37:52 compute-0 sad_brattain[376305]:     }
Nov 22 09:37:52 compute-0 sad_brattain[376305]: }
Nov 22 09:37:52 compute-0 systemd[1]: libpod-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Deactivated successfully.
Nov 22 09:37:52 compute-0 systemd[1]: libpod-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Consumed 1.128s CPU time.
Nov 22 09:37:52 compute-0 podman[376268]: 2025-11-22 09:37:52.702667036 +0000 UTC m=+1.321897954 container died 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23-merged.mount: Deactivated successfully.
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:37:52 compute-0 podman[376268]: 2025-11-22 09:37:52.776149254 +0000 UTC m=+1.395380172 container remove 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:37:52 compute-0 systemd[1]: libpod-conmon-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Deactivated successfully.
Nov 22 09:37:52 compute-0 sudo[376164]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:37:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:37:52 compute-0 nova_compute[253661]: 2025-11-22 09:37:52.839 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully created port: d7659b3e-3579-403f-b319-ceb538d9c201 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:37:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c9ddef30-33c4-4779-89e1-782b4be8a0ea does not exist
Nov 22 09:37:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7e2f04e4-4f73-4ea5-8bd5-baf84e6c4151 does not exist
Nov 22 09:37:52 compute-0 sudo[376367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:37:52 compute-0 sudo[376367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:52 compute-0 sudo[376367]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:52 compute-0 sudo[376392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:37:52 compute-0 sudo[376392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:37:52 compute-0 sudo[376392]: pam_unix(sudo:session): session closed for user root
Nov 22 09:37:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:53 compute-0 ceph-mon[75021]: pgmap v2279: 305 pgs: 305 active+clean; 304 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 588 KiB/s wr, 194 op/s
Nov 22 09:37:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:37:53 compute-0 nova_compute[253661]: 2025-11-22 09:37:53.887 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully updated port: ff0231eb-335b-4acd-98c8-d655d887e97a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 134 op/s
Nov 22 09:37:53 compute-0 ovn_controller[152872]: 2025-11-22T09:37:53Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 09:37:53 compute-0 ovn_controller[152872]: 2025-11-22T09:37:53Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.compute.manager [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.compute.manager [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.269 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.661 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.673 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.830 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804259.8278844, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.831 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Stopped (Lifecycle Event)
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.846 253665 DEBUG nova.compute.manager [None req-1ad5c784-459d-4c7b-bfef-f9cd6b945745 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:37:54 compute-0 nova_compute[253661]: 2025-11-22 09:37:54.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.206 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully updated port: d7659b3e-3579-403f-b319-ceb538d9c201 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.546 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:37:55 compute-0 ceph-mon[75021]: pgmap v2280: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 134 op/s
Nov 22 09:37:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.8 MiB/s wr, 53 op/s
Nov 22 09:37:55 compute-0 nova_compute[253661]: 2025-11-22 09:37:55.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:56 compute-0 nova_compute[253661]: 2025-11-22 09:37:56.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:56 compute-0 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG nova.compute.manager [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:37:56 compute-0 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG nova.compute.manager [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-d7659b3e-3579-403f-b319-ceb538d9c201. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:37:56 compute-0 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:37:56 compute-0 ovn_controller[152872]: 2025-11-22T09:37:56Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 09:37:56 compute-0 ovn_controller[152872]: 2025-11-22T09:37:56Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 09:37:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:37:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:37:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:37:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:37:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:37:57 compute-0 podman[376417]: 2025-11-22 09:37:57.44938585 +0000 UTC m=+0.124547278 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:37:57 compute-0 ceph-mon[75021]: pgmap v2281: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.8 MiB/s wr, 53 op/s
Nov 22 09:37:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 393 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 5.4 MiB/s wr, 128 op/s
Nov 22 09:37:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.310 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance network_info: |[{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.330 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port d7659b3e-3579-403f-b319-ceb538d9c201 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.334 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start _get_guest_xml network_info=[{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.341 253665 WARNING nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.349 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.350 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.354 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.362 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:59 compute-0 ceph-mon[75021]: pgmap v2282: 305 pgs: 305 active+clean; 393 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 5.4 MiB/s wr, 128 op/s
Nov 22 09:37:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:37:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743467742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.842 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.873 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.881 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:37:59 compute-0 nova_compute[253661]: 2025-11-22 09:37:59.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:37:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 153 op/s
Nov 22 09:38:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023276289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.344 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.347 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.348 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.349 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.350 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.350 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.351 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.353 253665 DEBUG nova.objects.instance [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.374 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <uuid>f16662c4-9b4f-4060-ac76-ebfb960dbb89</uuid>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <name>instance-00000076</name>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1142729147</nova:name>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:37:59</nova:creationTime>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:port uuid="ff0231eb-335b-4acd-98c8-d655d887e97a">
Nov 22 09:38:00 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <nova:port uuid="d7659b3e-3579-403f-b319-ceb538d9c201">
Nov 22 09:38:00 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe88:211f" ipVersion="6"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <system>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="serial">f16662c4-9b4f-4060-ac76-ebfb960dbb89</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="uuid">f16662c4-9b4f-4060-ac76-ebfb960dbb89</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </system>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <os>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </os>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <features>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </features>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk">
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config">
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:00 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:7e:38:77"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <target dev="tapff0231eb-33"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:88:21:1f"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <target dev="tapd7659b3e-35"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/console.log" append="off"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <video>
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </video>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:38:00 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:38:00 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:38:00 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:38:00 compute-0 nova_compute[253661]: </domain>
Nov 22 09:38:00 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Preparing to wait for external event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Preparing to wait for external event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.378 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.378 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.379 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.379 253665 DEBUG os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.380 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.381 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff0231eb-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff0231eb-33, col_values=(('external_ids', {'iface-id': 'ff0231eb-335b-4acd-98c8-d655d887e97a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:38:77', 'vm-uuid': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 NetworkManager[48920]: <info>  [1763804280.3906] manager: (tapff0231eb-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.400 253665 INFO os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33')
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.401 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.402 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.403 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.403 253665 DEBUG os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.407 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7659b3e-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.407 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7659b3e-35, col_values=(('external_ids', {'iface-id': 'd7659b3e-3579-403f-b319-ceb538d9c201', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:21:1f', 'vm-uuid': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 NetworkManager[48920]: <info>  [1763804280.4100] manager: (tapd7659b3e-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.420 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.421 253665 INFO os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35')
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:7e:38:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.479 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:88:21:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.479 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Using config drive
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.505 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/743467742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:00 compute-0 ceph-mon[75021]: pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 153 op/s
Nov 22 09:38:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2023276289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.927 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating config drive at /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config
Nov 22 09:38:00 compute-0 nova_compute[253661]: 2025-11-22 09:38:00.932 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_pc7zp0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.082 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_pc7zp0" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.112 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.119 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.164 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port d7659b3e-3579-403f-b319-ceb538d9c201. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.165 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.182 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.302 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.303 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deleting local config drive /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config because it was imported into RBD.
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.3719] manager: (tapff0231eb-33): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Nov 22 09:38:01 compute-0 kernel: tapff0231eb-33: entered promiscuous mode
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01215|binding|INFO|Claiming lport ff0231eb-335b-4acd-98c8-d655d887e97a for this chassis.
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01216|binding|INFO|ff0231eb-335b-4acd-98c8-d655d887e97a: Claiming fa:16:3e:7e:38:77 10.100.0.14
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.395 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:38:77 10.100.0.14'], port_security=['fa:16:3e:7e:38:77 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ff0231eb-335b-4acd-98c8-d655d887e97a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.3969] manager: (tapd7659b3e-35): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.396 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ff0231eb-335b-4acd-98c8-d655d887e97a in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 bound to our chassis
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.399 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 09:38:01 compute-0 kernel: tapd7659b3e-35: entered promiscuous mode
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01217|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a ovn-installed in OVS
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01218|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a up in Southbound
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01219|if_status|INFO|Not updating pb chassis for d7659b3e-3579-403f-b319-ceb538d9c201 now as sb is readonly
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01220|binding|INFO|Claiming lport d7659b3e-3579-403f-b319-ceb538d9c201 for this chassis.
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01221|binding|INFO|d7659b3e-3579-403f-b319-ceb538d9c201: Claiming fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.423 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], port_security=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:211f/64', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d7659b3e-3579-403f-b319-ceb538d9c201) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.425 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaa050f-eaeb-4529-8198-c27fbf646ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01222|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 ovn-installed in OVS
Nov 22 09:38:01 compute-0 ovn_controller[152872]: 2025-11-22T09:38:01Z|01223|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 up in Southbound
Nov 22 09:38:01 compute-0 systemd-udevd[376586]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:01 compute-0 systemd-udevd[376587]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 systemd-machined[215941]: New machine qemu-149-instance-00000076.
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.4500] device (tapff0231eb-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.4510] device (tapff0231eb-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:38:01 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.4601] device (tapd7659b3e-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:38:01 compute-0 NetworkManager[48920]: <info>  [1763804281.4610] device (tapd7659b3e-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.470 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bda59c-f544-47e6-aba1-52d5684eb188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.475 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4755b4-97c6-46ef-bcab-f52c265cf154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.509 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[222fb592-e90c-4496-ad06-3362b10ed14b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.531 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2328e2a0-722a-4fce-be7c-d248853ade59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376601, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21c45c79-600a-468d-bf67-4bf5d2256df5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707837, 'tstamp': 707837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707841, 'tstamp': 707841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.558 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.558 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.561 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d7659b3e-3579-403f-b319-ceb538d9c201 in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.562 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.581 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe508c1-0a66-431e-9da1-c292d5fbed45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.618 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[829e37ac-429a-4ec2-abba-7f60bb321448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.622 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6367a06a-710a-4ddb-9c3f-ae2babf47d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.664 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[426ee172-b5b4-462f-8026-0bd651e2a880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.683 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26e1103f-661c-4360-9cd5-5a60bd673341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376609, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16e3a0ce-c439-4d4a-b61e-652f327b1419]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc883e14c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707938, 'tstamp': 707938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376610, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.707 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.710 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.710 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:01 compute-0 nova_compute[253661]: 2025-11-22 09:38:01.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.062 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804282.0614202, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.062 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Started (Lifecycle Event)
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.089 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.093 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804282.0628104, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.093 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Paused (Lifecycle Event)
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.110 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.114 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.130 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.246 253665 INFO nova.compute.manager [None req-1aaa91e8-a61b-4c66-9729-2d567802b9d8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Get console output
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.254 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.608 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.609 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.609 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.611 253665 INFO nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Terminating instance
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.612 253665 DEBUG nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:02 compute-0 kernel: tap01185f9f-cf (unregistering): left promiscuous mode
Nov 22 09:38:02 compute-0 NetworkManager[48920]: <info>  [1763804282.6674] device (tap01185f9f-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:02 compute-0 ovn_controller[152872]: 2025-11-22T09:38:02Z|01224|binding|INFO|Releasing lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 from this chassis (sb_readonly=0)
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 ovn_controller[152872]: 2025-11-22T09:38:02Z|01225|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 down in Southbound
Nov 22 09:38:02 compute-0 ovn_controller[152872]: 2025-11-22T09:38:02Z|01226|binding|INFO|Removing iface tap01185f9f-cf ovn-installed in OVS
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.702 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:8d:06 10.100.0.3'], port_security=['fa:16:3e:9e:8d:06 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '117927df-3c9e-4609-b5ba-dc3937b9339d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7538651d-e44e-4a35-8243-e31c6426f6e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.703 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 unbound from our chassis
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.705 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e388c49-cc30-4a33-a44a-394a913c3c07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 22 09:38:02 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Consumed 16.656s CPU time.
Nov 22 09:38:02 compute-0 systemd-machined[215941]: Machine qemu-148-instance-00000074 terminated.
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.768 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec4eb5d-691e-476c-b336-226e6650858f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.771 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4a783-e5f1-40de-b26f-2860a8dedc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.803 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[72977d67-9bbf-41e4-a1ae-5d970040bc7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.824 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[824ee2c2-fdd5-4e67-b431-6a35bc1829e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376663, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.845 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23256544-bc7e-421c-88a2-b69df496c91a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706722, 'tstamp': 706722}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376666, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706727, 'tstamp': 706727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376666, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.847 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.864 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.865 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.865 253665 INFO nova.virt.libvirt.driver [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance destroyed successfully.
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.865 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.866 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.866 253665 DEBUG nova.objects.instance [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003381054344110178 of space, bias 1.0, pg target 1.0143163032330533 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:38:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.879 253665 DEBUG nova.virt.libvirt.vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.880 253665 DEBUG nova.network.os_vif_util [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.881 253665 DEBUG nova.network.os_vif_util [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.881 253665 DEBUG os_vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.883 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01185f9f-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.888 253665 INFO os_vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf')
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.937 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:02 compute-0 nova_compute[253661]: 2025-11-22 09:38:02.939 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:03 compute-0 ceph-mon[75021]: pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.328 253665 INFO nova.virt.libvirt.driver [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deleting instance files /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d_del
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.329 253665 INFO nova.virt.libvirt.driver [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deletion of /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d_del complete
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.385 253665 INFO nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.385 253665 DEBUG oslo.service.loopingcall [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.386 253665 DEBUG nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:03 compute-0 nova_compute[253661]: 2025-11-22 09:38:03.386 253665 DEBUG nova.network.neutron [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 5.5 MiB/s wr, 139 op/s
Nov 22 09:38:05 compute-0 ceph-mon[75021]: pgmap v2285: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 5.5 MiB/s wr, 139 op/s
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.067 253665 DEBUG nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.067 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.069 253665 WARNING nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received unexpected event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with vm_state active and task_state deleting.
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.199 253665 DEBUG nova.compute.manager [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.199 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG nova.compute.manager [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Processing event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.881 253665 DEBUG nova.network.neutron [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.898 253665 INFO nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 2.51 seconds to deallocate network for instance.
Nov 22 09:38:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:05 compute-0 nova_compute[253661]: 2025-11-22 09:38:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.093 253665 DEBUG oslo_concurrency.processutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285726818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.564 253665 DEBUG oslo_concurrency.processutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.571 253665 DEBUG nova.compute.provider_tree [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.588 253665 DEBUG nova.scheduler.client.report [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.636 253665 INFO nova.scheduler.client.report [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 117927df-3c9e-4609-b5ba-dc3937b9339d
Nov 22 09:38:06 compute-0 nova_compute[253661]: 2025-11-22 09:38:06.692 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:07 compute-0 ceph-mon[75021]: pgmap v2286: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Nov 22 09:38:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4285726818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.298 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.298 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No event matching network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a in dict_keys([('network-vif-plugged', 'd7659b3e-3579-403f-b319-ceb538d9c201')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.300 253665 WARNING nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with vm_state building and task_state spawning.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.300 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.301 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.301 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.302 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.302 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Processing event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 WARNING nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with vm_state building and task_state spawning.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.306 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.311 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804287.3109167, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.312 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Resumed (Lifecycle Event)
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.316 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.322 253665 INFO nova.virt.libvirt.driver [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance spawned successfully.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.322 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.337 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.345 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.354 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.355 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.356 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.358 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.359 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.366 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.417 253665 INFO nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 17.76 seconds to spawn the instance on the hypervisor.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.418 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.425 253665 DEBUG nova.compute.manager [req-9ac417ec-ac01-47a1-a15a-fbb7f341e98e req-16d12dad-d431-492b-8f80-6cf933c71d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-deleted-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.508 253665 INFO nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 18.87 seconds to build instance.
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.525 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:07 compute-0 nova_compute[253661]: 2025-11-22 09:38:07.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.3 MiB/s wr, 139 op/s
Nov 22 09:38:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.883 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.886 253665 INFO nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Terminating instance
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.887 253665 DEBUG nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:08 compute-0 kernel: tap22b006cb-c0 (unregistering): left promiscuous mode
Nov 22 09:38:08 compute-0 NetworkManager[48920]: <info>  [1763804288.9403] device (tap22b006cb-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:08 compute-0 ovn_controller[152872]: 2025-11-22T09:38:08Z|01227|binding|INFO|Releasing lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 from this chassis (sb_readonly=0)
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:08 compute-0 ovn_controller[152872]: 2025-11-22T09:38:08Z|01228|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 down in Southbound
Nov 22 09:38:08 compute-0 ovn_controller[152872]: 2025-11-22T09:38:08Z|01229|binding|INFO|Removing iface tap22b006cb-c0 ovn-installed in OVS
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.959 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:83:85 10.100.0.8'], port_security=['fa:16:3e:c9:83:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '714d001a-9857-4892-9e43-4add0015169f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.960 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 unbound from our chassis
Nov 22 09:38:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.963 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 669fa85d-7478-40e5-958b-7300ef3552b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.965 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1e4752-2d95-4c4f-af54-68fba04dd830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.965 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 namespace which is not needed anymore
Nov 22 09:38:08 compute-0 nova_compute[253661]: 2025-11-22 09:38:08.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:09 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 22 09:38:09 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 16.588s CPU time.
Nov 22 09:38:09 compute-0 systemd-machined[215941]: Machine qemu-143-instance-00000072 terminated.
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.129 253665 INFO nova.virt.libvirt.driver [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance destroyed successfully.
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.130 253665 DEBUG nova.objects.instance [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : haproxy version is 2.8.14-c23fe91
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : path to executable is /usr/sbin/haproxy
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : Exiting Master process...
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : Exiting Master process...
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [ALERT]    (372888) : Current worker (372893) exited with code 143 (Terminated)
Nov 22 09:38:09 compute-0 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : All workers exited. Exiting... (0)
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.151 253665 DEBUG nova.virt.libvirt.vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:56Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.152 253665 DEBUG nova.network.os_vif_util [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:09 compute-0 systemd[1]: libpod-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope: Deactivated successfully.
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.152 253665 DEBUG nova.network.os_vif_util [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.153 253665 DEBUG os_vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.155 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b006cb-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:09 compute-0 podman[376742]: 2025-11-22 09:38:09.161120764 +0000 UTC m=+0.061148002 container died 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.162 253665 INFO os_vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0')
Nov 22 09:38:09 compute-0 ceph-mon[75021]: pgmap v2287: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.3 MiB/s wr, 139 op/s
Nov 22 09:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239-userdata-shm.mount: Deactivated successfully.
Nov 22 09:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6d1ba9fa3eb3ce0aa01e1a76d7c9a2d999425c99d5548d6568348e14d15459-merged.mount: Deactivated successfully.
Nov 22 09:38:09 compute-0 podman[376742]: 2025-11-22 09:38:09.214299926 +0000 UTC m=+0.114327144 container cleanup 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:38:09 compute-0 systemd[1]: libpod-conmon-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope: Deactivated successfully.
Nov 22 09:38:09 compute-0 podman[376800]: 2025-11-22 09:38:09.29445792 +0000 UTC m=+0.054790334 container remove 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e58471f1-4878-43ce-8081-852fee8e6e69]: (4, ('Sat Nov 22 09:38:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 (16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239)\n16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239\nSat Nov 22 09:38:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 (16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239)\n16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c2ea86-ab9b-4373-b292-ffd41febecee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.311 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:09 compute-0 kernel: tap669fa85d-70: left promiscuous mode
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5aebc9cf-998f-4753-ab08-24987518de34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.344 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3e9c6a-a87b-4998-8d60-a3958e5e1e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d315649a-6c67-41b0-aaa8-4a3cadf3d291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c96f8028-798d-4d9f-a82f-2dba2262f3ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706699, 'reachable_time': 17662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376815, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d669fa85d\x2d7478\x2d40e5\x2d958b\x2d7300ef3552b5.mount: Deactivated successfully.
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.373 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:38:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.373 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7392f-1e07-4afa-ba9d-82c5ebc9b6f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.400 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.402 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.826 253665 INFO nova.virt.libvirt.driver [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deleting instance files /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_del
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.827 253665 INFO nova.virt.libvirt.driver [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deletion of /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_del complete
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.868 253665 INFO nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG oslo.service.loopingcall [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:09 compute-0 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG nova.network.neutron [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 759 KiB/s rd, 654 KiB/s wr, 84 op/s
Nov 22 09:38:10 compute-0 nova_compute[253661]: 2025-11-22 09:38:10.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.125 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:11 compute-0 ceph-mon[75021]: pgmap v2288: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 759 KiB/s rd, 654 KiB/s wr, 84 op/s
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.403 253665 DEBUG nova.network.neutron [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.437 253665 INFO nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 1.57 seconds to deallocate network for instance.
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.493 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.494 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.550 253665 DEBUG nova.compute.manager [req-d99a69d3-1e4e-45f8-b5e6-7435341cdc2a req-d5586a9d-4aa0-4bf3-a8c7-858206577624 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-deleted-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.561 253665 DEBUG nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.562 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.562 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 DEBUG nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 WARNING nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received unexpected event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with vm_state deleted and task_state None.
Nov 22 09:38:11 compute-0 nova_compute[253661]: 2025-11-22 09:38:11.623 253665 DEBUG oslo_concurrency.processutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 303 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 41 KiB/s wr, 85 op/s
Nov 22 09:38:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976523645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.125 253665 DEBUG oslo_concurrency.processutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.132 253665 DEBUG nova.compute.provider_tree [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.144 253665 DEBUG nova.scheduler.client.report [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.165 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1976523645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.191 253665 INFO nova.scheduler.client.report [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.250 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:12 compute-0 nova_compute[253661]: 2025-11-22 09:38:12.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:38:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:38:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:38:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:38:13 compute-0 ceph-mon[75021]: pgmap v2289: 305 pgs: 305 active+clean; 303 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 41 KiB/s wr, 85 op/s
Nov 22 09:38:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:38:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:38:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 131 op/s
Nov 22 09:38:14 compute-0 nova_compute[253661]: 2025-11-22 09:38:14.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:15 compute-0 ceph-mon[75021]: pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 131 op/s
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.532 253665 DEBUG nova.compute.manager [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.532 253665 DEBUG nova.compute.manager [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:15 compute-0 ovn_controller[152872]: 2025-11-22T09:38:15Z|01230|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 09:38:15 compute-0 ovn_controller[152872]: 2025-11-22T09:38:15Z|01231|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:38:15 compute-0 ovn_controller[152872]: 2025-11-22T09:38:15Z|01232|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 09:38:15 compute-0 nova_compute[253661]: 2025-11-22 09:38:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.231 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.232 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.245 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.254 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.254 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.285 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.359 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.360 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.366 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.366 253665 INFO nova.compute.claims [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.390 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.555 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.686 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.687 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:16 compute-0 nova_compute[253661]: 2025-11-22 09:38:16.706 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064126094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.121 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.129 253665 DEBUG nova.compute.provider_tree [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.146 253665 DEBUG nova.scheduler.client.report [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.176 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.177 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.181 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.191 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.191 253665 INFO nova.compute.claims [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.235 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.235 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.255 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.273 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:38:17 compute-0 ceph-mon[75021]: pgmap v2291: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Nov 22 09:38:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4064126094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.363 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.366 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.367 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating image(s)
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.389 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.418 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.443 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.448 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.513 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.558 253665 DEBUG nova.policy [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24fbabe00a26461eaa9027f7105ae97c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.570 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.572 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.574 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.574 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.601 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.607 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.863 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804282.8613734, 117927df-3c9e-4609-b5ba-dc3937b9339d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.865 253665 INFO nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Stopped (Lifecycle Event)
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.893 253665 DEBUG nova.compute.manager [None req-0e22a9ad-59e2-41aa-a804-f2307df1b760 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 254 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 97 KiB/s wr, 124 op/s
Nov 22 09:38:17 compute-0 nova_compute[253661]: 2025-11-22 09:38:17.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840286575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.007 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.015 253665 DEBUG nova.compute.provider_tree [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.034 253665 DEBUG nova.scheduler.client.report [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.064 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.065 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.071 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.144 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.145 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.155 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] resizing rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.198 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.227 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.269 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Successfully created port: 761d949a-b334-4144-be7a-5f02c905c715 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.280 253665 DEBUG nova.objects.instance [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'migration_context' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3840286575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.299 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.300 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Ensure instance console log exists: /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.300 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.301 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.301 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.344 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.346 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.347 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating image(s)
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.381 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.415 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.447 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.452 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.506 253665 DEBUG nova.policy [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58f15faf9ac94307a17022836fe74e23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.545 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.547 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.548 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.548 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.577 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.583 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e3727ef-288f-4e26-8d29-f85423546391_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:18 compute-0 nova_compute[253661]: 2025-11-22 09:38:18.951 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e3727ef-288f-4e26-8d29-f85423546391_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:18 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.040 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] resizing rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.159 253665 DEBUG nova.objects.instance [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'migration_context' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.182 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.183 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Ensure instance console log exists: /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.183 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.184 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.184 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.195 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Successfully created port: a28c191e-c725-404b-a4cb-e5b89c914f67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:38:19 compute-0 ceph-mon[75021]: pgmap v2292: 305 pgs: 305 active+clean; 254 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 97 KiB/s wr, 124 op/s
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.313 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Successfully updated port: 761d949a-b334-4144-be7a-5f02c905c715 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.328 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.329 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.329 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.435 253665 DEBUG nova.compute.manager [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.438 253665 DEBUG nova.compute.manager [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.439 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:19 compute-0 nova_compute[253661]: 2025-11-22 09:38:19.561 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:38:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 285 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 110 op/s
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.194 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Successfully updated port: a28c191e-c725-404b-a4cb-e5b89c914f67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquired lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.312 253665 DEBUG nova.compute.manager [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-changed-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.313 253665 DEBUG nova.compute.manager [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Refreshing instance network info cache due to event network-changed-a28c191e-c725-404b-a4cb-e5b89c914f67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.313 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.391 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.407 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.424 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance network_info: |[{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.428 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start _get_guest_xml network_info=[{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.433 253665 WARNING nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.440 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.441 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.444 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.451 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630693833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:20 compute-0 nova_compute[253661]: 2025-11-22 09:38:20.987 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.021 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.026 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 ceph-mon[75021]: pgmap v2293: 305 pgs: 305 active+clean; 285 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 110 op/s
Nov 22 09:38:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1630693833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:21 compute-0 ovn_controller[152872]: 2025-11-22T09:38:21Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:38:77 10.100.0.14
Nov 22 09:38:21 compute-0 ovn_controller[152872]: 2025-11-22T09:38:21Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:38:77 10.100.0.14
Nov 22 09:38:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885037816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.579 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.582 253665 DEBUG nova.virt.libvirt.vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:17Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.583 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.585 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.587 253665 DEBUG nova.objects.instance [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'pci_devices' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.602 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <uuid>d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</uuid>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <name>instance-00000077</name>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641</nova:name>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:38:20</nova:creationTime>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:user uuid="24fbabe00a26461eaa9027f7105ae97c">tempest-TestSecurityGroupsBasicOps-2070464237-project-member</nova:user>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:project uuid="a9ddb669b6144eee90dc043099e8df8c">tempest-TestSecurityGroupsBasicOps-2070464237</nova:project>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <nova:port uuid="761d949a-b334-4144-be7a-5f02c905c715">
Nov 22 09:38:21 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <system>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="serial">d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="uuid">d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </system>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <os>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </os>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <features>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </features>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk">
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config">
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:21 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:85:a8:14"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <target dev="tap761d949a-b3"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/console.log" append="off"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <video>
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </video>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:38:21 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:38:21 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:38:21 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:38:21 compute-0 nova_compute[253661]: </domain>
Nov 22 09:38:21 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.603 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Preparing to wait for external event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.603 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.604 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.604 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.605 253665 DEBUG nova.virt.libvirt.vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:17Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.605 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.606 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.606 253665 DEBUG os_vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.607 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.608 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.611 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap761d949a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.613 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap761d949a-b3, col_values=(('external_ids', {'iface-id': '761d949a-b334-4144-be7a-5f02c905c715', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:a8:14', 'vm-uuid': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 NetworkManager[48920]: <info>  [1763804301.6169] manager: (tap761d949a-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.625 253665 INFO os_vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3')
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No VIF found with MAC fa:16:3e:85:a8:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.691 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Using config drive
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.718 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.728 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:21 compute-0 podman[377282]: 2025-11-22 09:38:21.731134624 +0000 UTC m=+0.068854865 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:38:21 compute-0 podman[377283]: 2025-11-22 09:38:21.741746217 +0000 UTC m=+0.075988460 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.764 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Releasing lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.764 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance network_info: |[{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.766 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.766 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Refreshing network info cache for port a28c191e-c725-404b-a4cb-e5b89c914f67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.771 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start _get_guest_xml network_info=[{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.776 253665 WARNING nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.781 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.782 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.793 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.793 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.794 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.794 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.795 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.795 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.796 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.796 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.801 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:21 compute-0 nova_compute[253661]: 2025-11-22 09:38:21.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 316 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.161 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating config drive at /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.169 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq5_2mgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.288 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.288 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.305 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387314890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3885037816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3387314890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.330 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq5_2mgb" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.356 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.360 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.403 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.427 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.432 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.540 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.542 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deleting local config drive /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config because it was imported into RBD.
Nov 22 09:38:22 compute-0 NetworkManager[48920]: <info>  [1763804302.6208] manager: (tap761d949a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Nov 22 09:38:22 compute-0 kernel: tap761d949a-b3: entered promiscuous mode
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 ovn_controller[152872]: 2025-11-22T09:38:22Z|01233|binding|INFO|Claiming lport 761d949a-b334-4144-be7a-5f02c905c715 for this chassis.
Nov 22 09:38:22 compute-0 ovn_controller[152872]: 2025-11-22T09:38:22Z|01234|binding|INFO|761d949a-b334-4144-be7a-5f02c905c715: Claiming fa:16:3e:85:a8:14 10.100.0.8
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.640 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.641 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd bound to our chassis
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.644 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd
Nov 22 09:38:22 compute-0 ovn_controller[152872]: 2025-11-22T09:38:22Z|01235|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 ovn-installed in OVS
Nov 22 09:38:22 compute-0 ovn_controller[152872]: 2025-11-22T09:38:22Z|01236|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 up in Southbound
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.663 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25e98c39-ca9c-4b37-9560-ecc16dfd3d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.664 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8c9b48b-61 in ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.666 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8c9b48b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[034fd7b2-2454-4589-8e4f-bf639f679104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.667 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b523a676-e557-4527-bf2a-f460d96bfab7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 systemd-machined[215941]: New machine qemu-150-instance-00000077.
Nov 22 09:38:22 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.691 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[63af098c-258a-4a5f-8f3f-1f7681a1b2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 systemd-udevd[377452]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.709 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5769ab-a229-4534-91f0-21fa824586b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 NetworkManager[48920]: <info>  [1763804302.7195] device (tap761d949a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:38:22 compute-0 NetworkManager[48920]: <info>  [1763804302.7212] device (tap761d949a-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.747 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6f9634-f6c7-4c29-8ae2-563ddb135305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:22 compute-0 NetworkManager[48920]: <info>  [1763804302.7582] manager: (tapa8c9b48b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc73ac8-afee-425f-8540-2281e17272f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.818 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1be4b618-be2c-41ad-bfa4-8eeef714d6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.824 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e083b641-b8e5-40d5-87f5-61c5ea47a3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594992208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.944 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.946 253665 DEBUG nova.virt.libvirt.vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-495917723
',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:18Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.946 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.947 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.948 253665 DEBUG nova.objects.instance [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.966 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <uuid>6e3727ef-288f-4e26-8d29-f85423546391</uuid>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <name>instance-00000078</name>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-771105155</nova:name>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:38:21</nova:creationTime>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:user uuid="58f15faf9ac94307a17022836fe74e23">tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member</nova:user>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:project uuid="84cc8edaaa54443997ac9f33f8fab7ce">tempest-ServersNegativeTestMultiTenantJSON-495917723</nova:project>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <nova:port uuid="a28c191e-c725-404b-a4cb-e5b89c914f67">
Nov 22 09:38:22 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <system>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="serial">6e3727ef-288f-4e26-8d29-f85423546391</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="uuid">6e3727ef-288f-4e26-8d29-f85423546391</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </system>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <os>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </os>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <features>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </features>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6e3727ef-288f-4e26-8d29-f85423546391_disk">
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6e3727ef-288f-4e26-8d29-f85423546391_disk.config">
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:22 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:13:6e:9b"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <target dev="tapa28c191e-c7"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/console.log" append="off"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <video>
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </video>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:38:22 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:38:22 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:38:22 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:38:22 compute-0 nova_compute[253661]: </domain>
Nov 22 09:38:22 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Preparing to wait for external event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG nova.virt.libvirt.vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON
-495917723',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:18Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.972 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.973 253665 DEBUG os_vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.978 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa28c191e-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.978 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa28c191e-c7, col_values=(('external_ids', {'iface-id': 'a28c191e-c725-404b-a4cb-e5b89c914f67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:6e:9b', 'vm-uuid': '6e3727ef-288f-4e26-8d29-f85423546391'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.979 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 NetworkManager[48920]: <info>  [1763804302.9810] manager: (tapa28c191e-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:22 compute-0 nova_compute[253661]: 2025-11-22 09:38:22.991 253665 INFO os_vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7')
Nov 22 09:38:23 compute-0 NetworkManager[48920]: <info>  [1763804303.0061] device (tapa8c9b48b-60): carrier: link connected
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.017 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23d30165-cef7-4adc-bddd-f89ed7c1c7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[725da78e-f9f0-44bd-af4e-eafa3d43f291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8c9b48b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:59:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715374, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377530, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.066 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc610dc1-f737-444f-99ac-df9151f530e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:5919'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715374, 'tstamp': 715374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377531, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.095 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0b25c1db-5fd3-4860-8f46-dfb46bd74fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8c9b48b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:59:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715374, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377533, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.124 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.12357, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Started (Lifecycle Event)
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.136 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.137 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.137 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No VIF found with MAC fa:16:3e:13:6e:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.137 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21733754-b429-49b2-b2a7-c2bca06bc53c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.139 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Using config drive
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.168 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.177 253665 DEBUG nova.compute.manager [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG nova.compute.manager [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Processing event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.179 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.180 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.190 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.191 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.196 253665 INFO nova.virt.libvirt.driver [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance spawned successfully.
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.196 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.207 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.207 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.1239512, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.208 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Paused (Lifecycle Event)
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.215 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.215 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.216 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13e57839-fcb5-4ceb-9e82-95696bf1dd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.218 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9b48b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.218 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.219 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c9b48b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:23 compute-0 NetworkManager[48920]: <info>  [1763804303.2252] manager: (tapa8c9b48b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 22 09:38:23 compute-0 kernel: tapa8c9b48b-60: entered promiscuous mode
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.231 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8c9b48b-60, col_values=(('external_ids', {'iface-id': '9e57ed14-a93d-454a-9d37-00035fb43663'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:23 compute-0 ovn_controller[152872]: 2025-11-22T09:38:23Z|01237|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.239 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.1831286, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Resumed (Lifecycle Event)
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.259 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.260 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc6ad51-9750-4b99-a327-c57a96d43ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.261 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a8c9b48b-687a-480f-aff5-bd1fee4c2bbd
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:38:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.262 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'env', 'PROCESS_TAG=haproxy-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.265 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.267 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.276 253665 INFO nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 5.91 seconds to spawn the instance on the hypervisor.
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.276 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.284 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:23 compute-0 ceph-mon[75021]: pgmap v2294: 305 pgs: 305 active+clean; 316 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Nov 22 09:38:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2594992208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.333 253665 INFO nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 6.99 seconds to build instance.
Nov 22 09:38:23 compute-0 nova_compute[253661]: 2025-11-22 09:38:23.351 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:23 compute-0 podman[377586]: 2025-11-22 09:38:23.685604579 +0000 UTC m=+0.057767338 container create 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:38:23 compute-0 systemd[1]: Started libpod-conmon-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope.
Nov 22 09:38:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:23 compute-0 podman[377586]: 2025-11-22 09:38:23.656565987 +0000 UTC m=+0.028728786 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc45d9f28847928b45ee55af7005c19133177176f15c3c95d23db67f15d5f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:23 compute-0 podman[377586]: 2025-11-22 09:38:23.772693884 +0000 UTC m=+0.144856673 container init 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:23 compute-0 podman[377586]: 2025-11-22 09:38:23.777888593 +0000 UTC m=+0.150051362 container start 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:23 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : New worker (377608) forked
Nov 22 09:38:23 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : Loading success.
Nov 22 09:38:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.7 MiB/s wr, 158 op/s
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.127 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804289.1257915, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.127 253665 INFO nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Stopped (Lifecycle Event)
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.153 253665 DEBUG nova.compute.manager [None req-f09ba0db-3166-4b14-8b41-64ccd812cb40 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.335 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.336 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.361 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating config drive at /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.366 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp515hid6o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.444 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updated VIF entry in instance network info cache for port a28c191e-c725-404b-a4cb-e5b89c914f67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.446 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.460 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.520 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp515hid6o" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.549 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.555 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config 6e3727ef-288f-4e26-8d29-f85423546391_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.784 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config 6e3727ef-288f-4e26-8d29-f85423546391_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.786 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deleting local config drive /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config because it was imported into RBD.
Nov 22 09:38:24 compute-0 NetworkManager[48920]: <info>  [1763804304.8579] manager: (tapa28c191e-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 22 09:38:24 compute-0 kernel: tapa28c191e-c7: entered promiscuous mode
Nov 22 09:38:24 compute-0 systemd-udevd[377479]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:24 compute-0 ovn_controller[152872]: 2025-11-22T09:38:24Z|01238|binding|INFO|Claiming lport a28c191e-c725-404b-a4cb-e5b89c914f67 for this chassis.
Nov 22 09:38:24 compute-0 ovn_controller[152872]: 2025-11-22T09:38:24Z|01239|binding|INFO|a28c191e-c725-404b-a4cb-e5b89c914f67: Claiming fa:16:3e:13:6e:9b 10.100.0.3
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.879 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:6e:9b 10.100.0.3'], port_security=['fa:16:3e:13:6e:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e3727ef-288f-4e26-8d29-f85423546391', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d994c6fb-564e-4523-afe4-89804b993385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '621b389d-2096-4ee1-8e3b-c5cb3466897b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56f26a9d-1a5c-40a1-8f03-488332bb450e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a28c191e-c725-404b-a4cb-e5b89c914f67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.881 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a28c191e-c725-404b-a4cb-e5b89c914f67 in datapath d994c6fb-564e-4523-afe4-89804b993385 bound to our chassis
Nov 22 09:38:24 compute-0 NetworkManager[48920]: <info>  [1763804304.8855] device (tapa28c191e-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.883 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d994c6fb-564e-4523-afe4-89804b993385
Nov 22 09:38:24 compute-0 NetworkManager[48920]: <info>  [1763804304.8869] device (tapa28c191e-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:38:24 compute-0 ovn_controller[152872]: 2025-11-22T09:38:24Z|01240|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 ovn-installed in OVS
Nov 22 09:38:24 compute-0 ovn_controller[152872]: 2025-11-22T09:38:24Z|01241|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 up in Southbound
Nov 22 09:38:24 compute-0 nova_compute[253661]: 2025-11-22 09:38:24.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.905 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57bf1648-98fd-4774-af4c-16e1936652ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.907 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd994c6fb-51 in ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.909 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd994c6fb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.910 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0821ac85-ae3a-4631-b5ce-aa8c549459a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.912 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9822c923-b91a-4520-8377-c119c456ad30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:24 compute-0 systemd-machined[215941]: New machine qemu-151-instance-00000078.
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.930 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd8dcb1-679e-428d-a003-50fbc8462f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:24 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.962 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5a69378c-cbde-4061-8eab-13c15f64e2ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.996 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[66620c46-1a4a-4ae5-912f-f688b90d2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 NetworkManager[48920]: <info>  [1763804305.0033] manager: (tapd994c6fb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.007 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cb221a9b-6e0b-4aee-871f-e4778c657a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.053 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd31307-2f3d-4645-9013-02f45ac49c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.060 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0faefdbd-054d-4ec2-8fae-a260243df9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 NetworkManager[48920]: <info>  [1763804305.1004] device (tapd994c6fb-50): carrier: link connected
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.106 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d9df4e5b-1222-4f7e-b003-402db68b04fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec3924e-76e5-43b9-8db3-456b9b6e901f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd994c6fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:09:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715583, 'reachable_time': 37348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377688, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.147 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88de0866-5726-4933-8940-b54a8537bac2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:9a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715583, 'tstamp': 715583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377689, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.163 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c8c9f-85da-4cc9-ad30-254977d8cf6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd994c6fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:09:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715583, 'reachable_time': 37348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377690, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7942634c-9c75-46be-98f4-e1db184de802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.276 253665 DEBUG nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.278 253665 DEBUG nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.278 253665 WARNING nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state None.
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.305 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7601e614-6235-4ab4-bf68-8f6f8a7829d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.307 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd994c6fb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.307 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.308 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd994c6fb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:25 compute-0 NetworkManager[48920]: <info>  [1763804305.3126] manager: (tapd994c6fb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 22 09:38:25 compute-0 kernel: tapd994c6fb-50: entered promiscuous mode
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.316 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd994c6fb-50, col_values=(('external_ids', {'iface-id': '01bfc096-4605-4de9-9175-6d95e7483385'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:25 compute-0 ovn_controller[152872]: 2025-11-22T09:38:25Z|01242|binding|INFO|Releasing lport 01bfc096-4605-4de9-9175-6d95e7483385 from this chassis (sb_readonly=0)
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:25 compute-0 ceph-mon[75021]: pgmap v2295: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.7 MiB/s wr, 158 op/s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.336 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.337 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75a5f3b-8d1e-4776-a8cb-1b82c5a74fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.337 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d994c6fb-564e-4523-afe4-89804b993385
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d994c6fb-564e-4523-afe4-89804b993385
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:38:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.338 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'env', 'PROCESS_TAG=haproxy-d994c6fb-564e-4523-afe4-89804b993385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d994c6fb-564e-4523-afe4-89804b993385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.524 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804305.5236468, 6e3727ef-288f-4e26-8d29-f85423546391 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.525 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Started (Lifecycle Event)
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.571 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.581 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804305.5239272, 6e3727ef-288f-4e26-8d29-f85423546391 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.581 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Paused (Lifecycle Event)
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.599 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.602 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:25 compute-0 nova_compute[253661]: 2025-11-22 09:38:25.618 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:25 compute-0 podman[377764]: 2025-11-22 09:38:25.78789394 +0000 UTC m=+0.073827898 container create 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:38:25 compute-0 systemd[1]: Started libpod-conmon-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope.
Nov 22 09:38:25 compute-0 podman[377764]: 2025-11-22 09:38:25.745072194 +0000 UTC m=+0.031006142 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:38:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ce9d2c62dc97e3c042547d057ab8624862e781d05baaf78a7c8f0d1a0a1d0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:25 compute-0 podman[377764]: 2025-11-22 09:38:25.888038671 +0000 UTC m=+0.173972609 container init 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:38:25 compute-0 podman[377764]: 2025-11-22 09:38:25.895464795 +0000 UTC m=+0.181398713 container start 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:38:25 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : New worker (377785) forked
Nov 22 09:38:25 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : Loading success.
Nov 22 09:38:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 5.7 MiB/s wr, 112 op/s
Nov 22 09:38:26 compute-0 nova_compute[253661]: 2025-11-22 09:38:26.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:27 compute-0 ceph-mon[75021]: pgmap v2296: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 5.7 MiB/s wr, 112 op/s
Nov 22 09:38:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Nov 22 09:38:27 compute-0 nova_compute[253661]: 2025-11-22 09:38:27.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.982 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.983 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:28 compute-0 ovn_controller[152872]: 2025-11-22T09:38:28Z|01243|memory|INFO|peak resident set size grew 53% in last 3294.1 seconds, from 15872 kB to 24268 kB
Nov 22 09:38:28 compute-0 ovn_controller[152872]: 2025-11-22T09:38:28Z|01244|memory|INFO|idl-cells-OVN_Southbound:10807 idl-cells-Open_vSwitch:1326 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:2 lflow-cache-entries-cache-expr:408 lflow-cache-entries-cache-matches:288 lflow-cache-size-KB:1726 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:738 ofctrl_installed_flow_usage-KB:538 ofctrl_sb_flow_ref_usage-KB:275
Nov 22 09:38:28 compute-0 podman[377794]: 2025-11-22 09:38:28.470740188 +0000 UTC m=+0.149743515 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:38:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.038 253665 DEBUG nova.compute.manager [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.038 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG nova.compute.manager [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Processing event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.040 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.044 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804309.0438735, 6e3727ef-288f-4e26-8d29-f85423546391 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.044 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Resumed (Lifecycle Event)
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.046 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.055 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance spawned successfully.
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.056 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.087 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.093 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.097 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.099 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.099 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.127 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.196 253665 INFO nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 10.85 seconds to spawn the instance on the hypervisor.
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.198 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.300 253665 INFO nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 12.95 seconds to build instance.
Nov 22 09:38:29 compute-0 nova_compute[253661]: 2025-11-22 09:38:29.318 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:29.339 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:29 compute-0 ceph-mon[75021]: pgmap v2297: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Nov 22 09:38:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 200 op/s
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.712 253665 DEBUG nova.compute.manager [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG nova.compute.manager [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.744 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.745 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.746 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.746 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.747 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.749 253665 INFO nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Terminating instance
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.751 253665 DEBUG nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:30 compute-0 kernel: tapff0231eb-33 (unregistering): left promiscuous mode
Nov 22 09:38:30 compute-0 NetworkManager[48920]: <info>  [1763804310.8469] device (tapff0231eb-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01245|binding|INFO|Releasing lport ff0231eb-335b-4acd-98c8-d655d887e97a from this chassis (sb_readonly=0)
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01246|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a down in Southbound
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01247|binding|INFO|Removing iface tapff0231eb-33 ovn-installed in OVS
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:30 compute-0 kernel: tapd7659b3e-35 (unregistering): left promiscuous mode
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.873 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:38:77 10.100.0.14'], port_security=['fa:16:3e:7e:38:77 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ff0231eb-335b-4acd-98c8-d655d887e97a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.876 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ff0231eb-335b-4acd-98c8-d655d887e97a in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 unbound from our chassis
Nov 22 09:38:30 compute-0 NetworkManager[48920]: <info>  [1763804310.8789] device (tapd7659b3e-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.886 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01248|binding|INFO|Releasing lport d7659b3e-3579-403f-b319-ceb538d9c201 from this chassis (sb_readonly=0)
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01249|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 down in Southbound
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:30 compute-0 ovn_controller[152872]: 2025-11-22T09:38:30Z|01250|binding|INFO|Removing iface tapd7659b3e-35 ovn-installed in OVS
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], port_security=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:211f/64', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d7659b3e-3579-403f-b319-ceb538d9c201) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.914 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a162d837-15b6-4169-bac2-975ae10607c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:30 compute-0 nova_compute[253661]: 2025-11-22 09:38:30.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:30 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 22 09:38:30 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 15.077s CPU time.
Nov 22 09:38:30 compute-0 systemd-machined[215941]: Machine qemu-149-instance-00000076 terminated.
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb07afe-1d49-4e66-94cd-d3ad5d8047c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.963 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[da8b3155-0de5-4361-99a0-8fb2647f3354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 NetworkManager[48920]: <info>  [1763804311.0024] manager: (tapd7659b3e-35): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.015 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a77ed8fc-3d38-488f-b880-fcf1da2a4f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.017 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.027 253665 INFO nova.virt.libvirt.driver [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance destroyed successfully.
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.027 253665 DEBUG nova.objects.instance [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.041 253665 DEBUG nova.virt.libvirt.vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:07Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.043 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[561d0619-62e4-46ca-a95d-29351949e367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377855, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.044 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.045 253665 DEBUG os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.049 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff0231eb-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.061 253665 INFO os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33')
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.062 253665 DEBUG nova.virt.libvirt.vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:07Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.062 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.063 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.063 253665 DEBUG os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.065 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.065 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7659b3e-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.068 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c93e00-e4c8-4ad4-82dd-483434847372]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707837, 'tstamp': 707837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377856, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707841, 'tstamp': 707841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377856, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.070 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.070 253665 INFO os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35')
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.072 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.072 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.073 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.073 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.074 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d7659b3e-3579-403f-b319-ceb538d9c201 in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.076 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.097 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a23113cf-f221-4215-8804-37a0c26497a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.143 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2996a-d7bf-4bfe-8d22-a7c89dbe4dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.147 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[376f4a9f-7a40-41c8-8db1-529bfca029ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.194 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[caa32111-4c3e-410a-927d-1af0fbf6f418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.202 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.203 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc448cd7-afa0-4123-b9cd-6d7ff27cf3f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377881, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.228 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.250 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4efdcf04-56d3-4b0c-a026-4cb6204fd3a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc883e14c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707938, 'tstamp': 707938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377882, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.252 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.255 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.257 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.301 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.302 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.309 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.310 253665 INFO nova.compute.claims [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:38:31 compute-0 ceph-mon[75021]: pgmap v2298: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 200 op/s
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.522 253665 INFO nova.virt.libvirt.driver [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deleting instance files /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89_del
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.523 253665 INFO nova.virt.libvirt.driver [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deletion of /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89_del complete
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.559 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.612 253665 INFO nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.613 253665 DEBUG oslo.service.loopingcall [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.614 253665 DEBUG nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.614 253665 DEBUG nova.network.neutron [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.751 253665 DEBUG nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.751 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.752 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.752 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.753 253665 DEBUG nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:31 compute-0 nova_compute[253661]: 2025-11-22 09:38:31.754 253665 WARNING nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received unexpected event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with vm_state active and task_state None.
Nov 22 09:38:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 348 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 216 op/s
Nov 22 09:38:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532818359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.048 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.060 253665 DEBUG nova.compute.provider_tree [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.073 253665 DEBUG nova.scheduler.client.report [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.089 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.090 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.130 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.131 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.148 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.171 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.267 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.269 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.270 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating image(s)
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.299 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.328 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.357 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3532818359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.364 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.426 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.427 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.431 253665 DEBUG nova.policy [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.453 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.482 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.483 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.483 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.484 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.518 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.525 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.709 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.710 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.721 253665 INFO nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Terminating instance
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.723 253665 DEBUG nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.842 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.843 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.843 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 WARNING nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with vm_state active and task_state deleting.
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 WARNING nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with vm_state active and task_state deleting.
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-deleted-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 INFO nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Neutron deleted interface d7659b3e-3579-403f-b319-ceb538d9c201; detaching it from the instance and deleting it from the info cache
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.network.neutron [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:32 compute-0 kernel: tapa28c191e-c7 (unregistering): left promiscuous mode
Nov 22 09:38:32 compute-0 NetworkManager[48920]: <info>  [1763804312.8677] device (tapa28c191e-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:32 compute-0 ovn_controller[152872]: 2025-11-22T09:38:32Z|01251|binding|INFO|Releasing lport a28c191e-c725-404b-a4cb-e5b89c914f67 from this chassis (sb_readonly=0)
Nov 22 09:38:32 compute-0 ovn_controller[152872]: 2025-11-22T09:38:32Z|01252|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 down in Southbound
Nov 22 09:38:32 compute-0 ovn_controller[152872]: 2025-11-22T09:38:32Z|01253|binding|INFO|Removing iface tapa28c191e-c7 ovn-installed in OVS
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.886 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:6e:9b 10.100.0.3'], port_security=['fa:16:3e:13:6e:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e3727ef-288f-4e26-8d29-f85423546391', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d994c6fb-564e-4523-afe4-89804b993385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '621b389d-2096-4ee1-8e3b-c5cb3466897b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56f26a9d-1a5c-40a1-8f03-488332bb450e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a28c191e-c725-404b-a4cb-e5b89c914f67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.888 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a28c191e-c725-404b-a4cb-e5b89c914f67 in datapath d994c6fb-564e-4523-afe4-89804b993385 unbound from our chassis
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.889 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Detach interface failed, port_id=d7659b3e-3579-403f-b319-ceb538d9c201, reason: Instance f16662c4-9b4f-4060-ac76-ebfb960dbb89 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:38:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.890 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d994c6fb-564e-4523-afe4-89804b993385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.891 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d69e81-4f3e-42c3-955e-f60a41be8a46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.892 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 namespace which is not needed anymore
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:32 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 22 09:38:32 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 4.229s CPU time.
Nov 22 09:38:32 compute-0 systemd-machined[215941]: Machine qemu-151-instance-00000078 terminated.
Nov 22 09:38:32 compute-0 nova_compute[253661]: 2025-11-22 09:38:32.934 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance destroyed successfully.
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.029 253665 DEBUG nova.objects.instance [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'resources' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.036 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:38:33 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : haproxy version is 2.8.14-c23fe91
Nov 22 09:38:33 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : path to executable is /usr/sbin/haproxy
Nov 22 09:38:33 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [WARNING]  (377783) : Exiting Master process...
Nov 22 09:38:33 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [ALERT]    (377783) : Current worker (377785) exited with code 143 (Terminated)
Nov 22 09:38:33 compute-0 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [WARNING]  (377783) : All workers exited. Exiting... (0)
Nov 22 09:38:33 compute-0 systemd[1]: libpod-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope: Deactivated successfully.
Nov 22 09:38:33 compute-0 podman[378050]: 2025-11-22 09:38:33.069076802 +0000 UTC m=+0.049652025 container died 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.078 253665 DEBUG nova.virt.libvirt.vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-495917723',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:29Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.079 253665 DEBUG nova.network.os_vif_util [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.080 253665 DEBUG nova.network.os_vif_util [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.081 253665 DEBUG os_vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa28c191e-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.095 253665 INFO os_vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7')
Nov 22 09:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5-userdata-shm.mount: Deactivated successfully.
Nov 22 09:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ce9d2c62dc97e3c042547d057ab8624862e781d05baaf78a7c8f0d1a0a1d0c-merged.mount: Deactivated successfully.
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.123 253665 DEBUG nova.network.neutron [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:33 compute-0 podman[378050]: 2025-11-22 09:38:33.124793358 +0000 UTC m=+0.105368601 container cleanup 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 09:38:33 compute-0 systemd[1]: libpod-conmon-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope: Deactivated successfully.
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.144 253665 INFO nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 1.53 seconds to deallocate network for instance.
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.192 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.193 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.201 253665 DEBUG nova.objects.instance [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:33 compute-0 podman[378129]: 2025-11-22 09:38:33.212408718 +0000 UTC m=+0.053914423 container remove 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.213 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.213 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Ensure instance console log exists: /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[086645fb-c4ea-48d8-98e2-f66f527d5fb6]: (4, ('Sat Nov 22 09:38:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 (0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5)\n0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5\nSat Nov 22 09:38:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 (0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5)\n0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e401cb84-0360-4df3-b3cc-cdf1fbc206f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.226 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd994c6fb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:33 compute-0 kernel: tapd994c6fb-50: left promiscuous mode
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.255 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3b013c22-f891-467a-a441-94dfc61ba373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.274 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9087a841-4e62-44b0-80ec-664445b14f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.276 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b909443-d2fe-453b-8f7c-044cc7d79a9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.295 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac859420-1ebd-46af-beb2-dbc7c9b51ef8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715572, 'reachable_time': 24910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378165, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 systemd[1]: run-netns-ovnmeta\x2dd994c6fb\x2d564e\x2d4523\x2dafe4\x2d89804b993385.mount: Deactivated successfully.
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.299 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:38:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.299 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fac5e1ea-9f3d-4d84-99d3-17c2fc09895d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.336 253665 DEBUG oslo_concurrency.processutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:33 compute-0 ceph-mon[75021]: pgmap v2299: 305 pgs: 305 active+clean; 348 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 216 op/s
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.444 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully created port: 12ab8505-5ae2-427c-aaf6-9431683a99c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.602 253665 INFO nova.virt.libvirt.driver [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deleting instance files /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391_del
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.603 253665 INFO nova.virt.libvirt.driver [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deletion of /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391_del complete
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.653 253665 INFO nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.653 253665 DEBUG oslo.service.loopingcall [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.654 253665 DEBUG nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.654 253665 DEBUG nova.network.neutron [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807006791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.838 253665 DEBUG oslo_concurrency.processutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.845 253665 DEBUG nova.compute.provider_tree [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.859 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.863 253665 DEBUG nova.scheduler.client.report [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.888 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.924 253665 INFO nova.scheduler.client.report [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance f16662c4-9b4f-4060-ac76-ebfb960dbb89
Nov 22 09:38:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 271 op/s
Nov 22 09:38:33 compute-0 nova_compute[253661]: 2025-11-22 09:38:33.999 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2807006791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.390720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314390786, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 253, "total_data_size": 3275166, "memory_usage": 3321744, "flush_reason": "Manual Compaction"}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314416618, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3196116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45301, "largest_seqno": 47410, "table_properties": {"data_size": 3186732, "index_size": 5814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20186, "raw_average_key_size": 20, "raw_value_size": 3167563, "raw_average_value_size": 3215, "num_data_blocks": 257, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804113, "oldest_key_time": 1763804113, "file_creation_time": 1763804314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 25966 microseconds, and 9362 cpu microseconds.
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.416686) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3196116 bytes OK
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.416718) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419632) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419689) EVENT_LOG_v1 {"time_micros": 1763804314419675, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3266239, prev total WAL file size 3266239, number of live WAL files 2.
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.421024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3121KB)], [104(8768KB)]
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314421088, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12174890, "oldest_snapshot_seqno": -1}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7122 keys, 10511573 bytes, temperature: kUnknown
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314515125, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10511573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462303, "index_size": 30342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183560, "raw_average_key_size": 25, "raw_value_size": 10333346, "raw_average_value_size": 1450, "num_data_blocks": 1191, "num_entries": 7122, "num_filter_entries": 7122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.515502) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10511573 bytes
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.516929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.3 rd, 111.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7644, records dropped: 522 output_compression: NoCompression
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.516957) EVENT_LOG_v1 {"time_micros": 1763804314516944, "job": 62, "event": "compaction_finished", "compaction_time_micros": 94136, "compaction_time_cpu_micros": 28926, "output_level": 6, "num_output_files": 1, "total_output_size": 10511573, "num_input_records": 7644, "num_output_records": 7122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314518175, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314520960, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.420871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.521 253665 DEBUG nova.network.neutron [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.536 253665 INFO nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 0.88 seconds to deallocate network for instance.
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.581 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.582 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.698 253665 DEBUG oslo_concurrency.processutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.923 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully updated port: 12ab8505-5ae2-427c-aaf6-9431683a99c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.929 253665 DEBUG nova.compute.manager [req-9477974e-3184-474b-8d7a-33264c0977c4 req-36f5b0fe-3e5b-4c15-ab2d-d45157f71976 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-deleted-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.929 253665 DEBUG nova.compute.manager [req-9477974e-3184-474b-8d7a-33264c0977c4 req-36f5b0fe-3e5b-4c15-ab2d-d45157f71976 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-deleted-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:34 compute-0 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.095 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.095 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.096 253665 INFO nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Terminating instance
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.097 253665 DEBUG nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.133 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:38:35 compute-0 kernel: tap18df29f5-36 (unregistering): left promiscuous mode
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.157 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.158 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:35 compute-0 NetworkManager[48920]: <info>  [1763804315.1679] device (tap18df29f5-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01254|binding|INFO|Releasing lport 18df29f5-368d-4b94-ac69-8541de164d02 from this chassis (sb_readonly=0)
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01255|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 down in Southbound
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01256|binding|INFO|Removing iface tap18df29f5-36 ovn-installed in OVS
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.187 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:34:a1 10.100.0.7'], port_security=['fa:16:3e:90:34:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=18df29f5-368d-4b94-ac69-8541de164d02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.189 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 18df29f5-368d-4b94-ac69-8541de164d02 in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 unbound from our chassis
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.191 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a3f352-95a9-4122-aecd-94a4bbf79683, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.191 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[809a063a-c949-46f4-8de7-b4dabd5c5091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.193 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 namespace which is not needed anymore
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 WARNING nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received unexpected event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with vm_state active and task_state deleting.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980003339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 kernel: tapa8c9a54f-9f (unregistering): left promiscuous mode
Nov 22 09:38:35 compute-0 NetworkManager[48920]: <info>  [1763804315.2218] device (tapa8c9a54f-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.225 253665 DEBUG oslo_concurrency.processutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01257|binding|INFO|Releasing lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de from this chassis (sb_readonly=0)
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01258|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de down in Southbound
Nov 22 09:38:35 compute-0 ovn_controller[152872]: 2025-11-22T09:38:35Z|01259|binding|INFO|Removing iface tapa8c9a54f-9f ovn-installed in OVS
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.242 253665 DEBUG nova.compute.provider_tree [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.284 253665 DEBUG nova.scheduler.client.report [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.286 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], port_security=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:fd83/64', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:35 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 22 09:38:35 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 17.731s CPU time.
Nov 22 09:38:35 compute-0 systemd-machined[215941]: Machine qemu-144-instance-00000073 terminated.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.306 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.329 253665 INFO nova.scheduler.client.report [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Deleted allocations for instance 6e3727ef-288f-4e26-8d29-f85423546391
Nov 22 09:38:35 compute-0 NetworkManager[48920]: <info>  [1763804315.3360] manager: (tapa8c9a54f-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : haproxy version is 2.8.14-c23fe91
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : path to executable is /usr/sbin/haproxy
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [WARNING]  (373429) : Exiting Master process...
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [ALERT]    (373429) : Current worker (373431) exited with code 143 (Terminated)
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [WARNING]  (373429) : All workers exited. Exiting... (0)
Nov 22 09:38:35 compute-0 systemd[1]: libpod-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope: Deactivated successfully.
Nov 22 09:38:35 compute-0 podman[378236]: 2025-11-22 09:38:35.350928759 +0000 UTC m=+0.057112421 container died aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.363 253665 INFO nova.virt.libvirt.driver [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance destroyed successfully.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.364 253665 DEBUG nova.objects.instance [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd-userdata-shm.mount: Deactivated successfully.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.383 253665 DEBUG nova.virt.libvirt.vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:09Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.385 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.385 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-871bb839a73d656755f89d39a1db17cd9b58ff25f5a2a0710db15a0e02acd3f2-merged.mount: Deactivated successfully.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.386 253665 DEBUG os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18df29f5-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:35 compute-0 ceph-mon[75021]: pgmap v2300: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 271 op/s
Nov 22 09:38:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2980003339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.403 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:35 compute-0 podman[378236]: 2025-11-22 09:38:35.404228455 +0000 UTC m=+0.110412117 container cleanup aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.404 253665 INFO os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36')
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.405 253665 DEBUG nova.virt.libvirt.vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:09Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.406 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.407 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.407 253665 DEBUG os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.410 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9a54f-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.416 253665 INFO os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f')
Nov 22 09:38:35 compute-0 systemd[1]: libpod-conmon-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope: Deactivated successfully.
Nov 22 09:38:35 compute-0 podman[378284]: 2025-11-22 09:38:35.517346338 +0000 UTC m=+0.080772480 container remove aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[337e7e91-7034-4faa-9dde-73e6f04ca691]: (4, ('Sat Nov 22 09:38:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 (aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd)\naecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd\nSat Nov 22 09:38:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 (aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd)\naecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8733e1b-6189-42d4-9630-8599dcc9555d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.529 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 kernel: tapa1a3f352-90: left promiscuous mode
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9d2307-edcf-463c-adc6-017955e2f729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[697b809a-1128-4484-9b01-809161c59350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31221561-68fc-4869-9d07-0469c56e2f68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.591 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e55965a-87dd-4334-bbc0-cc5a029dcafe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707806, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378317, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 systemd[1]: run-netns-ovnmeta\x2da1a3f352\x2d95a9\x2d4122\x2daecd\x2d94a4bbf79683.mount: Deactivated successfully.
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.595 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.596 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea8ccc0-b61b-4eb1-97cd-6f7414668902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.596 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.598 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c883e14c-ad7e-49eb-b0c3-2571140d1e57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[937a0986-32d4-4aa7-9d2d-26a0f8648012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.600 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 namespace which is not needed anymore
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : haproxy version is 2.8.14-c23fe91
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : path to executable is /usr/sbin/haproxy
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [WARNING]  (373557) : Exiting Master process...
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [ALERT]    (373557) : Current worker (373559) exited with code 143 (Terminated)
Nov 22 09:38:35 compute-0 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [WARNING]  (373557) : All workers exited. Exiting... (0)
Nov 22 09:38:35 compute-0 systemd[1]: libpod-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope: Deactivated successfully.
Nov 22 09:38:35 compute-0 conmon[373552]: conmon e8905d359dd14150034f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope/container/memory.events
Nov 22 09:38:35 compute-0 podman[378335]: 2025-11-22 09:38:35.751295236 +0000 UTC m=+0.048108197 container died e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868-userdata-shm.mount: Deactivated successfully.
Nov 22 09:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8abda532727a5eec38dc91090bcebb9f55c2e54221da7f33b18490782350886-merged.mount: Deactivated successfully.
Nov 22 09:38:35 compute-0 podman[378335]: 2025-11-22 09:38:35.872366017 +0000 UTC m=+0.169178978 container cleanup e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:38:35 compute-0 systemd[1]: libpod-conmon-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope: Deactivated successfully.
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.946 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.947 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.948 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Nov 22 09:38:35 compute-0 podman[378366]: 2025-11-22 09:38:35.978373013 +0000 UTC m=+0.079107198 container remove e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59be89f3-5750-47fe-8b77-16d13201659d]: (4, ('Sat Nov 22 09:38:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 (e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868)\ne8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868\nSat Nov 22 09:38:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 (e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868)\ne8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.988 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd9b9ce-7eac-4009-823d-719c8313d508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:35 compute-0 nova_compute[253661]: 2025-11-22 09:38:35.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:35 compute-0 kernel: tapc883e14c-a0: left promiscuous mode
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb61b2a3-f419-4500-b76f-c5eb4bcca2a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6242b209-6442-4663-854d-1836ef5e1845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.034 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4220ef93-25f3-49ec-a3e8-cfc462bc51d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08bd7ff2-930d-45d0-b6bf-d01711b2bedd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707912, 'reachable_time': 35242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378382, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.058 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:38:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.058 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[00206ffc-dd3c-480a-be98-db92cf0792fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.247 253665 INFO nova.virt.libvirt.driver [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deleting instance files /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7_del
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.248 253665 INFO nova.virt.libvirt.driver [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deletion of /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7_del complete
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.346 253665 INFO nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 1.25 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG oslo.service.loopingcall [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG nova.network.neutron [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dc883e14c\x2dad7e\x2d49eb\x2db0c3\x2d2571140d1e57.mount: Deactivated successfully.
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.610 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.628 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.628 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance network_info: |[{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.629 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.629 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.632 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start _get_guest_xml network_info=[{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.640 253665 WARNING nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.649 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.650 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.655 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:38:36 compute-0 nova_compute[253661]: 2025-11-22 09:38:36.661 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.011 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.013 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.014 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.014 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.016 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.016 253665 WARNING nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with vm_state active and task_state deleting.
Nov 22 09:38:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152459162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:37 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.155 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.182 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.187 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.301 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.302 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.304 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:37 compute-0 ceph-mon[75021]: pgmap v2301: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Nov 22 09:38:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3152459162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:38:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518556906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.692 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.694 253665 DEBUG nova.virt.libvirt.vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:32Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.694 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.695 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.697 253665 DEBUG nova.objects.instance [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.711 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <name>instance-00000079</name>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:38:36</nova:creationTime>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:38:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="serial">9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="uuid">9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk">
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config">
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:38:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:30:a0:d3"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <target dev="tap12ab8505-5a"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log" append="off"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:38:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:38:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:38:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:38:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:38:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.712 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Preparing to wait for external event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.714 253665 DEBUG nova.virt.libvirt.vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:32Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.714 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.715 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.715 253665 DEBUG os_vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12ab8505-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.721 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12ab8505-5a, col_values=(('external_ids', {'iface-id': '12ab8505-5ae2-427c-aaf6-9431683a99c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:a0:d3', 'vm-uuid': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:37 compute-0 NetworkManager[48920]: <info>  [1763804317.7236] manager: (tap12ab8505-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.730 253665 INFO os_vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a')
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:30:a0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.794 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Using config drive
Nov 22 09:38:37 compute-0 nova_compute[253661]: 2025-11-22 09:38:37.815 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 252 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 271 op/s
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.038 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 WARNING nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with vm_state active and task_state deleting.
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-deleted-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 INFO nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Neutron deleted interface 18df29f5-368d-4b94-ac69-8541de164d02; detaching it from the instance and deleting it from the info cache
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.network.neutron [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.069 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Detach interface failed, port_id=18df29f5-368d-4b94-ac69-8541de164d02, reason: Instance 2837c740-6ce1-47d5-ad27-107211f74db7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.097 253665 DEBUG nova.network.neutron [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.113 253665 INFO nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 1.77 seconds to deallocate network for instance.
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.168 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.168 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.284 253665 DEBUG oslo_concurrency.processutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.332 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:a8:14 10.100.0.8
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:a8:14 10.100.0.8
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.341 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating config drive at /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.348 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t_sogkw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.505 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t_sogkw" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.586 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.592 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2518556906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.637 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.639 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.660 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.662 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.662 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160455238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.768 253665 DEBUG oslo_concurrency.processutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.774 253665 DEBUG nova.compute.provider_tree [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.788 253665 DEBUG nova.scheduler.client.report [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.806 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.830 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.831 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deleting local config drive /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config because it was imported into RBD.
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.839 253665 INFO nova.scheduler.client.report [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 2837c740-6ce1-47d5-ad27-107211f74db7
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.847 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:38:38 compute-0 kernel: tap12ab8505-5a: entered promiscuous mode
Nov 22 09:38:38 compute-0 NetworkManager[48920]: <info>  [1763804318.8889] manager: (tap12ab8505-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|01260|binding|INFO|Claiming lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 for this chassis.
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|01261|binding|INFO|12ab8505-5ae2-427c-aaf6-9431683a99c8: Claiming fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.901 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:a0:d3 10.100.0.3'], port_security=['fa:16:3e:30:a0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80502cee-0a40-4541-8461-41de74f7266c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c64167e3-035b-4f1b-bee4-b85857c701f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1bcb3a6-b65a-4848-8c3b-e1169d9ae614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=12ab8505-5ae2-427c-aaf6-9431683a99c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.903 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 12ab8505-5ae2-427c-aaf6-9431683a99c8 in datapath 80502cee-0a40-4541-8461-41de74f7266c bound to our chassis
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.905 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80502cee-0a40-4541-8461-41de74f7266c
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|01262|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 ovn-installed in OVS
Nov 22 09:38:38 compute-0 ovn_controller[152872]: 2025-11-22T09:38:38Z|01263|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 up in Southbound
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0df37f99-2615-4ef7-829d-5783790201d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.924 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80502cee-01 in ovnmeta-80502cee-0a40-4541-8461-41de74f7266c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.927 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80502cee-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.927 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bdc32b-2429-4ac5-9999-c1dcce2ed884]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.929 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe0886-c9a9-4542-99f8-e96ca2850d5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:38 compute-0 systemd-machined[215941]: New machine qemu-152-instance-00000079.
Nov 22 09:38:38 compute-0 systemd-udevd[378542]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:38 compute-0 nova_compute[253661]: 2025-11-22 09:38:38.933 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.943 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0a60bcc4-ee81-4191-9c52-d6e51cb0711c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:38 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Nov 22 09:38:38 compute-0 NetworkManager[48920]: <info>  [1763804318.9541] device (tap12ab8505-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:38:38 compute-0 NetworkManager[48920]: <info>  [1763804318.9562] device (tap12ab8505-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:38:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.973 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11332b59-df49-4738-8473-5dfac97326fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.015 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2527b8b-3a67-4744-a4ef-6fe973a01997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 systemd-udevd[378546]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a2ddee-0074-44c0-b289-ad373c466fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 NetworkManager[48920]: <info>  [1763804319.0244] manager: (tap80502cee-00): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e6407ea4-5d61-4427-8965-76ffe7199a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.064 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6f44c-59e5-4353-847e-ccff14c64ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 NetworkManager[48920]: <info>  [1763804319.0936] device (tap80502cee-00): carrier: link connected
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.103 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8a035b0b-0e85-400c-96f6-30a3635e21b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bad26af-7fe9-4fad-918b-5b25ca43829a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80502cee-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716983, 'reachable_time': 31024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378574, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.141 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0413c694-f8a1-48ba-a490-9f7e97396c51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:2a77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716983, 'tstamp': 716983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378575, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cadfda-edec-453e-862c-befb48df79a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80502cee-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716983, 'reachable_time': 31024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378576, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.202 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf4a3cc-5857-4eb7-ba31-0345ea864fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.232 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.245 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6d3c2d-ebc0-4937-b71f-5ba6c3f0af4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.280 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80502cee-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.280 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.281 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80502cee-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:39 compute-0 kernel: tap80502cee-00: entered promiscuous mode
Nov 22 09:38:39 compute-0 NetworkManager[48920]: <info>  [1763804319.2840] manager: (tap80502cee-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.287 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80502cee-00, col_values=(('external_ids', {'iface-id': 'e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.288 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:39 compute-0 ovn_controller[152872]: 2025-11-22T09:38:39Z|01264|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.306 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[984f8d1f-d79d-4561-b318-785505f12384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.309 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-80502cee-0a40-4541-8461-41de74f7266c
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 80502cee-0a40-4541-8461-41de74f7266c
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:38:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.309 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'env', 'PROCESS_TAG=haproxy-80502cee-0a40-4541-8461-41de74f7266c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80502cee-0a40-4541-8461-41de74f7266c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.511 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804319.5109382, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.522 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Started (Lifecycle Event)
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.547 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804319.512336, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.548 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Paused (Lifecycle Event)
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.563 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.587 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:39 compute-0 ceph-mon[75021]: pgmap v2302: 305 pgs: 305 active+clean; 252 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 271 op/s
Nov 22 09:38:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4160455238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:39 compute-0 ovn_controller[152872]: 2025-11-22T09:38:39Z|01265|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 09:38:39 compute-0 ovn_controller[152872]: 2025-11-22T09:38:39Z|01266|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:38:39 compute-0 ovn_controller[152872]: 2025-11-22T09:38:39Z|01267|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:38:39 compute-0 nova_compute[253661]: 2025-11-22 09:38:39.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:39 compute-0 podman[378650]: 2025-11-22 09:38:39.76456182 +0000 UTC m=+0.077436186 container create 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:38:39 compute-0 podman[378650]: 2025-11-22 09:38:39.720225728 +0000 UTC m=+0.033100114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:38:39 compute-0 systemd[1]: Started libpod-conmon-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope.
Nov 22 09:38:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f37bc7976ef071fba5e665fbf02212daac86ba36a65621a352596852c57db43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:39 compute-0 podman[378650]: 2025-11-22 09:38:39.883915609 +0000 UTC m=+0.196790005 container init 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:38:39 compute-0 podman[378650]: 2025-11-22 09:38:39.891435946 +0000 UTC m=+0.204310312 container start 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:38:39 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : New worker (378671) forked
Nov 22 09:38:39 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : Loading success.
Nov 22 09:38:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 242 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 283 op/s
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.191 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-deleted-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.192 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.192 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.193 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.193 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.194 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Processing event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.194 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.195 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.195 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.196 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.196 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.197 253665 WARNING nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with vm_state building and task_state spawning.
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.198 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.204 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804320.2041552, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Resumed (Lifecycle Event)
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.208 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.215 253665 INFO nova.virt.libvirt.driver [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance spawned successfully.
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.216 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.247 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.249 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.249 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.252 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.315 253665 INFO nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 8.05 seconds to spawn the instance on the hypervisor.
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.316 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.379 253665 INFO nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 9.11 seconds to build instance.
Nov 22 09:38:40 compute-0 nova_compute[253661]: 2025-11-22 09:38:40.394 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:41 compute-0 nova_compute[253661]: 2025-11-22 09:38:41.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:41 compute-0 nova_compute[253661]: 2025-11-22 09:38:41.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:41 compute-0 ceph-mon[75021]: pgmap v2303: 305 pgs: 305 active+clean; 242 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 283 op/s
Nov 22 09:38:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Nov 22 09:38:42 compute-0 nova_compute[253661]: 2025-11-22 09:38:42.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:43 compute-0 ceph-mon[75021]: pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Nov 22 09:38:43 compute-0 nova_compute[253661]: 2025-11-22 09:38:43.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:43 compute-0 ovn_controller[152872]: 2025-11-22T09:38:43Z|01268|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 09:38:43 compute-0 ovn_controller[152872]: 2025-11-22T09:38:43Z|01269|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:38:43 compute-0 ovn_controller[152872]: 2025-11-22T09:38:43Z|01270|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:38:43 compute-0 nova_compute[253661]: 2025-11-22 09:38:43.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:38:44 compute-0 ovn_controller[152872]: 2025-11-22T09:38:44Z|01271|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 09:38:44 compute-0 ovn_controller[152872]: 2025-11-22T09:38:44Z|01272|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:38:44 compute-0 ovn_controller[152872]: 2025-11-22T09:38:44Z|01273|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.630 253665 DEBUG nova.compute.manager [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.630 253665 DEBUG nova.compute.manager [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:44 compute-0 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:45 compute-0 ceph-mon[75021]: pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 09:38:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.019 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804311.0169735, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.020 253665 INFO nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Stopped (Lifecycle Event)
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.042 253665 DEBUG nova.compute.manager [None req-1547c055-69e9-402c-b169-05a7611c7efa - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.200 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.201 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.220 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084621905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:46 compute-0 nova_compute[253661]: 2025-11-22 09:38:46.986 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.066 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.067 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.071 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.072 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.076 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.298 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.302 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3122MB free_disk=59.87643051147461GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.302 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.303 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d2f5b215-3a41-451c-8ad8-68b17c96a678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:38:47 compute-0 ceph-mon[75021]: pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 09:38:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4084621905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.986 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804312.9588223, 6e3727ef-288f-4e26-8d29-f85423546391 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:47 compute-0 nova_compute[253661]: 2025-11-22 09:38:47.987 253665 INFO nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Stopped (Lifecycle Event)
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.002 253665 DEBUG nova.compute.manager [None req-00163750-ba67-49c2-a919-07ae60c89bcc - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797788429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.206 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.213 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.228 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:38:48 compute-0 nova_compute[253661]: 2025-11-22 09:38:48.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3797788429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:49 compute-0 nova_compute[253661]: 2025-11-22 09:38:49.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:49 compute-0 nova_compute[253661]: 2025-11-22 09:38:49.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:49 compute-0 nova_compute[253661]: 2025-11-22 09:38:49.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:38:49 compute-0 ceph-mon[75021]: pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 09:38:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 909 KiB/s wr, 134 op/s
Nov 22 09:38:50 compute-0 nova_compute[253661]: 2025-11-22 09:38:50.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:38:50 compute-0 nova_compute[253661]: 2025-11-22 09:38:50.353 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804315.3516412, 2837c740-6ce1-47d5-ad27-107211f74db7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:38:50 compute-0 nova_compute[253661]: 2025-11-22 09:38:50.353 253665 INFO nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Stopped (Lifecycle Event)
Nov 22 09:38:50 compute-0 nova_compute[253661]: 2025-11-22 09:38:50.376 253665 DEBUG nova.compute.manager [None req-41fabec6-82e4-4d3f-b70c-3655ff8db9fb - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:38:51 compute-0 nova_compute[253661]: 2025-11-22 09:38:51.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:51 compute-0 ceph-mon[75021]: pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 909 KiB/s wr, 134 op/s
Nov 22 09:38:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 77 op/s
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:38:52
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:38:52 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 09:38:52 compute-0 podman[378725]: 2025-11-22 09:38:52.383805162 +0000 UTC m=+0.077183241 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 09:38:52 compute-0 podman[378726]: 2025-11-22 09:38:52.397548854 +0000 UTC m=+0.089706972 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.482 253665 DEBUG nova.compute.manager [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.482 253665 DEBUG nova.compute.manager [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.605 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.608 253665 INFO nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Terminating instance
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.609 253665 DEBUG nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:38:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:52 compute-0 kernel: tap761d949a-b3 (unregistering): left promiscuous mode
Nov 22 09:38:52 compute-0 NetworkManager[48920]: <info>  [1763804332.8345] device (tap761d949a-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:52 compute-0 ovn_controller[152872]: 2025-11-22T09:38:52Z|01274|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=0)
Nov 22 09:38:52 compute-0 ovn_controller[152872]: 2025-11-22T09:38:52Z|01275|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 down in Southbound
Nov 22 09:38:52 compute-0 ovn_controller[152872]: 2025-11-22T09:38:52Z|01276|binding|INFO|Removing iface tap761d949a-b3 ovn-installed in OVS
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.851 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.852 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis
Nov 22 09:38:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.854 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.856 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1908c454-5d3e-42d3-8356-17d36aa0a193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.856 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd namespace which is not needed anymore
Nov 22 09:38:52 compute-0 nova_compute[253661]: 2025-11-22 09:38:52.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:52 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 22 09:38:52 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 14.491s CPU time.
Nov 22 09:38:52 compute-0 systemd-machined[215941]: Machine qemu-150-instance-00000077 terminated.
Nov 22 09:38:53 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : haproxy version is 2.8.14-c23fe91
Nov 22 09:38:53 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : path to executable is /usr/sbin/haproxy
Nov 22 09:38:53 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [WARNING]  (377606) : Exiting Master process...
Nov 22 09:38:53 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [ALERT]    (377606) : Current worker (377608) exited with code 143 (Terminated)
Nov 22 09:38:53 compute-0 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [WARNING]  (377606) : All workers exited. Exiting... (0)
Nov 22 09:38:53 compute-0 systemd[1]: libpod-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope: Deactivated successfully.
Nov 22 09:38:53 compute-0 kernel: tap761d949a-b3: entered promiscuous mode
Nov 22 09:38:53 compute-0 kernel: tap761d949a-b3 (unregistering): left promiscuous mode
Nov 22 09:38:53 compute-0 podman[378786]: 2025-11-22 09:38:53.035933259 +0000 UTC m=+0.070047793 container died 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:38:53 compute-0 NetworkManager[48920]: <info>  [1763804333.0399] manager: (tap761d949a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.039 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01277|binding|INFO|Claiming lport 761d949a-b334-4144-be7a-5f02c905c715 for this chassis.
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01278|binding|INFO|761d949a-b334-4144-be7a-5f02c905c715: Claiming fa:16:3e:85:a8:14 10.100.0.8
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.052 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.059 253665 INFO nova.virt.libvirt.driver [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance destroyed successfully.
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.059 253665 DEBUG nova.objects.instance [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'resources' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01279|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 ovn-installed in OVS
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01280|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 up in Southbound
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01281|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=1)
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01282|if_status|INFO|Dropped 2 log messages in last 235 seconds (most recently, 235 seconds ago) due to excessive rate
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01283|if_status|INFO|Not setting lport 761d949a-b334-4144-be7a-5f02c905c715 down as sb is readonly
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01284|binding|INFO|Removing iface tap761d949a-b3 ovn-installed in OVS
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01285|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=0)
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|01286|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 down in Southbound
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.073 253665 DEBUG nova.virt.libvirt.vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:23Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.074 253665 DEBUG nova.network.os_vif_util [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.075 253665 DEBUG nova.network.os_vif_util [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.075 253665 DEBUG os_vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap761d949a-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.080 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:38:53 compute-0 sudo[378798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-58bc45d9f28847928b45ee55af7005c19133177176f15c3c95d23db67f15d5f1-merged.mount: Deactivated successfully.
Nov 22 09:38:53 compute-0 sudo[378798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.093 253665 INFO os_vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3')
Nov 22 09:38:53 compute-0 sudo[378798]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:53 compute-0 podman[378786]: 2025-11-22 09:38:53.108405732 +0000 UTC m=+0.142520246 container cleanup 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:38:53 compute-0 systemd[1]: libpod-conmon-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope: Deactivated successfully.
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.121 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.121 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:38:53 compute-0 sudo[378845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:38:53 compute-0 sudo[378845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:53 compute-0 sudo[378845]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:53 compute-0 podman[378864]: 2025-11-22 09:38:53.202221775 +0000 UTC m=+0.064275940 container remove 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.210 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b32bb7e9-c9bf-4978-87c4-070ae19408ae]: (4, ('Sat Nov 22 09:38:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd (2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e)\n2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e\nSat Nov 22 09:38:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd (2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e)\n2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.212 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[306c1ebc-6129-4bf3-817d-8f04065901b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.213 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9b48b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:38:53 compute-0 sudo[378897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:53 compute-0 sudo[378897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:53 compute-0 kernel: tapa8c9b48b-60: left promiscuous mode
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 sudo[378897]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.252 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78d73fec-1749-4972-bf24-d4b98154b2c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.264 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ce9788-8d3d-43a0-a9ea-5b05a8fd2a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[daeb9bad-3c20-4dae-baf4-2c5bcb80a971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1797bbf6-d025-459e-a1b9-fab9aa3ed056]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715347, 'reachable_time': 22668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378932, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 systemd[1]: run-netns-ovnmeta\x2da8c9b48b\x2d687a\x2d480f\x2daff5\x2dbd1fee4c2bbd.mount: Deactivated successfully.
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.298 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.298 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e436cb-3599-4fef-a6a7-e8830891dc7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.303 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.306 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58beda35-9087-4e39-aaf3-732898eb9985]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.308 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.310 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:38:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.310 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[061ab4b5-f22f-4a6e-9cac-81d80d9b6177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:38:53 compute-0 sudo[378922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:38:53 compute-0 sudo[378922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:53 compute-0 ceph-mon[75021]: pgmap v2309: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 77 op/s
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.794 253665 INFO nova.virt.libvirt.driver [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deleting instance files /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_del
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.795 253665 INFO nova.virt.libvirt.driver [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deletion of /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_del complete
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 09:38:53 compute-0 ovn_controller[152872]: 2025-11-22T09:38:53Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 INFO nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 1.26 seconds to destroy the instance on the hypervisor.
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 DEBUG oslo.service.loopingcall [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 DEBUG nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:38:53 compute-0 nova_compute[253661]: 2025-11-22 09:38:53.875 253665 DEBUG nova.network.neutron [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:38:53 compute-0 podman[379025]: 2025-11-22 09:38:53.919677277 +0000 UTC m=+0.068546276 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:38:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1012 KiB/s wr, 76 op/s
Nov 22 09:38:54 compute-0 podman[379025]: 2025-11-22 09:38:54.025840077 +0000 UTC m=+0.174709056 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:38:54 compute-0 ceph-mon[75021]: pgmap v2310: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1012 KiB/s wr, 76 op/s
Nov 22 09:38:55 compute-0 sudo[378922]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:38:55 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:38:55 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:55 compute-0 sudo[379181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:55 compute-0 sudo[379181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:55 compute-0 sudo[379181]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.220 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 WARNING nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state deleting.
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.226 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.226 253665 WARNING nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state deleting.
Nov 22 09:38:55 compute-0 sudo[379206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:38:55 compute-0 sudo[379206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:55 compute-0 sudo[379206]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:55 compute-0 sudo[379231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:55 compute-0 sudo[379231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.335 253665 DEBUG nova.network.neutron [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:55 compute-0 sudo[379231]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.352 253665 INFO nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 1.48 seconds to deallocate network for instance.
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.391 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.392 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:55 compute-0 sudo[379256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:38:55 compute-0 sudo[379256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.502 253665 DEBUG oslo_concurrency.processutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.638 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.639 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.662 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:38:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:38:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920870187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 1012 KiB/s wr, 25 op/s
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.971 253665 DEBUG oslo_concurrency.processutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.980 253665 DEBUG nova.compute.provider_tree [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:38:55 compute-0 sudo[379256]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:55 compute-0 nova_compute[253661]: 2025-11-22 09:38:55.995 253665 DEBUG nova.scheduler.client.report [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:38:56 compute-0 nova_compute[253661]: 2025-11-22 09:38:56.010 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:38:56 compute-0 nova_compute[253661]: 2025-11-22 09:38:56.035 253665 INFO nova.scheduler.client.report [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Deleted allocations for instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 734191c9-652a-4b7a-bf3a-b900a14a3708 does not exist
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev daf05849-ce1f-42bf-a190-62560e25d731 does not exist
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d9b8daa6-e972-47e2-87ce-f1a74bf95add does not exist
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:38:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:38:56 compute-0 nova_compute[253661]: 2025-11-22 09:38:56.087 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:56 compute-0 sudo[379335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:56 compute-0 sudo[379335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:56 compute-0 sudo[379335]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2920870187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:38:56 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:38:56 compute-0 nova_compute[253661]: 2025-11-22 09:38:56.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:56 compute-0 sudo[379360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:38:56 compute-0 sudo[379360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:56 compute-0 sudo[379360]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:56 compute-0 sudo[379385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:56 compute-0 sudo[379385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:56 compute-0 sudo[379385]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:56 compute-0 sudo[379410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:38:56 compute-0 sudo[379410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.635816003 +0000 UTC m=+0.039799260 container create a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:38:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:38:56 compute-0 systemd[1]: Started libpod-conmon-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope.
Nov 22 09:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.619291042 +0000 UTC m=+0.023274329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.719945735 +0000 UTC m=+0.123929022 container init a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.729592105 +0000 UTC m=+0.133575362 container start a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.733109093 +0000 UTC m=+0.137092370 container attach a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:38:56 compute-0 silly_gauss[379492]: 167 167
Nov 22 09:38:56 compute-0 systemd[1]: libpod-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope: Deactivated successfully.
Nov 22 09:38:56 compute-0 conmon[379492]: conmon a4db668e1f1c776f2f55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope/container/memory.events
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.73780166 +0000 UTC m=+0.141784927 container died a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7396942851a74e4305ffeb1b48aa39b7853302604fcc518c54b6250a8c757e4-merged.mount: Deactivated successfully.
Nov 22 09:38:56 compute-0 podman[379476]: 2025-11-22 09:38:56.778391769 +0000 UTC m=+0.182375026 container remove a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:38:56 compute-0 systemd[1]: libpod-conmon-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope: Deactivated successfully.
Nov 22 09:38:56 compute-0 podman[379515]: 2025-11-22 09:38:56.971518972 +0000 UTC m=+0.048216171 container create 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:38:57 compute-0 systemd[1]: Started libpod-conmon-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope.
Nov 22 09:38:57 compute-0 podman[379515]: 2025-11-22 09:38:56.953492114 +0000 UTC m=+0.030189343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:38:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:57 compute-0 podman[379515]: 2025-11-22 09:38:57.084535343 +0000 UTC m=+0.161232562 container init 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:38:57 compute-0 podman[379515]: 2025-11-22 09:38:57.092517781 +0000 UTC m=+0.169214980 container start 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:38:57 compute-0 podman[379515]: 2025-11-22 09:38:57.098146491 +0000 UTC m=+0.174843700 container attach 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:38:57 compute-0 ceph-mon[75021]: pgmap v2311: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 1012 KiB/s wr, 25 op/s
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.354 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.354 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 WARNING nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state deleted and task_state None.
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 WARNING nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state deleted and task_state None.
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-deleted-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.357 253665 INFO nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Neutron deleted interface 761d949a-b334-4144-be7a-5f02c905c715; detaching it from the instance and deleting it from the info cache
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.357 253665 DEBUG nova.network.neutron [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 22 09:38:57 compute-0 nova_compute[253661]: 2025-11-22 09:38:57.361 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Detach interface failed, port_id=761d949a-b334-4144-be7a-5f02c905c715, reason: Instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:38:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 195 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 09:38:58 compute-0 nova_compute[253661]: 2025-11-22 09:38:58.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:38:58 compute-0 happy_elgamal[379532]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:38:58 compute-0 happy_elgamal[379532]: --> relative data size: 1.0
Nov 22 09:38:58 compute-0 happy_elgamal[379532]: --> All data devices are unavailable
Nov 22 09:38:58 compute-0 systemd[1]: libpod-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Deactivated successfully.
Nov 22 09:38:58 compute-0 systemd[1]: libpod-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Consumed 1.106s CPU time.
Nov 22 09:38:58 compute-0 podman[379515]: 2025-11-22 09:38:58.268043995 +0000 UTC m=+1.344741194 container died 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405-merged.mount: Deactivated successfully.
Nov 22 09:38:58 compute-0 podman[379515]: 2025-11-22 09:38:58.441541379 +0000 UTC m=+1.518238578 container remove 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:58 compute-0 systemd[1]: libpod-conmon-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Deactivated successfully.
Nov 22 09:38:58 compute-0 sudo[379410]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:58 compute-0 sudo[379576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:58 compute-0 sudo[379576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:58 compute-0 sudo[379576]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:58 compute-0 sudo[379607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:38:58 compute-0 sudo[379607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:58 compute-0 sudo[379607]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:38:58 compute-0 sudo[379648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:38:58 compute-0 sudo[379648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:58 compute-0 sudo[379648]: pam_unix(sudo:session): session closed for user root
Nov 22 09:38:58 compute-0 podman[379600]: 2025-11-22 09:38:58.769995797 +0000 UTC m=+0.169973508 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:38:58 compute-0 sudo[379678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:38:58 compute-0 sudo[379678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.888821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338888879, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 485, "num_deletes": 256, "total_data_size": 413910, "memory_usage": 423032, "flush_reason": "Manual Compaction"}
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338940426, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 410299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47411, "largest_seqno": 47895, "table_properties": {"data_size": 407548, "index_size": 787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6637, "raw_average_key_size": 18, "raw_value_size": 401916, "raw_average_value_size": 1125, "num_data_blocks": 34, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804315, "oldest_key_time": 1763804315, "file_creation_time": 1763804338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 51650 microseconds, and 2247 cpu microseconds.
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.940469) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 410299 bytes OK
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.940494) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996848) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996928) EVENT_LOG_v1 {"time_micros": 1763804338996914, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 411003, prev total WAL file size 411003, number of live WAL files 2.
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.997675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373534' seq:72057594037927935, type:22 .. '6C6F676D0032303036' seq:0, type:0; will stop at (end)
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(400KB)], [107(10MB)]
Nov 22 09:38:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338997722, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10921872, "oldest_snapshot_seqno": -1}
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6955 keys, 10788561 bytes, temperature: kUnknown
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339132735, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10788561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10739610, "index_size": 30471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181053, "raw_average_key_size": 26, "raw_value_size": 10612750, "raw_average_value_size": 1525, "num_data_blocks": 1193, "num_entries": 6955, "num_filter_entries": 6955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.133006) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10788561 bytes
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.149633) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.8 rd, 79.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(52.9) write-amplify(26.3) OK, records in: 7479, records dropped: 524 output_compression: NoCompression
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.149689) EVENT_LOG_v1 {"time_micros": 1763804339149670, "job": 64, "event": "compaction_finished", "compaction_time_micros": 135106, "compaction_time_cpu_micros": 27682, "output_level": 6, "num_output_files": 1, "total_output_size": 10788561, "num_input_records": 7479, "num_output_records": 6955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339149998, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339154049, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.997526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.179476181 +0000 UTC m=+0.025456664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.28122025 +0000 UTC m=+0.127200743 container create d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:38:59 compute-0 ceph-mon[75021]: pgmap v2312: 305 pgs: 305 active+clean; 195 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 09:38:59 compute-0 systemd[1]: Started libpod-conmon-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope.
Nov 22 09:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:59 compute-0 nova_compute[253661]: 2025-11-22 09:38:59.493 253665 INFO nova.compute.manager [None req-20e98732-75d2-48cc-a1c0-1dbd54b757d8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Get console output
Nov 22 09:38:59 compute-0 nova_compute[253661]: 2025-11-22 09:38:59.502 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.51157785 +0000 UTC m=+0.357558333 container init d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.519905747 +0000 UTC m=+0.365886190 container start d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:38:59 compute-0 compassionate_elion[379763]: 167 167
Nov 22 09:38:59 compute-0 systemd[1]: libpod-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope: Deactivated successfully.
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.599365513 +0000 UTC m=+0.445345996 container attach d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.600670355 +0000 UTC m=+0.446650808 container died d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fc61eb8d3cb3f8a8d3a124395d7538a58a6cd03c98f5aac6cdaa7835ac699bf-merged.mount: Deactivated successfully.
Nov 22 09:38:59 compute-0 podman[379746]: 2025-11-22 09:38:59.680182322 +0000 UTC m=+0.526162775 container remove d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:38:59 compute-0 systemd[1]: libpod-conmon-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope: Deactivated successfully.
Nov 22 09:38:59 compute-0 podman[379787]: 2025-11-22 09:38:59.879630373 +0000 UTC m=+0.042411406 container create f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:38:59 compute-0 systemd[1]: Started libpod-conmon-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope.
Nov 22 09:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:38:59 compute-0 podman[379787]: 2025-11-22 09:38:59.862316491 +0000 UTC m=+0.025097564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:38:59 compute-0 podman[379787]: 2025-11-22 09:38:59.970516633 +0000 UTC m=+0.133297696 container init f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:38:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 22 09:38:59 compute-0 podman[379787]: 2025-11-22 09:38:59.978996214 +0000 UTC m=+0.141777257 container start f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:38:59 compute-0 podman[379787]: 2025-11-22 09:38:59.983814563 +0000 UTC m=+0.146595636 container attach f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:39:00 compute-0 musing_mclean[379804]: {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     "0": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "devices": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "/dev/loop3"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             ],
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_name": "ceph_lv0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_size": "21470642176",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "name": "ceph_lv0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "tags": {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_name": "ceph",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.crush_device_class": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.encrypted": "0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_id": "0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.vdo": "0"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             },
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "vg_name": "ceph_vg0"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         }
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     ],
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     "1": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "devices": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "/dev/loop4"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             ],
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_name": "ceph_lv1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_size": "21470642176",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "name": "ceph_lv1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "tags": {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_name": "ceph",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.crush_device_class": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.encrypted": "0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_id": "1",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.vdo": "0"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             },
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "vg_name": "ceph_vg1"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         }
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     ],
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     "2": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "devices": [
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "/dev/loop5"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             ],
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_name": "ceph_lv2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_size": "21470642176",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "name": "ceph_lv2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "tags": {
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.cluster_name": "ceph",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.crush_device_class": "",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.encrypted": "0",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osd_id": "2",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:                 "ceph.vdo": "0"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             },
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "type": "block",
Nov 22 09:39:00 compute-0 musing_mclean[379804]:             "vg_name": "ceph_vg2"
Nov 22 09:39:00 compute-0 musing_mclean[379804]:         }
Nov 22 09:39:00 compute-0 musing_mclean[379804]:     ]
Nov 22 09:39:00 compute-0 musing_mclean[379804]: }
Nov 22 09:39:00 compute-0 systemd[1]: libpod-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope: Deactivated successfully.
Nov 22 09:39:00 compute-0 podman[379787]: 2025-11-22 09:39:00.82652316 +0000 UTC m=+0.989304203 container died f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747-merged.mount: Deactivated successfully.
Nov 22 09:39:00 compute-0 podman[379787]: 2025-11-22 09:39:00.891747363 +0000 UTC m=+1.054528416 container remove f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:39:00 compute-0 systemd[1]: libpod-conmon-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope: Deactivated successfully.
Nov 22 09:39:00 compute-0 sudo[379678]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:01 compute-0 sudo[379827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:39:01 compute-0 sudo[379827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:01 compute-0 sudo[379827]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:01 compute-0 sudo[379852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:39:01 compute-0 sudo[379852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:01 compute-0 sudo[379852]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:01 compute-0 sudo[379877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:39:01 compute-0 sudo[379877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:01 compute-0 sudo[379877]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:01 compute-0 nova_compute[253661]: 2025-11-22 09:39:01.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:01 compute-0 sudo[379902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:39:01 compute-0 sudo[379902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:01 compute-0 ceph-mon[75021]: pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.600198431 +0000 UTC m=+0.041626416 container create 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:39:01 compute-0 systemd[1]: Started libpod-conmon-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope.
Nov 22 09:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.581990008 +0000 UTC m=+0.023418023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:39:01 compute-0 ovn_controller[152872]: 2025-11-22T09:39:01Z|01287|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 09:39:01 compute-0 ovn_controller[152872]: 2025-11-22T09:39:01Z|01288|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.693095991 +0000 UTC m=+0.134524006 container init 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.704096705 +0000 UTC m=+0.145524690 container start 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.707693423 +0000 UTC m=+0.149121408 container attach 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:39:01 compute-0 magical_cartwright[379982]: 167 167
Nov 22 09:39:01 compute-0 systemd[1]: libpod-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope: Deactivated successfully.
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.712720169 +0000 UTC m=+0.154148154 container died 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc40b2a45cfe92dc929fcf04cfea837af13a23c3239fc723483cc00043fe3b88-merged.mount: Deactivated successfully.
Nov 22 09:39:01 compute-0 nova_compute[253661]: 2025-11-22 09:39:01.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:01 compute-0 podman[379966]: 2025-11-22 09:39:01.761022631 +0000 UTC m=+0.202450606 container remove 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:39:01 compute-0 systemd[1]: libpod-conmon-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope: Deactivated successfully.
Nov 22 09:39:01 compute-0 podman[380006]: 2025-11-22 09:39:01.96772313 +0000 UTC m=+0.041974714 container create c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:39:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:39:02 compute-0 systemd[1]: Started libpod-conmon-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope.
Nov 22 09:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:02 compute-0 podman[380006]: 2025-11-22 09:39:01.949042156 +0000 UTC m=+0.023293760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:02 compute-0 podman[380006]: 2025-11-22 09:39:02.058347154 +0000 UTC m=+0.132598798 container init c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:39:02 compute-0 podman[380006]: 2025-11-22 09:39:02.065573264 +0000 UTC m=+0.139824848 container start c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:39:02 compute-0 podman[380006]: 2025-11-22 09:39:02.069245106 +0000 UTC m=+0.143496710 container attach c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:39:02 compute-0 nova_compute[253661]: 2025-11-22 09:39:02.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015189276470013674 of space, bias 1.0, pg target 0.4556782941004102 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:39:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]: {
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_id": 1,
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "type": "bluestore"
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     },
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_id": 0,
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "type": "bluestore"
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     },
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_id": 2,
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:         "type": "bluestore"
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]:     }
Nov 22 09:39:03 compute-0 hardcore_williamson[380022]: }
Nov 22 09:39:03 compute-0 systemd[1]: libpod-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Deactivated successfully.
Nov 22 09:39:03 compute-0 systemd[1]: libpod-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Consumed 1.118s CPU time.
Nov 22 09:39:03 compute-0 podman[380006]: 2025-11-22 09:39:03.187958266 +0000 UTC m=+1.262209890 container died c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575-merged.mount: Deactivated successfully.
Nov 22 09:39:03 compute-0 podman[380006]: 2025-11-22 09:39:03.290037085 +0000 UTC m=+1.364288679 container remove c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:39:03 compute-0 systemd[1]: libpod-conmon-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Deactivated successfully.
Nov 22 09:39:03 compute-0 ceph-mon[75021]: pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:39:03 compute-0 sudo[379902]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:39:03 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:39:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:39:03 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:39:03 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 273ac386-cade-44f8-83c7-d6e1b0186ba8 does not exist
Nov 22 09:39:03 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c1ca2862-edbb-46fe-adc8-bcb53e33f9a1 does not exist
Nov 22 09:39:03 compute-0 sudo[380069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:39:03 compute-0 sudo[380069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:03 compute-0 sudo[380069]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:03 compute-0 sudo[380094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:39:03 compute-0 sudo[380094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:39:03 compute-0 sudo[380094]: pam_unix(sudo:session): session closed for user root
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.556 253665 DEBUG nova.compute.manager [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.556 253665 DEBUG nova.compute.manager [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.652 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.654 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.655 253665 INFO nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Terminating instance
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.656 253665 DEBUG nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:39:03 compute-0 kernel: tap1e21d7ad-a6 (unregistering): left promiscuous mode
Nov 22 09:39:03 compute-0 NetworkManager[48920]: <info>  [1763804343.7210] device (tap1e21d7ad-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 ovn_controller[152872]: 2025-11-22T09:39:03Z|01289|binding|INFO|Releasing lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f from this chassis (sb_readonly=0)
Nov 22 09:39:03 compute-0 ovn_controller[152872]: 2025-11-22T09:39:03Z|01290|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f down in Southbound
Nov 22 09:39:03 compute-0 ovn_controller[152872]: 2025-11-22T09:39:03Z|01291|binding|INFO|Removing iface tap1e21d7ad-a6 ovn-installed in OVS
Nov 22 09:39:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.737 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:0f:ac 10.100.0.14'], port_security=['fa:16:3e:1a:0f:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2f5b215-3a41-451c-8ad8-68b17c96a678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37126bdf-684b-42ae-b38f-88d563755df6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '443f7e2d-f0e9-45ab-9cf5-08268d38e115 d6d16faa-9388-499f-aa74-b3fccde9fbc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7f8c6c4-9648-452d-b35b-4ce3aef6c8f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1e21d7ad-a6e7-4649-91f2-612de75fe16f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.739 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1e21d7ad-a6e7-4649-91f2-612de75fe16f in datapath 37126bdf-684b-42ae-b38f-88d563755df6 unbound from our chassis
Nov 22 09:39:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.742 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37126bdf-684b-42ae-b38f-88d563755df6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:39:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82650785-8a33-4844-8b19-6beba649c2f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.744 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 namespace which is not needed anymore
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 22 09:39:03 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Consumed 17.977s CPU time.
Nov 22 09:39:03 compute-0 systemd-machined[215941]: Machine qemu-147-instance-00000075 terminated.
Nov 22 09:39:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.902 253665 INFO nova.virt.libvirt.driver [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance destroyed successfully.
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.903 253665 DEBUG nova.objects.instance [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.913 253665 DEBUG nova.virt.libvirt.vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:39Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.915 253665 DEBUG nova.network.os_vif_util [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : haproxy version is 2.8.14-c23fe91
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : path to executable is /usr/sbin/haproxy
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : Exiting Master process...
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : Exiting Master process...
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.916 253665 DEBUG nova.network.os_vif_util [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.917 253665 DEBUG os_vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [ALERT]    (375011) : Current worker (375013) exited with code 143 (Terminated)
Nov 22 09:39:03 compute-0 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : All workers exited. Exiting... (0)
Nov 22 09:39:03 compute-0 systemd[1]: libpod-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope: Deactivated successfully.
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.923 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e21d7ad-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 podman[380143]: 2025-11-22 09:39:03.927376275 +0000 UTC m=+0.056403245 container died 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:03 compute-0 nova_compute[253661]: 2025-11-22 09:39:03.932 253665 INFO os_vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6')
Nov 22 09:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1-userdata-shm.mount: Deactivated successfully.
Nov 22 09:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a8e5aadf5ecb0314ed34aef4194ac1d915ee5e6be36aa131d63c38cd863668f-merged.mount: Deactivated successfully.
Nov 22 09:39:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:39:03 compute-0 podman[380143]: 2025-11-22 09:39:03.983862989 +0000 UTC m=+0.112889969 container cleanup 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:39:03 compute-0 systemd[1]: libpod-conmon-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope: Deactivated successfully.
Nov 22 09:39:04 compute-0 podman[380197]: 2025-11-22 09:39:04.066652718 +0000 UTC m=+0.054205719 container remove 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.073 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[495ada6d-603c-4272-9b96-f7f8cf89ede5]: (4, ('Sat Nov 22 09:39:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 (2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1)\n2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1\nSat Nov 22 09:39:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 (2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1)\n2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.075 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acb1db57-38d8-4de6-a3fe-e5f7c3282069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.079 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37126bdf-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:04 compute-0 kernel: tap37126bdf-60: left promiscuous mode
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54badb81-e926-4156-95df-94c90caa7151]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.129 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[364b6941-e552-40fe-b156-e68d775c079a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.131 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73004a20-672b-4c1e-a554-701c0a5d07e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1a21d2-dcf9-48cb-b60c-54f15f3caa82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710732, 'reachable_time': 29533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380212, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d37126bdf\x2d684b\x2d42ae\x2db38f\x2d88d563755df6.mount: Deactivated successfully.
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.155 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.155 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[187d5292-dfc7-40ee-8460-721ac66b08d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:39:04 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.397 253665 INFO nova.virt.libvirt.driver [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deleting instance files /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678_del
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.399 253665 INFO nova.virt.libvirt.driver [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deletion of /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678_del complete
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.490 253665 INFO nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.491 253665 DEBUG oslo.service.loopingcall [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.491 253665 DEBUG nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.492 253665 DEBUG nova.network.neutron [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.502 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:0f:0b 2001:db8:0:1:f816:3eff:fe8d:f0b 2001:db8::f816:3eff:fe8d:f0b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8d:f0b/64 2001:db8::f816:3eff:fe8d:f0b/64', 'neutron:device_id': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1) old=Port_Binding(mac=['fa:16:3e:8d:0f:0b 2001:db8::f816:3eff:fe8d:f0b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:f0b/64', 'neutron:device_id': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.504 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb updated
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.505 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20228844-2184-465b-8bc3-e846cfb6d3cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:39:04 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.506 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[794ebc08-01ef-4e92-ba26-909fc556eaf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.593 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.594 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:04 compute-0 nova_compute[253661]: 2025-11-22 09:39:04.595 253665 DEBUG nova.objects.instance [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.291 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.292 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.375 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:05 compute-0 ceph-mon[75021]: pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.528 253665 DEBUG nova.objects.instance [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.548 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.754 253665 DEBUG nova.network.neutron [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.773 253665 INFO nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 1.28 seconds to deallocate network for instance.
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.847 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.848 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:05 compute-0 nova_compute[253661]: 2025-11-22 09:39:05.941 253665 DEBUG oslo_concurrency.processutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 1.2 MiB/s wr, 69 op/s
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.355 253665 DEBUG nova.policy [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.412 253665 DEBUG nova.compute.manager [req-650388d3-d5e8-4aa7-be8f-7943ae93eee1 req-2eb9d447-061d-4aa7-84bb-b938c752f5f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-deleted-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869904138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.471 253665 DEBUG oslo_concurrency.processutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.479 253665 DEBUG nova.compute.provider_tree [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.495 253665 DEBUG nova.scheduler.client.report [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.520 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.552 253665 INFO nova.scheduler.client.report [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance d2f5b215-3a41-451c-8ad8-68b17c96a678
Nov 22 09:39:06 compute-0 nova_compute[253661]: 2025-11-22 09:39:06.658 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:07 compute-0 nova_compute[253661]: 2025-11-22 09:39:07.036 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully created port: b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:39:07 compute-0 ceph-mon[75021]: pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 1.2 MiB/s wr, 69 op/s
Nov 22 09:39:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/869904138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 140 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 1.2 MiB/s wr, 81 op/s
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.057 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804333.0546021, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.057 253665 INFO nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Stopped (Lifecycle Event)
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.076 253665 DEBUG nova.compute.manager [None req-8a8c487e-7e58-4555-ab90-36743f4eba9c - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.231 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully updated port: b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.253 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.253 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:39:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:08 compute-0 nova_compute[253661]: 2025-11-22 09:39:08.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:09 compute-0 ceph-mon[75021]: pgmap v2317: 305 pgs: 305 active+clean; 140 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 1.2 MiB/s wr, 81 op/s
Nov 22 09:39:09 compute-0 ovn_controller[152872]: 2025-11-22T09:39:09Z|01292|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:39:09 compute-0 nova_compute[253661]: 2025-11-22 09:39:09.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 43 KiB/s wr, 37 op/s
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.451 253665 DEBUG nova.compute.manager [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.452 253665 DEBUG nova.compute.manager [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.452 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.696 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.717 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.719 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.720 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.723 253665 DEBUG nova.virt.libvirt.vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.723 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.724 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.725 253665 DEBUG os_vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b8fcd6-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.730 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9b8fcd6-fb, col_values=(('external_ids', {'iface-id': 'b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:d1:9a', 'vm-uuid': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.7322] manager: (tapb9b8fcd6-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.741 253665 INFO os_vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.742 253665 DEBUG nova.virt.libvirt.vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.742 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.743 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.746 253665 DEBUG nova.virt.libvirt.guest [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] attach device xml: <interface type="ethernet">
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <target dev="tapb9b8fcd6-fb"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]: </interface>
Nov 22 09:39:10 compute-0 nova_compute[253661]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.7626] manager: (tapb9b8fcd6-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Nov 22 09:39:10 compute-0 kernel: tapb9b8fcd6-fb: entered promiscuous mode
Nov 22 09:39:10 compute-0 ovn_controller[152872]: 2025-11-22T09:39:10Z|01293|binding|INFO|Claiming lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for this chassis.
Nov 22 09:39:10 compute-0 ovn_controller[152872]: 2025-11-22T09:39:10Z|01294|binding|INFO|b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f: Claiming fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.773 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:d1:9a 10.100.0.24'], port_security=['fa:16:3e:d7:d1:9a 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.774 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 bound to our chassis
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.776 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28d03135-81e8-40c9-b557-5d0a8bca8146]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.788 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30756ec6-11 in ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30756ec6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8f2098-8f2e-4ebc-87fe-558288f41c33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e988795-c414-4f40-813e-8eb341542057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 systemd-udevd[380243]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.803 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e3422-16e1-42d6-bc63-ff683272af51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.8081] device (tapb9b8fcd6-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.8106] device (tapb9b8fcd6-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 ovn_controller[152872]: 2025-11-22T09:39:10Z|01295|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f ovn-installed in OVS
Nov 22 09:39:10 compute-0 ovn_controller[152872]: 2025-11-22T09:39:10Z|01296|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f up in Southbound
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3efafc71-d2a2-4572-a365-e7c751fe93c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:30:a0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:d7:d1:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.866 253665 DEBUG nova.virt.libvirt.guest [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:39:10 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 09:39:10 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 09:39:10 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:39:10 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:39:10 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:39:10 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.867 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5edf5c-8f2f-4010-82b2-ecab52c7c523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8163bf1-39a8-454b-9130-e6a034f6546b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.8784] manager: (tap30756ec6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/525)
Nov 22 09:39:10 compute-0 nova_compute[253661]: 2025-11-22 09:39:10.886 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[47ca3f5a-6360-48fc-be39-9e8222fba3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.921 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd5b9ee-54da-4661-a645-04376eeb27bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 NetworkManager[48920]: <info>  [1763804350.9496] device (tap30756ec6-10): carrier: link connected
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[851d65f9-b037-4496-84ca-383cfb0a3158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c18afa-8e2c-4acd-a796-5e321324fc80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380269, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3383f930-d2be-44c7-a0a3-b6614ee8c913]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:cbf9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720168, 'tstamp': 720168}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380270, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c77aca19-1d23-4840-8f09-78a666f6c2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380271, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.063 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbbab6a-3aca-4787-a403-ab497ddcc8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.138 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e91721b8-35b8-4d12-93c6-804a2ac3d2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:11 compute-0 kernel: tap30756ec6-10: entered promiscuous mode
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:11 compute-0 NetworkManager[48920]: <info>  [1763804351.1876] manager: (tap30756ec6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.191 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:11 compute-0 ovn_controller[152872]: 2025-11-22T09:39:11Z|01297|binding|INFO|Releasing lport ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5 from this chassis (sb_readonly=0)
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.194 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc8e88c-02fe-4686-bf0a-23c838613525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.196 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:39:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.196 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'env', 'PROCESS_TAG=haproxy-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30756ec6-103b-4571-a5dc-9b4a481bc5b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.214 253665 DEBUG nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.214 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:11 compute-0 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 WARNING nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.
Nov 22 09:39:11 compute-0 ceph-mon[75021]: pgmap v2318: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 43 KiB/s wr, 37 op/s
Nov 22 09:39:11 compute-0 podman[380303]: 2025-11-22 09:39:11.621382883 +0000 UTC m=+0.054701332 container create 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:39:11 compute-0 systemd[1]: Started libpod-conmon-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope.
Nov 22 09:39:11 compute-0 podman[380303]: 2025-11-22 09:39:11.593180691 +0000 UTC m=+0.026499160 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:39:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f89d97ed2ee8aa5d0ef9bc0221d90f0d63faaffe83c22b65aecd4abf2565382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:11 compute-0 podman[380303]: 2025-11-22 09:39:11.721133903 +0000 UTC m=+0.154452362 container init 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:39:11 compute-0 podman[380303]: 2025-11-22 09:39:11.727348558 +0000 UTC m=+0.160666997 container start 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:39:11 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : New worker (380324) forked
Nov 22 09:39:11 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : Loading success.
Nov 22 09:39:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.348 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.349 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:39:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:39:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:39:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.393 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:39:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:39:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.618 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.619 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.625 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.625 253665 INFO nova.compute.claims [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.797 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.798 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.815 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:12 compute-0 nova_compute[253661]: 2025-11-22 09:39:12.867 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802943002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.333 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.341 253665 DEBUG nova.compute.provider_tree [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.355 253665 DEBUG nova.scheduler.client.report [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.377 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.379 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:39:13 compute-0 ceph-mon[75021]: pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Nov 22 09:39:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3802943002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.429 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.430 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 DEBUG nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 WARNING nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.449 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.467 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.558 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.560 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.560 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating image(s)
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.585 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.615 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.643 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.648 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.700 253665 DEBUG nova.policy [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:39:13 compute-0 ovn_controller[152872]: 2025-11-22T09:39:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 09:39:13 compute-0 ovn_controller[152872]: 2025-11-22T09:39:13Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.737 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.738 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.739 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.739 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.769 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:13 compute-0 nova_compute[253661]: 2025-11-22 09:39:13.773 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ba0b1c52-c98b-4c2f-a213-e203719ada54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.146 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ba0b1c52-c98b-4c2f-a213-e203719ada54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.218 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.346 253665 DEBUG nova.objects.instance [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Ensure instance console log exists: /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.385 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:14 compute-0 nova_compute[253661]: 2025-11-22 09:39:14.385 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:15 compute-0 nova_compute[253661]: 2025-11-22 09:39:15.419 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully created port: 2a619e33-769d-4ebf-b212-40975e40d3ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:39:15 compute-0 ceph-mon[75021]: pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 22 09:39:15 compute-0 nova_compute[253661]: 2025-11-22 09:39:15.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Nov 22 09:39:16 compute-0 nova_compute[253661]: 2025-11-22 09:39:16.190 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully created port: 27382337-7fe1-4d29-942c-7735f8c98a06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:39:16 compute-0 nova_compute[253661]: 2025-11-22 09:39:16.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.080 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully updated port: 2a619e33-769d-4ebf-b212-40975e40d3ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.199 253665 DEBUG nova.compute.manager [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.199 253665 DEBUG nova.compute.manager [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:39:17 compute-0 ceph-mon[75021]: pgmap v2321: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Nov 22 09:39:17 compute-0 nova_compute[253661]: 2025-11-22 09:39:17.449 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:39:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 141 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 996 KiB/s wr, 30 op/s
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.513 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.543 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.632 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully updated port: 27382337-7fe1-4d29-942c-7735f8c98a06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.656 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.657 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.657 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:39:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.900 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804343.8996782, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.901 253665 INFO nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Stopped (Lifecycle Event)
Nov 22 09:39:18 compute-0 nova_compute[253661]: 2025-11-22 09:39:18.929 253665 DEBUG nova.compute.manager [None req-de1cb40f-8975-488a-b16c-040b51901d59 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:19 compute-0 nova_compute[253661]: 2025-11-22 09:39:19.001 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:39:19 compute-0 nova_compute[253661]: 2025-11-22 09:39:19.410 253665 DEBUG nova.compute.manager [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:19 compute-0 nova_compute[253661]: 2025-11-22 09:39:19.411 253665 DEBUG nova.compute.manager [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-27382337-7fe1-4d29-942c-7735f8c98a06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:19 compute-0 nova_compute[253661]: 2025-11-22 09:39:19.412 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:19 compute-0 ceph-mon[75021]: pgmap v2322: 305 pgs: 305 active+clean; 141 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 996 KiB/s wr, 30 op/s
Nov 22 09:39:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.657 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.658 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.676 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.745 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.746 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.752 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.752 253665 INFO nova.compute.claims [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:39:20 compute-0 nova_compute[253661]: 2025-11-22 09:39:20.898 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987396762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.380 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.388 253665 DEBUG nova.compute.provider_tree [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.410 253665 DEBUG nova.scheduler.client.report [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.434 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.435 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:39:21 compute-0 ceph-mon[75021]: pgmap v2323: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 22 09:39:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2987396762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.484 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.485 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.509 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.526 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.615 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.617 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.618 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating image(s)
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.643 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.668 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.695 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.701 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.800 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.802 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.803 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.804 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.840 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.845 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d7865a13-0d41-44d6-aac2-10cca6e1348a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:21 compute-0 nova_compute[253661]: 2025-11-22 09:39:21.952 253665 DEBUG nova.policy [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:39:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.182 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d7865a13-0d41-44d6-aac2-10cca6e1348a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.256 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.362 253665 DEBUG nova.objects.instance [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.376 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.376 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Ensure instance console log exists: /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.377 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.377 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.378 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.692 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.716 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.717 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance network_info: |[{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.717 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.718 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 27382337-7fe1-4d29-942c-7735f8c98a06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.722 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start _get_guest_xml network_info=[{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.728 253665 WARNING nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.733 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.734 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.743 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.743 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.744 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.744 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:22 compute-0 nova_compute[253661]: 2025-11-22 09:39:22.751 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944524559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.266 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.290 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.302 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:23 compute-0 podman[380747]: 2025-11-22 09:39:23.367113063 +0000 UTC m=+0.057032560 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:39:23 compute-0 podman[380751]: 2025-11-22 09:39:23.386441994 +0000 UTC m=+0.069653733 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:39:23 compute-0 ceph-mon[75021]: pgmap v2324: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 09:39:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2944524559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.696 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Successfully created port: 54a61ee9-1fb8-4c5c-8716-613fc3355afb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:39:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368777439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.777 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.780 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.780 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.782 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.784 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.785 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.786 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.788 253665 DEBUG nova.objects.instance [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.810 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <uuid>ba0b1c52-c98b-4c2f-a213-e203719ada54</uuid>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <name>instance-0000007a</name>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-206573176</nova:name>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:39:22</nova:creationTime>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:port uuid="2a619e33-769d-4ebf-b212-40975e40d3ca">
Nov 22 09:39:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <nova:port uuid="27382337-7fe1-4d29-942c-7735f8c98a06">
Nov 22 09:39:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe81:ef0f" ipVersion="6"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe81:ef0f" ipVersion="6"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <system>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="serial">ba0b1c52-c98b-4c2f-a213-e203719ada54</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="uuid">ba0b1c52-c98b-4c2f-a213-e203719ada54</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </system>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <os>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </os>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <features>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </features>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:39:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ba0b1c52-c98b-4c2f-a213-e203719ada54_disk">
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config">
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:23 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:09:e0:cc"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <target dev="tap2a619e33-76"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:81:ef:0f"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <target dev="tap27382337-7f"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/console.log" append="off"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <video>
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </video>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:39:23 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:39:23 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:39:23 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:39:23 compute-0 nova_compute[253661]: </domain>
Nov 22 09:39:23 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Preparing to wait for external event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Preparing to wait for external event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.813 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.813 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.814 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.815 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.815 253665 DEBUG os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.823 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a619e33-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a619e33-76, col_values=(('external_ids', {'iface-id': '2a619e33-769d-4ebf-b212-40975e40d3ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:e0:cc', 'vm-uuid': 'ba0b1c52-c98b-4c2f-a213-e203719ada54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 NetworkManager[48920]: <info>  [1763804363.8274] manager: (tap2a619e33-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.836 253665 INFO os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76')
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.837 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.838 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.839 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.839 253665 DEBUG os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.844 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27382337-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.844 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27382337-7f, col_values=(('external_ids', {'iface-id': '27382337-7fe1-4d29-942c-7735f8c98a06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:ef:0f', 'vm-uuid': 'ba0b1c52-c98b-4c2f-a213-e203719ada54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 NetworkManager[48920]: <info>  [1763804363.8474] manager: (tap27382337-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.855 253665 INFO os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f')
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.924 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:09:e0:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:81:ef:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.926 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Using config drive
Nov 22 09:39:23 compute-0 nova_compute[253661]: 2025-11-22 09:39:23.949 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Nov 22 09:39:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1368777439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.498 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating config drive at /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.504 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wreox54 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.653 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wreox54" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.681 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.685 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:24.768 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:24.789 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.791 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 27382337-7fe1-4d29-942c-7735f8c98a06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.792 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.814 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.846 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Successfully updated port: 54a61ee9-1fb8-4c5c-8716-613fc3355afb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.892 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.893 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deleting local config drive /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config because it was imported into RBD.
Nov 22 09:39:24 compute-0 kernel: tap2a619e33-76: entered promiscuous mode
Nov 22 09:39:24 compute-0 NetworkManager[48920]: <info>  [1763804364.9695] manager: (tap2a619e33-76): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Nov 22 09:39:24 compute-0 nova_compute[253661]: 2025-11-22 09:39:24.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:24 compute-0 ovn_controller[152872]: 2025-11-22T09:39:24Z|01298|if_status|INFO|Not updating pb chassis for 2a619e33-769d-4ebf-b212-40975e40d3ca now as sb is readonly
Nov 22 09:39:24 compute-0 NetworkManager[48920]: <info>  [1763804364.9912] manager: (tap27382337-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Nov 22 09:39:24 compute-0 kernel: tap27382337-7f: entered promiscuous mode
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:25 compute-0 systemd-udevd[380885]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:39:25 compute-0 systemd-udevd[380887]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.0308] device (tap27382337-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.0318] device (tap2a619e33-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.0325] device (tap27382337-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.0329] device (tap2a619e33-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:39:25 compute-0 systemd-machined[215941]: New machine qemu-153-instance-0000007a.
Nov 22 09:39:25 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-0000007a.
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.050 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.051 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.051 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01299|binding|INFO|Claiming lport 27382337-7fe1-4d29-942c-7735f8c98a06 for this chassis.
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01300|binding|INFO|27382337-7fe1-4d29-942c-7735f8c98a06: Claiming fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01301|binding|INFO|Claiming lport 2a619e33-769d-4ebf-b212-40975e40d3ca for this chassis.
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01302|binding|INFO|2a619e33-769d-4ebf-b212-40975e40d3ca: Claiming fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01303|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 ovn-installed in OVS
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01304|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca ovn-installed in OVS
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.272 253665 DEBUG nova.compute.manager [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-changed-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.273 253665 DEBUG nova.compute.manager [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Refreshing instance network info cache due to event network-changed-54a61ee9-1fb8-4c5c-8716-613fc3355afb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.273 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01305|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 up in Southbound
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01306|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca up in Southbound
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.320 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e0:cc 10.100.0.10'], port_security=['fa:16:3e:09:e0:cc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a619e33-769d-4ebf-b212-40975e40d3ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.323 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], port_security=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe81:ef0f/64 2001:db8::f816:3eff:fe81:ef0f/64', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27382337-7fe1-4d29-942c-7735f8c98a06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.324 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a619e33-769d-4ebf-b212-40975e40d3ca in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 bound to our chassis
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.327 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b27fe9d-edd4-459c-a793-cc41eccfd659]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.345 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33aa2b15-81 in ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.347 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33aa2b15-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ceaff0-d018-4dab-9442-129343c12d46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a982d2f-c679-4446-9895-0b068cd533ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.366 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a61b622d-6a2e-4e36-9854-616e69ff37de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3398218d-4ee4-4abc-97ae-a47dd82b6980]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[adcad5a4-8cd3-4a84-9154-a23899e43e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.4405] manager: (tap33aa2b15-80): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f866c8a1-b323-45ba-bb1e-da27f7fa4cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.477 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b13d5e-426b-4917-97c0-550f795e5d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.480 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cf6bb0-981b-45f5-aa8c-710e8fdd0ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ceph-mon[75021]: pgmap v2325: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.5115] device (tap33aa2b15-80): carrier: link connected
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.519 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a2797e92-e5c2-4fe7-879e-6a6402d5ac83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.537 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a658cae8-471a-45e6-9993-444c271f0ca9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 19894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380924, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c102ec71-211b-4080-9367-f4bc92e69258]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:23b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721624, 'tstamp': 721624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380925, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.575 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a22f46ba-bedd-43df-b65c-c014796d2ffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 19894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380926, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.612 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d5eac2-446d-4094-9833-f7391abfd886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.683 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02bb340-be22-4ef0-8a07-f3d987e76ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.685 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.686 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:25 compute-0 NetworkManager[48920]: <info>  [1763804365.6896] manager: (tap33aa2b15-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 22 09:39:25 compute-0 kernel: tap33aa2b15-80: entered promiscuous mode
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.692 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:25 compute-0 ovn_controller[152872]: 2025-11-22T09:39:25Z|01307|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.711 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea105f7f-a720-46e8-9f26-3cbc420f8b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.713 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:39:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.715 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'env', 'PROCESS_TAG=haproxy-33aa2b15-84be-4fa8-858f-98182293b1b2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33aa2b15-84be-4fa8-858f-98182293b1b2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.855 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804365.8550584, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Started (Lifecycle Event)
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.859 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.875 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.880 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804365.8581522, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.881 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Paused (Lifecycle Event)
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.913 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.917 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:25 compute-0 nova_compute[253661]: 2025-11-22 09:39:25.940 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.9 MiB/s wr, 53 op/s
Nov 22 09:39:26 compute-0 podman[381001]: 2025-11-22 09:39:26.110841286 +0000 UTC m=+0.053335689 container create 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:39:26 compute-0 systemd[1]: Started libpod-conmon-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope.
Nov 22 09:39:26 compute-0 podman[381001]: 2025-11-22 09:39:26.08284554 +0000 UTC m=+0.025339963 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:39:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80866c630f9de6dcc7211912df360c883a2569930a017e86eb8d48a712ac4e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:26 compute-0 podman[381001]: 2025-11-22 09:39:26.19751138 +0000 UTC m=+0.140005803 container init 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:26 compute-0 podman[381001]: 2025-11-22 09:39:26.205679594 +0000 UTC m=+0.148173987 container start 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:39:26 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : New worker (381022) forked
Nov 22 09:39:26 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : Loading success.
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.275 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27382337-7fe1-4d29-942c-7735f8c98a06 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.277 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a237e08-7a8c-407f-bdf9-474dba13899a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.295 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20228844-21 in ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.297 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20228844-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1698afc-6d35-4205-9843-ec9ef1b03b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[758ec8ba-b82a-499a-a421-ef0e9a5881a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6f12e3a3-811b-49e7-a75f-66b25d4b4822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1d940d-915e-4c5f-9f02-88d8055e0494]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d4d57b-15d1-4c31-8076-40aa77fda0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f27ba99-caa6-4c18-a262-c073c811e6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 NetworkManager[48920]: <info>  [1763804366.3827] manager: (tap20228844-20): new Veth device (/org/freedesktop/NetworkManager/Devices/533)
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[266ccc74-0a1a-48d3-9418-ab27a97f73a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.425 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb2d736-db72-4a85-9c04-cf1f0a7f883f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 NetworkManager[48920]: <info>  [1763804366.4517] device (tap20228844-20): carrier: link connected
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.458 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e13bd9-2c28-4d8f-8cdb-070e6793029e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13157d32-6fba-4e29-a034-c81b8b659fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 39887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381041, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.498 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35fc7dd9-c962-47fd-9723-5b39cfc0193a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:f0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721718, 'tstamp': 721718}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381042, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a227d1c3-402c-4512-b5c7-7c7cc612c381]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 39887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381043, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.559 253665 DEBUG nova.compute.manager [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.561 253665 DEBUG nova.compute.manager [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Processing event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.563 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0b5069-a903-4466-8077-2316b46da023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.603 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddba5e2-f72e-47a0-8334-c66bbd448297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:26 compute-0 NetworkManager[48920]: <info>  [1763804366.6085] manager: (tap20228844-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Nov 22 09:39:26 compute-0 kernel: tap20228844-20: entered promiscuous mode
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.613 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:26 compute-0 ovn_controller[152872]: 2025-11-22T09:39:26Z|01308|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.617 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58eccc14-f847-4550-8dcc-5af6018a33fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.620 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:39:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.620 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'env', 'PROCESS_TAG=haproxy-20228844-2184-465b-8bc3-e846cfb6d3cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20228844-2184-465b-8bc3-e846cfb6d3cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.642 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.642 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.679 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.834 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.835 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.842 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:39:26 compute-0 nova_compute[253661]: 2025-11-22 09:39:26.843 253665 INFO nova.compute.claims [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:39:26 compute-0 podman[381073]: 2025-11-22 09:39:26.984647616 +0000 UTC m=+0.051699577 container create fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:39:27 compute-0 systemd[1]: Started libpod-conmon-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.052 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:27 compute-0 podman[381073]: 2025-11-22 09:39:26.958519306 +0000 UTC m=+0.025571287 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb633aced8fc5d6611d564ed19dc9e108a0bec8470c6c0ad7e16b832c8c4335b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:27 compute-0 podman[381073]: 2025-11-22 09:39:27.072334506 +0000 UTC m=+0.139386487 container init fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 09:39:27 compute-0 podman[381073]: 2025-11-22 09:39:27.078075599 +0000 UTC m=+0.145127550 container start fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:39:27 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : New worker (381095) forked
Nov 22 09:39:27 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : Loading success.
Nov 22 09:39:27 compute-0 ceph-mon[75021]: pgmap v2326: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.9 MiB/s wr, 53 op/s
Nov 22 09:39:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584709581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.524 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.534 253665 DEBUG nova.compute.provider_tree [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.552 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.552 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Processing event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 WARNING nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with vm_state building and task_state spawning.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.557 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.558 253665 DEBUG nova.scheduler.client.report [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.565 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804367.5648136, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.565 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Resumed (Lifecycle Event)
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.568 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.573 253665 INFO nova.virt.libvirt.driver [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance spawned successfully.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.574 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.592 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.593 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.621 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.625 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.627 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.634 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.706 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.707 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.738 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.742 253665 INFO nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 14.18 seconds to spawn the instance on the hypervisor.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.742 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.744 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.772 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.773 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance network_info: |[{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.773 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.775 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Refreshing network info cache for port 54a61ee9-1fb8-4c5c-8716-613fc3355afb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.782 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start _get_guest_xml network_info=[{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.788 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.792 253665 WARNING nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.799 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.799 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.802 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.802 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.803 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.803 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.808 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.983 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.984 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:27 compute-0 nova_compute[253661]: 2025-11-22 09:39:27.978 253665 INFO nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 15.39 seconds to build instance.
Nov 22 09:39:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.081 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.101 253665 DEBUG nova.policy [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.190 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.191 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.192 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating image(s)
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.219 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.247 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.273 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.279 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589652562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.339 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.362 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.367 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.413 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.414 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.415 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.415 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.445 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.450 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3584709581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1589652562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.761 253665 DEBUG nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.763 253665 WARNING nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 for instance with vm_state active and task_state None.
Nov 22 09:39:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388129535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.930 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.931 253665 DEBUG nova.virt.libvirt.vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:21Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.932 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.932 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.934 253665 DEBUG nova.objects.instance [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.947 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <uuid>d7865a13-0d41-44d6-aac2-10cca6e1348a</uuid>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <name>instance-0000007b</name>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-1997303369</nova:name>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:39:27</nova:creationTime>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <nova:port uuid="54a61ee9-1fb8-4c5c-8716-613fc3355afb">
Nov 22 09:39:28 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <system>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="serial">d7865a13-0d41-44d6-aac2-10cca6e1348a</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="uuid">d7865a13-0d41-44d6-aac2-10cca6e1348a</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </system>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <os>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </os>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <features>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </features>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d7865a13-0d41-44d6-aac2-10cca6e1348a_disk">
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config">
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:28 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:58:21:e3"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <target dev="tap54a61ee9-1f"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/console.log" append="off"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <video>
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </video>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:39:28 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:39:28 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:39:28 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:39:28 compute-0 nova_compute[253661]: </domain>
Nov 22 09:39:28 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Preparing to wait for external event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.949 253665 DEBUG nova.virt.libvirt.vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:21Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.950 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.951 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.952 253665 DEBUG os_vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.953 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.953 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.957 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54a61ee9-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.958 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54a61ee9-1f, col_values=(('external_ids', {'iface-id': '54a61ee9-1fb8-4c5c-8716-613fc3355afb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:21:e3', 'vm-uuid': 'd7865a13-0d41-44d6-aac2-10cca6e1348a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:28 compute-0 NetworkManager[48920]: <info>  [1763804368.9609] manager: (tap54a61ee9-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:28 compute-0 nova_compute[253661]: 2025-11-22 09:39:28.970 253665 INFO os_vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f')
Nov 22 09:39:29 compute-0 podman[381283]: 2025-11-22 09:39:29.116906791 +0000 UTC m=+0.106423647 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.213 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:58:21:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Using config drive
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.236 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.281 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.352 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.454 253665 DEBUG nova.objects.instance [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.467 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.467 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Ensure instance console log exists: /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:29 compute-0 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:29 compute-0 ceph-mon[75021]: pgmap v2327: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Nov 22 09:39:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/388129535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.100 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating config drive at /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.105 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9gecf4_w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.259 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9gecf4_w" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.299 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.304 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.480 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.481 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deleting local config drive /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config because it was imported into RBD.
Nov 22 09:39:30 compute-0 NetworkManager[48920]: <info>  [1763804370.5447] manager: (tap54a61ee9-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Nov 22 09:39:30 compute-0 kernel: tap54a61ee9-1f: entered promiscuous mode
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:30 compute-0 ovn_controller[152872]: 2025-11-22T09:39:30Z|01309|binding|INFO|Claiming lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb for this chassis.
Nov 22 09:39:30 compute-0 ovn_controller[152872]: 2025-11-22T09:39:30Z|01310|binding|INFO|54a61ee9-1fb8-4c5c-8716-613fc3355afb: Claiming fa:16:3e:58:21:e3 10.100.0.20
Nov 22 09:39:30 compute-0 systemd-udevd[381452]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:30 compute-0 ovn_controller[152872]: 2025-11-22T09:39:30Z|01311|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb ovn-installed in OVS
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.594 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:30 compute-0 NetworkManager[48920]: <info>  [1763804370.6062] device (tap54a61ee9-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:39:30 compute-0 NetworkManager[48920]: <info>  [1763804370.6073] device (tap54a61ee9-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:39:30 compute-0 systemd-machined[215941]: New machine qemu-154-instance-0000007b.
Nov 22 09:39:30 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007b.
Nov 22 09:39:30 compute-0 ovn_controller[152872]: 2025-11-22T09:39:30Z|01312|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb up in Southbound
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.654 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:21:e3 10.100.0.20'], port_security=['fa:16:3e:58:21:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd7865a13-0d41-44d6-aac2-10cca6e1348a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268f1f5a-a38b-4a4b-99c8-6f247601dc2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=54a61ee9-1fb8-4c5c-8716-613fc3355afb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.656 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 54a61ee9-1fb8-4c5c-8716-613fc3355afb in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 bound to our chassis
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.658 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.678 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4feea2-4c01-49e5-ba24-36d14e49ae82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.713 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6bfd9-888b-4d6c-ac78-2b0e56822457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.717 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcda24e-4ad9-42fb-9ebf-42e59e0a30d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.728 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Successfully created port: 7381e299-12bd-46ec-8abf-df35fe0bf48a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.758 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[be86a877-c73f-430a-953a-5d17fb9a19bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c96139b8-d994-4b06-864b-d0fdfc75626a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381467, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19554eb2-99f1-4a9f-98e4-90d6ec1f5752]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720183, 'tstamp': 720183}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381468, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720186, 'tstamp': 720186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381468, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.796 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.801 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.999 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804370.998435, d7865a13-0d41-44d6-aac2-10cca6e1348a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:30 compute-0 nova_compute[253661]: 2025-11-22 09:39:30.999 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Started (Lifecycle Event)
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.025 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.029 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804370.9986734, d7865a13-0d41-44d6-aac2-10cca6e1348a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.029 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Paused (Lifecycle Event)
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.046 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.051 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.070 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.266 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updated VIF entry in instance network info cache for port 54a61ee9-1fb8-4c5c-8716-613fc3355afb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.267 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.280 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.430 253665 DEBUG nova.compute.manager [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.431 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.431 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.432 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.432 253665 DEBUG nova.compute.manager [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Processing event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.433 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.436 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804371.4364312, d7865a13-0d41-44d6-aac2-10cca6e1348a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.437 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Resumed (Lifecycle Event)
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.439 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.443 253665 INFO nova.virt.libvirt.driver [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance spawned successfully.
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.444 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.459 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.465 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.472 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.473 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.473 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.474 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.474 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.475 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.519 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:31 compute-0 ceph-mon[75021]: pgmap v2328: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.562 253665 INFO nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 9.95 seconds to spawn the instance on the hypervisor.
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.563 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.639 253665 INFO nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 10.92 seconds to build instance.
Nov 22 09:39:31 compute-0 nova_compute[253661]: 2025-11-22 09:39:31.661 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 230 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.296 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Successfully updated port: 7381e299-12bd-46ec-8abf-df35fe0bf48a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.421 253665 DEBUG nova.compute.manager [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.422 253665 DEBUG nova.compute.manager [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.423 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:32 compute-0 nova_compute[253661]: 2025-11-22 09:39:32.600 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:39:33 compute-0 ceph-mon[75021]: pgmap v2329: 305 pgs: 305 active+clean; 230 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.568 253665 DEBUG nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.569 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.569 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 DEBUG nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 WARNING nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received unexpected event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with vm_state active and task_state None.
Nov 22 09:39:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:33.792 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:33 compute-0 nova_compute[253661]: 2025-11-22 09:39:33.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 22 09:39:35 compute-0 ceph-mon[75021]: pgmap v2330: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.584 253665 DEBUG nova.compute.manager [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.585 253665 DEBUG nova.compute.manager [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.585 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.586 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.586 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.717 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.757 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.758 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance network_info: |[{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.760 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.766 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.776 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start _get_guest_xml network_info=[{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.784 253665 WARNING nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.790 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.791 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.801 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.802 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.802 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.803 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.803 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.805 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.805 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:39:35 compute-0 nova_compute[253661]: 2025-11-22 09:39:35.810 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472232399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.281 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.302 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.307 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3472232399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:39:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027698613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.786 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.790 253665 DEBUG nova.virt.libvirt.vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.791 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.792 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.794 253665 DEBUG nova.objects.instance [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.812 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <uuid>44f1789d-14f7-46df-a863-e8c3c418f7f3</uuid>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <name>instance-0000007c</name>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749</nova:name>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:39:35</nova:creationTime>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <nova:port uuid="7381e299-12bd-46ec-8abf-df35fe0bf48a">
Nov 22 09:39:36 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <system>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="serial">44f1789d-14f7-46df-a863-e8c3c418f7f3</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="uuid">44f1789d-14f7-46df-a863-e8c3c418f7f3</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </system>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <os>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </os>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <features>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </features>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/44f1789d-14f7-46df-a863-e8c3c418f7f3_disk">
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config">
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </source>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:39:36 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:9b:ff:5c"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <target dev="tap7381e299-12"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/console.log" append="off"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <video>
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </video>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:39:36 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:39:36 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:39:36 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:39:36 compute-0 nova_compute[253661]: </domain>
Nov 22 09:39:36 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.819 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Preparing to wait for external event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.820 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.820 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.821 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.822 253665 DEBUG nova.virt.libvirt.vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.822 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.823 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.824 253665 DEBUG os_vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.826 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7381e299-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.831 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7381e299-12, col_values=(('external_ids', {'iface-id': '7381e299-12bd-46ec-8abf-df35fe0bf48a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ff:5c', 'vm-uuid': '44f1789d-14f7-46df-a863-e8c3c418f7f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:36 compute-0 NetworkManager[48920]: <info>  [1763804376.8340] manager: (tap7381e299-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.841 253665 INFO os_vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12')
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.885 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.887 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.887 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:9b:ff:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.888 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Using config drive
Nov 22 09:39:36 compute-0 nova_compute[253661]: 2025-11-22 09:39:36.908 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:37 compute-0 ceph-mon[75021]: pgmap v2331: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Nov 22 09:39:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2027698613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:39:37 compute-0 nova_compute[253661]: 2025-11-22 09:39:37.798 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating config drive at /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config
Nov 22 09:39:37 compute-0 nova_compute[253661]: 2025-11-22 09:39:37.808 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdlyujis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:37 compute-0 nova_compute[253661]: 2025-11-22 09:39:37.973 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdlyujis" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 176 op/s
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.001 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.005 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.042 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.043 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.066 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.168 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.169 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deleting local config drive /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config because it was imported into RBD.
Nov 22 09:39:38 compute-0 kernel: tap7381e299-12: entered promiscuous mode
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.2118] manager: (tap7381e299-12): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Nov 22 09:39:38 compute-0 ovn_controller[152872]: 2025-11-22T09:39:38Z|01313|binding|INFO|Claiming lport 7381e299-12bd-46ec-8abf-df35fe0bf48a for this chassis.
Nov 22 09:39:38 compute-0 ovn_controller[152872]: 2025-11-22T09:39:38Z|01314|binding|INFO|7381e299-12bd-46ec-8abf-df35fe0bf48a: Claiming fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 ovn_controller[152872]: 2025-11-22T09:39:38Z|01315|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a ovn-installed in OVS
Nov 22 09:39:38 compute-0 systemd-udevd[381645]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.2560] device (tap7381e299-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.2572] device (tap7381e299-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:39:38 compute-0 systemd-machined[215941]: New machine qemu-155-instance-0000007c.
Nov 22 09:39:38 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007c.
Nov 22 09:39:38 compute-0 ovn_controller[152872]: 2025-11-22T09:39:38Z|01316|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a up in Southbound
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.281 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.283 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 bound to our chassis
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.284 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.287 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.304 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2bd56e-dd2e-4b11-8cd4-16c2bfd2c07e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.305 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap705357ee-11 in ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.307 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap705357ee-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[253cd634-2663-4025-8c92-6992591cae87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfabd50-4e76-45e7-a0f2-dadc2242c8dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.333 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d146c9-83e8-464a-ae67-58bcc7a93c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.364 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0060b635-21b0-48fa-998b-153e682fd14f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.405 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[616a4d21-77b2-45c3-b67d-57891785116f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.4126] manager: (tap705357ee-10): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1957099d-e173-4e88-a18c-248bff98a28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.461 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6ae116-7cf5-40bd-8c73-f45131c621e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.467 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92f8cfc9-efc9-4bb7-ae5c-49b6d9155c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.4991] device (tap705357ee-10): carrier: link connected
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.509 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96491370-7509-4228-99c8-4916b6a43e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48509dd7-3a8b-4ced-8548-535966383493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381680, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaffb370-e89a-41cc-a41a-6116c381971b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:aa4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722923, 'tstamp': 722923}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381682, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1224e1b-dae4-4545-8524-1a2aec9dd2bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381697, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc6c420-f480-47dd-95eb-bec3b867bb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d553168-20fd-4404-9d60-8ba632a18872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 NetworkManager[48920]: <info>  [1763804378.6766] manager: (tap705357ee-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Nov 22 09:39:38 compute-0 kernel: tap705357ee-10: entered promiscuous mode
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 ovn_controller[152872]: 2025-11-22T09:39:38Z|01317|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.707 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae754ad-1abd-4086-9076-23cc97cbf6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.709 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:39:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.710 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'env', 'PROCESS_TAG=haproxy-705357ee-1033-4907-905f-d41aa6dcfd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/705357ee-1033-4907-905f-d41aa6dcfd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.719 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.721 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.7206905, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.721 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Started (Lifecycle Event)
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.742 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.747 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.7209592, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.747 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Paused (Lifecycle Event)
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.762 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.764 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.780 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.782 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.785 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.982 253665 DEBUG nova.compute.manager [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.984 253665 DEBUG nova.compute.manager [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Processing event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.985 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.989 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.9890647, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.989 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Resumed (Lifecycle Event)
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.992 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.995 253665 INFO nova.virt.libvirt.driver [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance spawned successfully.
Nov 22 09:39:38 compute-0 nova_compute[253661]: 2025-11-22 09:39:38.995 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.014 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.025 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.029 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.030 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.030 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.031 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.031 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.032 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.065 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:39:39 compute-0 podman[381753]: 2025-11-22 09:39:39.13787202 +0000 UTC m=+0.060230979 container create 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:39:39 compute-0 systemd[1]: Started libpod-conmon-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope.
Nov 22 09:39:39 compute-0 podman[381753]: 2025-11-22 09:39:39.107244888 +0000 UTC m=+0.029603887 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:39:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ca2159f1a5a63e92ff60160fdc469795b2978d3e0860bcbe102db9710cb41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:39:39 compute-0 podman[381753]: 2025-11-22 09:39:39.251512696 +0000 UTC m=+0.173871675 container init 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:39:39 compute-0 podman[381753]: 2025-11-22 09:39:39.257851003 +0000 UTC m=+0.180209972 container start 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 09:39:39 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : New worker (381774) forked
Nov 22 09:39:39 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : Loading success.
Nov 22 09:39:39 compute-0 ceph-mon[75021]: pgmap v2332: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 176 op/s
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.694 253665 INFO nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 11.50 seconds to spawn the instance on the hypervisor.
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.695 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.782 253665 INFO nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 12.98 seconds to build instance.
Nov 22 09:39:39 compute-0 nova_compute[253661]: 2025-11-22 09:39:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.111 253665 DEBUG nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.112 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.113 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.114 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.114 253665 DEBUG nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.115 253665 WARNING nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state active and task_state None.
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.192 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.236 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.237 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:39:41 compute-0 ceph-mon[75021]: pgmap v2333: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 09:39:41 compute-0 nova_compute[253661]: 2025-11-22 09:39:41.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 279 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Nov 22 09:39:42 compute-0 nova_compute[253661]: 2025-11-22 09:39:42.208 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:42 compute-0 nova_compute[253661]: 2025-11-22 09:39:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:42 compute-0 nova_compute[253661]: 2025-11-22 09:39:42.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:42 compute-0 ovn_controller[152872]: 2025-11-22T09:39:42Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 09:39:42 compute-0 ovn_controller[152872]: 2025-11-22T09:39:42Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 09:39:43 compute-0 nova_compute[253661]: 2025-11-22 09:39:43.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:43 compute-0 ceph-mon[75021]: pgmap v2334: 305 pgs: 305 active+clean; 279 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Nov 22 09:39:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.2 MiB/s wr, 269 op/s
Nov 22 09:39:44 compute-0 nova_compute[253661]: 2025-11-22 09:39:44.399 253665 DEBUG nova.compute.manager [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:44 compute-0 nova_compute[253661]: 2025-11-22 09:39:44.399 253665 DEBUG nova.compute.manager [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:44 compute-0 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:44 compute-0 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:44 compute-0 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:45 compute-0 ceph-mon[75021]: pgmap v2335: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.2 MiB/s wr, 269 op/s
Nov 22 09:39:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:46 compute-0 ovn_controller[152872]: 2025-11-22T09:39:46Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:21:e3 10.100.0.20
Nov 22 09:39:46 compute-0 ovn_controller[152872]: 2025-11-22T09:39:46Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:21:e3 10.100.0.20
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.559 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.560 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.588 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240403227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.796 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.797 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.799 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.800 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:39:46 compute-0 nova_compute[253661]: 2025-11-22 09:39:46.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.028 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.030 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2933MB free_disk=59.85542297363281GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.030 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.031 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.115 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.115 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d7865a13-0d41-44d6-aac2-10cca6e1348a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 44f1789d-14f7-46df-a863-e8c3c418f7f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.117 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.216 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:47 compute-0 ceph-mon[75021]: pgmap v2336: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 22 09:39:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1240403227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452286703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.686 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.693 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.710 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.730 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:39:47 compute-0 nova_compute[253661]: 2025-11-22 09:39:47.731 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 310 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 177 op/s
Nov 22 09:39:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3452286703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:49 compute-0 ceph-mon[75021]: pgmap v2337: 305 pgs: 305 active+clean; 310 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 177 op/s
Nov 22 09:39:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Nov 22 09:39:51 compute-0 nova_compute[253661]: 2025-11-22 09:39:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:51 compute-0 ceph-mon[75021]: pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Nov 22 09:39:51 compute-0 nova_compute[253661]: 2025-11-22 09:39:51.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:39:52
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:39:52 compute-0 nova_compute[253661]: 2025-11-22 09:39:52.732 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:52 compute-0 nova_compute[253661]: 2025-11-22 09:39:52.732 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:39:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:39:53 compute-0 nova_compute[253661]: 2025-11-22 09:39:53.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:39:53 compute-0 ceph-mon[75021]: pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 22 09:39:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.826740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393826775, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 706, "num_deletes": 250, "total_data_size": 832358, "memory_usage": 845216, "flush_reason": "Manual Compaction"}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393831733, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 547992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47896, "largest_seqno": 48601, "table_properties": {"data_size": 544847, "index_size": 989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8440, "raw_average_key_size": 20, "raw_value_size": 538281, "raw_average_value_size": 1312, "num_data_blocks": 44, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804339, "oldest_key_time": 1763804339, "file_creation_time": 1763804393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 5016 microseconds, and 2188 cpu microseconds.
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.831758) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 547992 bytes OK
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.831772) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836299) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836337) EVENT_LOG_v1 {"time_micros": 1763804393836332, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 828702, prev total WAL file size 828702, number of live WAL files 2.
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836808) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(535KB)], [110(10MB)]
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393836860, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11336553, "oldest_snapshot_seqno": -1}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6876 keys, 8306246 bytes, temperature: kUnknown
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393874553, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8306246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8262017, "index_size": 25941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179595, "raw_average_key_size": 26, "raw_value_size": 8140718, "raw_average_value_size": 1183, "num_data_blocks": 1006, "num_entries": 6876, "num_filter_entries": 6876, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.874807) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8306246 bytes
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.876339) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 300.1 rd, 219.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(35.8) write-amplify(15.2) OK, records in: 7365, records dropped: 489 output_compression: NoCompression
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.876358) EVENT_LOG_v1 {"time_micros": 1763804393876348, "job": 66, "event": "compaction_finished", "compaction_time_micros": 37780, "compaction_time_cpu_micros": 22266, "output_level": 6, "num_output_files": 1, "total_output_size": 8306246, "num_input_records": 7365, "num_output_records": 6876, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393876563, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393878211, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:39:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Nov 22 09:39:54 compute-0 podman[381829]: 2025-11-22 09:39:54.373175811 +0000 UTC m=+0.064622138 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:39:54 compute-0 podman[381830]: 2025-11-22 09:39:54.390166973 +0000 UTC m=+0.080902963 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:39:54 compute-0 ceph-mon[75021]: pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Nov 22 09:39:54 compute-0 nova_compute[253661]: 2025-11-22 09:39:54.964 253665 DEBUG nova.compute.manager [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:39:54 compute-0 nova_compute[253661]: 2025-11-22 09:39:54.964 253665 DEBUG nova.compute.manager [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:39:54 compute-0 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:39:54 compute-0 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:39:54 compute-0 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:39:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:39:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:39:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:39:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:39:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 22 09:39:56 compute-0 nova_compute[253661]: 2025-11-22 09:39:56.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:39:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:39:56 compute-0 nova_compute[253661]: 2025-11-22 09:39:56.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:39:56 compute-0 ovn_controller[152872]: 2025-11-22T09:39:56Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 09:39:56 compute-0 ovn_controller[152872]: 2025-11-22T09:39:56Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 09:39:57 compute-0 ceph-mon[75021]: pgmap v2341: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.124 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.124 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.139 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.228 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.229 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.236 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.236 253665 INFO nova.compute.claims [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.403 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.521 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.522 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.539 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:39:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:39:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577341996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.859 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.867 253665 DEBUG nova.compute.provider_tree [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.888 253665 DEBUG nova.scheduler.client.report [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.927 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.928 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.964 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.965 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:39:57 compute-0 nova_compute[253661]: 2025-11-22 09:39:57.992 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:39:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 334 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.011 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:39:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1577341996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.097 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.098 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.099 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating image(s)
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.121 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.146 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.179 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.185 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.244 253665 DEBUG nova.policy [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.285 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.287 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.288 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.289 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.328 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.334 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7f88c3e8-e667-4d9a-8178-c99843560719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.682 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7f88c3e8-e667-4d9a-8178-c99843560719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.758 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:39:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.868 253665 DEBUG nova.objects.instance [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.886 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.886 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Ensure instance console log exists: /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:39:58 compute-0 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:39:59 compute-0 ceph-mon[75021]: pgmap v2342: 305 pgs: 305 active+clean; 334 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Nov 22 09:39:59 compute-0 podman[382057]: 2025-11-22 09:39:59.410149104 +0000 UTC m=+0.098324846 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:39:59 compute-0 nova_compute[253661]: 2025-11-22 09:39:59.562 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully created port: 83f684f5-d7e5-44a8-960d-efe4ce81e023 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:40:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 373 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 3.5 MiB/s wr, 125 op/s
Nov 22 09:40:00 compute-0 nova_compute[253661]: 2025-11-22 09:40:00.256 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully created port: 454bebe0-5237-48cb-8cf5-10be46f6d33a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:40:01 compute-0 ceph-mon[75021]: pgmap v2343: 305 pgs: 305 active+clean; 373 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 3.5 MiB/s wr, 125 op/s
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.716 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully updated port: 83f684f5-d7e5-44a8-960d-efe4ce81e023 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.803 253665 DEBUG nova.compute.manager [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG nova.compute.manager [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:01 compute-0 nova_compute[253661]: 2025-11-22 09:40:01.985 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.427 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.443 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.445 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully updated port: 454bebe0-5237-48cb-8cf5-10be46f6d33a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.454 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.455 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:40:02 compute-0 nova_compute[253661]: 2025-11-22 09:40:02.720 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033671915698053004 of space, bias 1.0, pg target 1.0101574709415901 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:40:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 09:40:03 compute-0 ceph-mon[75021]: pgmap v2344: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Nov 22 09:40:03 compute-0 sudo[382083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:03 compute-0 sudo[382083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:03 compute-0 sudo[382083]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:03 compute-0 sudo[382108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:40:03 compute-0 sudo[382108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:03 compute-0 sudo[382108]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:03 compute-0 sudo[382133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:03 compute-0 sudo[382133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:03 compute-0 sudo[382133]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:03 compute-0 sudo[382158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:40:03 compute-0 sudo[382158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 22 09:40:04 compute-0 nova_compute[253661]: 2025-11-22 09:40:04.015 253665 DEBUG nova.compute.manager [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:04 compute-0 nova_compute[253661]: 2025-11-22 09:40:04.017 253665 DEBUG nova.compute.manager [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-454bebe0-5237-48cb-8cf5-10be46f6d33a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:04 compute-0 nova_compute[253661]: 2025-11-22 09:40:04.017 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:04 compute-0 sudo[382158]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0fa3dd52-8aa3-4d06-b35c-6f7493056435 does not exist
Nov 22 09:40:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d3cd356f-5ee3-4816-acac-b56f8350b4a2 does not exist
Nov 22 09:40:04 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ea6f0012-08ba-48d7-92ae-f3c6bd427431 does not exist
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:40:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:40:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:40:04 compute-0 sudo[382214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:04 compute-0 sudo[382214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:04 compute-0 sudo[382214]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:04 compute-0 sudo[382239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:40:04 compute-0 sudo[382239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:04 compute-0 sudo[382239]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:04 compute-0 sudo[382264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:04 compute-0 sudo[382264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:04 compute-0 sudo[382264]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:04 compute-0 sudo[382289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:40:04 compute-0 sudo[382289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.199 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.214 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.214 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance network_info: |[{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.215 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.215 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 454bebe0-5237-48cb-8cf5-10be46f6d33a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.229 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start _get_guest_xml network_info=[{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.235 253665 WARNING nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.242 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.243 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.249 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.253 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.255 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:05 compute-0 ceph-mon[75021]: pgmap v2345: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:40:05 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.474512497 +0000 UTC m=+0.113731910 container create e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.387879493 +0000 UTC m=+0.027098916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:05 compute-0 systemd[1]: Started libpod-conmon-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope.
Nov 22 09:40:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.592719947 +0000 UTC m=+0.231939380 container init e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.602715185 +0000 UTC m=+0.241934608 container start e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.608004907 +0000 UTC m=+0.247224350 container attach e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:40:05 compute-0 sleepy_lamarr[382389]: 167 167
Nov 22 09:40:05 compute-0 systemd[1]: libpod-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope: Deactivated successfully.
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.611188486 +0000 UTC m=+0.250407899 container died e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b554de41d47141a7a19f8bc69bc620771cbc5d7bd0f10d2875df11f32bbfdd1c-merged.mount: Deactivated successfully.
Nov 22 09:40:05 compute-0 podman[382354]: 2025-11-22 09:40:05.683900334 +0000 UTC m=+0.323119757 container remove e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:05 compute-0 systemd[1]: libpod-conmon-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope: Deactivated successfully.
Nov 22 09:40:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:40:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050339384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.815 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.842 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:05 compute-0 nova_compute[253661]: 2025-11-22 09:40:05.849 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:05 compute-0 podman[382433]: 2025-11-22 09:40:05.942125366 +0000 UTC m=+0.054801664 container create 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:40:05 compute-0 systemd[1]: Started libpod-conmon-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope.
Nov 22 09:40:06 compute-0 podman[382433]: 2025-11-22 09:40:05.91658525 +0000 UTC m=+0.029261578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 09:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:06 compute-0 podman[382433]: 2025-11-22 09:40:06.044022109 +0000 UTC m=+0.156698427 container init 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:06 compute-0 podman[382433]: 2025-11-22 09:40:06.051084455 +0000 UTC m=+0.163760753 container start 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:06 compute-0 podman[382433]: 2025-11-22 09:40:06.05891123 +0000 UTC m=+0.171587528 container attach 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:40:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045376775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.368 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.370 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.370 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.371 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.372 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.373 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.373 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.376 253665 DEBUG nova.objects.instance [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.408 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <uuid>7f88c3e8-e667-4d9a-8178-c99843560719</uuid>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <name>instance-0000007d</name>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1324430276</nova:name>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:40:05</nova:creationTime>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:port uuid="83f684f5-d7e5-44a8-960d-efe4ce81e023">
Nov 22 09:40:06 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <nova:port uuid="454bebe0-5237-48cb-8cf5-10be46f6d33a">
Nov 22 09:40:06 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0c:2b99" ipVersion="6"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe0c:2b99" ipVersion="6"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="serial">7f88c3e8-e667-4d9a-8178-c99843560719</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="uuid">7f88c3e8-e667-4d9a-8178-c99843560719</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7f88c3e8-e667-4d9a-8178-c99843560719_disk">
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/7f88c3e8-e667-4d9a-8178-c99843560719_disk.config">
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:40:06 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:29:93:6c"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <target dev="tap83f684f5-d7"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:0c:2b:99"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <target dev="tap454bebe0-52"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/console.log" append="off"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:40:06 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:40:06 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:06 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:06 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:06 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Preparing to wait for external event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Preparing to wait for external event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.411 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.411 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.412 253665 DEBUG os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f684f5-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.420 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83f684f5-d7, col_values=(('external_ids', {'iface-id': '83f684f5-d7e5-44a8-960d-efe4ce81e023', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:93:6c', 'vm-uuid': '7f88c3e8-e667-4d9a-8178-c99843560719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.421 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:06 compute-0 NetworkManager[48920]: <info>  [1763804406.4243] manager: (tap83f684f5-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.434 253665 INFO os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7')
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.435 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.435 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.436 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.436 253665 DEBUG os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap454bebe0-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap454bebe0-52, col_values=(('external_ids', {'iface-id': '454bebe0-5237-48cb-8cf5-10be46f6d33a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:2b:99', 'vm-uuid': '7f88c3e8-e667-4d9a-8178-c99843560719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 NetworkManager[48920]: <info>  [1763804406.4419] manager: (tap454bebe0-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.449 253665 INFO os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52')
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.523 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:29:93:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:0c:2b:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.525 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Using config drive
Nov 22 09:40:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4050339384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3045376775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:06 compute-0 nova_compute[253661]: 2025-11-22 09:40:06.560 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:07 compute-0 wizardly_khayyam[382452]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:40:07 compute-0 wizardly_khayyam[382452]: --> relative data size: 1.0
Nov 22 09:40:07 compute-0 wizardly_khayyam[382452]: --> All data devices are unavailable
Nov 22 09:40:07 compute-0 systemd[1]: libpod-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Deactivated successfully.
Nov 22 09:40:07 compute-0 systemd[1]: libpod-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Consumed 1.110s CPU time.
Nov 22 09:40:07 compute-0 podman[382433]: 2025-11-22 09:40:07.323677533 +0000 UTC m=+1.436353851 container died 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07-merged.mount: Deactivated successfully.
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.753 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating config drive at /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.758 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87djkys6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:07 compute-0 podman[382433]: 2025-11-22 09:40:07.760553427 +0000 UTC m=+1.873229765 container remove 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:07 compute-0 ceph-mon[75021]: pgmap v2346: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 09:40:07 compute-0 sudo[382289]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:07 compute-0 systemd[1]: libpod-conmon-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Deactivated successfully.
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.811 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 454bebe0-5237-48cb-8cf5-10be46f6d33a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.812 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.831 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:07 compute-0 sudo[382536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:07 compute-0 sudo[382536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:07 compute-0 sudo[382536]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:07 compute-0 sudo[382563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.918 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87djkys6" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:07 compute-0 sudo[382563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:07 compute-0 sudo[382563]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.948 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:07 compute-0 nova_compute[253661]: 2025-11-22 09:40:07.954 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:07 compute-0 sudo[382589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:07 compute-0 sudo[382589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:07 compute-0 sudo[382589]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 09:40:08 compute-0 sudo[382632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:40:08 compute-0 sudo[382632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.210 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.212 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deleting local config drive /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config because it was imported into RBD.
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.2841] manager: (tap83f684f5-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Nov 22 09:40:08 compute-0 kernel: tap83f684f5-d7: entered promiscuous mode
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01318|binding|INFO|Claiming lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 for this chassis.
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01319|binding|INFO|83f684f5-d7e5-44a8-960d-efe4ce81e023: Claiming fa:16:3e:29:93:6c 10.100.0.14
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.299 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:93:6c 10.100.0.14'], port_security=['fa:16:3e:29:93:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=83f684f5-d7e5-44a8-960d-efe4ce81e023) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.300 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 83f684f5-d7e5-44a8-960d-efe4ce81e023 in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 bound to our chassis
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.303 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.3161] manager: (tap454bebe0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Nov 22 09:40:08 compute-0 kernel: tap454bebe0-52: entered promiscuous mode
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01320|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 ovn-installed in OVS
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01321|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 up in Southbound
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.327 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63c9a075-6853-498b-912a-e335acd40e60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01322|if_status|INFO|Dropped 14 log messages in last 44 seconds (most recently, 44 seconds ago) due to excessive rate
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01323|if_status|INFO|Not updating pb chassis for 454bebe0-5237-48cb-8cf5-10be46f6d33a now as sb is readonly
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01324|binding|INFO|Claiming lport 454bebe0-5237-48cb-8cf5-10be46f6d33a for this chassis.
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01325|binding|INFO|454bebe0-5237-48cb-8cf5-10be46f6d33a: Claiming fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99
Nov 22 09:40:08 compute-0 systemd-udevd[382725]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:40:08 compute-0 systemd-udevd[382721]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.335 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], port_security=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0c:2b99/64 2001:db8::f816:3eff:fe0c:2b99/64', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=454bebe0-5237-48cb-8cf5-10be46f6d33a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01326|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a ovn-installed in OVS
Nov 22 09:40:08 compute-0 ovn_controller[152872]: 2025-11-22T09:40:08Z|01327|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a up in Southbound
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.3492] device (tap454bebe0-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.3501] device (tap454bebe0-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.3526] device (tap83f684f5-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:40:08 compute-0 NetworkManager[48920]: <info>  [1763804408.3536] device (tap83f684f5-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:40:08 compute-0 systemd-machined[215941]: New machine qemu-156-instance-0000007d.
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.384 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0fedb400-2e5e-4b99-a7d8-e6885e1991e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8fbc20-6412-4adf-be4b-8a00d2ac030e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007d.
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[19224d84-f53a-4fb0-baf0-e3ee7855b0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d50c73a-da45-41de-a667-9a82d4a7ae4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382754, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.465261113 +0000 UTC m=+0.051847581 container create 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.475 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[427b96c3-dc76-4882-a09e-a0f8ddbe83c4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721638, 'tstamp': 721638}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382761, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721641, 'tstamp': 721641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382761, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.478 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.501 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.503 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 454bebe0-5237-48cb-8cf5-10be46f6d33a in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.505 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d62d1f03-bbba-450b-9126-c1b290c04b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 systemd[1]: Started libpod-conmon-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope.
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.445008418 +0000 UTC m=+0.031594906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.568 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe33ea10-6dba-44bc-a06d-d58e4bb0a025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.572 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec58e6bf-ff32-45ea-8986-a38f1931b86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.581686038 +0000 UTC m=+0.168272506 container init 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.590693861 +0000 UTC m=+0.177280329 container start 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:08 compute-0 thirsty_tesla[382767]: 167 167
Nov 22 09:40:08 compute-0 systemd[1]: libpod-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope: Deactivated successfully.
Nov 22 09:40:08 compute-0 conmon[382767]: conmon 82edc5e6290b15f0f133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope/container/memory.events
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.599393678 +0000 UTC m=+0.185980166 container attach 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.600458754 +0000 UTC m=+0.187045222 container died 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.611 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d0a4df-6d06-495e-88b9-d9483299b4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf6c8669fd2b72544fd3133b23252ffcfd04eb907d89576f71bf24064b1c1530-merged.mount: Deactivated successfully.
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.638 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cea10039-706d-42b9-9c7a-9cfd81edd90c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 22099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382783, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.660 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ed81a7-4f39-481e-b4ac-15c72830730d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20228844-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721733, 'tstamp': 721733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382786, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.662 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.665 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:08 compute-0 podman[382740]: 2025-11-22 09:40:08.666449225 +0000 UTC m=+0.253035693 container remove 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:40:08 compute-0 systemd[1]: libpod-conmon-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope: Deactivated successfully.
Nov 22 09:40:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.886 253665 DEBUG nova.compute.manager [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:08 compute-0 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG nova.compute.manager [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Processing event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:40:08 compute-0 podman[382801]: 2025-11-22 09:40:08.910453134 +0000 UTC m=+0.052473157 container create 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:08 compute-0 systemd[1]: Started libpod-conmon-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope.
Nov 22 09:40:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:08 compute-0 podman[382801]: 2025-11-22 09:40:08.883422761 +0000 UTC m=+0.025442814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:09 compute-0 podman[382801]: 2025-11-22 09:40:09.00279466 +0000 UTC m=+0.144814683 container init 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:40:09 compute-0 podman[382801]: 2025-11-22 09:40:09.01369071 +0000 UTC m=+0.155710733 container start 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:09 compute-0 podman[382801]: 2025-11-22 09:40:09.017371852 +0000 UTC m=+0.159391975 container attach 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.049 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804409.0483472, 7f88c3e8-e667-4d9a-8178-c99843560719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Started (Lifecycle Event)
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804409.049988, 7f88c3e8-e667-4d9a-8178-c99843560719 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.075 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Paused (Lifecycle Event)
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.091 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:40:09 compute-0 nova_compute[253661]: 2025-11-22 09:40:09.110 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:40:09 compute-0 ceph-mon[75021]: pgmap v2347: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]: {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     "0": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "devices": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "/dev/loop3"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             ],
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_name": "ceph_lv0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_size": "21470642176",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "name": "ceph_lv0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "tags": {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_name": "ceph",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.crush_device_class": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.encrypted": "0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_id": "0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.vdo": "0"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             },
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "vg_name": "ceph_vg0"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         }
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     ],
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     "1": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "devices": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "/dev/loop4"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             ],
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_name": "ceph_lv1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_size": "21470642176",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "name": "ceph_lv1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "tags": {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_name": "ceph",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.crush_device_class": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.encrypted": "0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_id": "1",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.vdo": "0"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             },
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "vg_name": "ceph_vg1"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         }
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     ],
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     "2": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "devices": [
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "/dev/loop5"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             ],
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_name": "ceph_lv2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_size": "21470642176",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "name": "ceph_lv2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "tags": {
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.cluster_name": "ceph",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.crush_device_class": "",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.encrypted": "0",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osd_id": "2",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:                 "ceph.vdo": "0"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             },
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "type": "block",
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:             "vg_name": "ceph_vg2"
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:         }
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]:     ]
Nov 22 09:40:09 compute-0 competent_stonebraker[382852]: }
Nov 22 09:40:09 compute-0 systemd[1]: libpod-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope: Deactivated successfully.
Nov 22 09:40:09 compute-0 podman[382801]: 2025-11-22 09:40:09.886275741 +0000 UTC m=+1.028295764 container died 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44-merged.mount: Deactivated successfully.
Nov 22 09:40:09 compute-0 podman[382801]: 2025-11-22 09:40:09.952296363 +0000 UTC m=+1.094316386 container remove 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:40:09 compute-0 systemd[1]: libpod-conmon-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope: Deactivated successfully.
Nov 22 09:40:09 compute-0 sudo[382632]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Nov 22 09:40:10 compute-0 sudo[382872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:10 compute-0 sudo[382872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:10 compute-0 sudo[382872]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:10 compute-0 sudo[382897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:40:10 compute-0 sudo[382897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:10 compute-0 sudo[382897]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:10 compute-0 sudo[382922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:10 compute-0 sudo[382922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:10 compute-0 sudo[382922]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:10 compute-0 sudo[382947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:40:10 compute-0 sudo[382947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.61324688 +0000 UTC m=+0.045446722 container create 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:40:10 compute-0 systemd[1]: Started libpod-conmon-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope.
Nov 22 09:40:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.685945468 +0000 UTC m=+0.118145330 container init 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.592741409 +0000 UTC m=+0.024941271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.695265359 +0000 UTC m=+0.127465201 container start 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:10 compute-0 bold_gauss[383028]: 167 167
Nov 22 09:40:10 compute-0 systemd[1]: libpod-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope: Deactivated successfully.
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.700920019 +0000 UTC m=+0.133119911 container attach 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.702093379 +0000 UTC m=+0.134293221 container died 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a555f728f317851c9ef9548b238d46db31aacaf547e960bca3b8ff64dd2c642-merged.mount: Deactivated successfully.
Nov 22 09:40:10 compute-0 podman[383011]: 2025-11-22 09:40:10.768558512 +0000 UTC m=+0.200758354 container remove 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:10 compute-0 systemd[1]: libpod-conmon-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope: Deactivated successfully.
Nov 22 09:40:11 compute-0 podman[383052]: 2025-11-22 09:40:11.009493144 +0000 UTC m=+0.051730088 container create 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.021 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No event matching network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 in dict_keys([('network-vif-plugged', '454bebe0-5237-48cb-8cf5-10be46f6d33a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 WARNING nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with vm_state building and task_state spawning.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Processing event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.027 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.027 253665 WARNING nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with vm_state building and task_state spawning.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.028 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.034 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804411.033806, 7f88c3e8-e667-4d9a-8178-c99843560719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.034 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Resumed (Lifecycle Event)
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.036 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.041 253665 INFO nova.virt.libvirt.driver [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance spawned successfully.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.041 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.054 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.064 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.064 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.065 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:11 compute-0 systemd[1]: Started libpod-conmon-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.071 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:40:11 compute-0 podman[383052]: 2025-11-22 09:40:10.985469577 +0000 UTC m=+0.027706531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.094 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:40:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.119 253665 INFO nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 13.02 seconds to spawn the instance on the hypervisor.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.120 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:11 compute-0 podman[383052]: 2025-11-22 09:40:11.123922929 +0000 UTC m=+0.166159903 container init 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:40:11 compute-0 podman[383052]: 2025-11-22 09:40:11.134132983 +0000 UTC m=+0.176369927 container start 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:40:11 compute-0 podman[383052]: 2025-11-22 09:40:11.141544577 +0000 UTC m=+0.183781531 container attach 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.189 253665 INFO nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 13.99 seconds to build instance.
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.202 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:11 compute-0 nova_compute[253661]: 2025-11-22 09:40:11.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:11 compute-0 ceph-mon[75021]: pgmap v2348: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Nov 22 09:40:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]: {
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_id": 1,
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "type": "bluestore"
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     },
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_id": 0,
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "type": "bluestore"
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     },
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_id": 2,
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:         "type": "bluestore"
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]:     }
Nov 22 09:40:12 compute-0 wizardly_franklin[383068]: }
Nov 22 09:40:12 compute-0 systemd[1]: libpod-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Deactivated successfully.
Nov 22 09:40:12 compute-0 systemd[1]: libpod-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Consumed 1.101s CPU time.
Nov 22 09:40:12 compute-0 podman[383052]: 2025-11-22 09:40:12.262670828 +0000 UTC m=+1.304907772 container died 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834-merged.mount: Deactivated successfully.
Nov 22 09:40:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:40:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:40:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:40:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:40:12 compute-0 podman[383052]: 2025-11-22 09:40:12.395972833 +0000 UTC m=+1.438209767 container remove 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:40:12 compute-0 systemd[1]: libpod-conmon-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Deactivated successfully.
Nov 22 09:40:12 compute-0 sudo[382947]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:40:12 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:40:12 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:12 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bdbad584-66db-4b20-938f-fc9c4bb40a4d does not exist
Nov 22 09:40:12 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ae7a0bdc-fd07-424c-be03-3bb667de7821 does not exist
Nov 22 09:40:12 compute-0 sudo[383113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:40:12 compute-0 sudo[383113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:12 compute-0 sudo[383113]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:12 compute-0 sudo[383138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:40:12 compute-0 sudo[383138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:40:12 compute-0 sudo[383138]: pam_unix(sudo:session): session closed for user root
Nov 22 09:40:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:40:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:40:12 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:12 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:40:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:13 compute-0 ceph-mon[75021]: pgmap v2349: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Nov 22 09:40:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 973 KiB/s rd, 378 KiB/s wr, 55 op/s
Nov 22 09:40:14 compute-0 nova_compute[253661]: 2025-11-22 09:40:14.872 253665 DEBUG nova.compute.manager [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:14 compute-0 nova_compute[253661]: 2025-11-22 09:40:14.873 253665 DEBUG nova.compute.manager [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:14 compute-0 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:14 compute-0 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:14 compute-0 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:14 compute-0 ceph-mon[75021]: pgmap v2350: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 973 KiB/s rd, 378 KiB/s wr, 55 op/s
Nov 22 09:40:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 35 KiB/s wr, 43 op/s
Nov 22 09:40:16 compute-0 nova_compute[253661]: 2025-11-22 09:40:16.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:16 compute-0 nova_compute[253661]: 2025-11-22 09:40:16.432 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:16 compute-0 nova_compute[253661]: 2025-11-22 09:40:16.432 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:16 compute-0 nova_compute[253661]: 2025-11-22 09:40:16.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:16 compute-0 nova_compute[253661]: 2025-11-22 09:40:16.447 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:17 compute-0 ceph-mon[75021]: pgmap v2351: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 35 KiB/s wr, 43 op/s
Nov 22 09:40:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 76 op/s
Nov 22 09:40:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:19 compute-0 ceph-mon[75021]: pgmap v2352: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 76 op/s
Nov 22 09:40:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Nov 22 09:40:20 compute-0 nova_compute[253661]: 2025-11-22 09:40:20.985 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:20 compute-0 nova_compute[253661]: 2025-11-22 09:40:20.986 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.003 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.093 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.094 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.107 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.107 253665 INFO nova.compute.claims [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.347 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:21 compute-0 ceph-mon[75021]: pgmap v2353: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Nov 22 09:40:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259479379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.814 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.819 253665 DEBUG nova.compute.provider_tree [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.844 253665 DEBUG nova.scheduler.client.report [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.878 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.878 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.929 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.929 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.952 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:40:21 compute-0 nova_compute[253661]: 2025-11-22 09:40:21.972 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.062 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.064 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.064 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating image(s)
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.092 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.127 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.159 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.164 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.258 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.259 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.260 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.287 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.292 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:22 compute-0 nova_compute[253661]: 2025-11-22 09:40:22.371 253665 DEBUG nova.policy [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:40:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4259479379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.078 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.164 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.404 253665 DEBUG nova.objects.instance [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.420 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.420 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Ensure instance console log exists: /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:23 compute-0 nova_compute[253661]: 2025-11-22 09:40:23.440 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Successfully created port: 01cd64f6-47ab-4640-ae46-6834065ff09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:40:23 compute-0 ceph-mon[75021]: pgmap v2354: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Nov 22 09:40:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Nov 22 09:40:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:24.825 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:24.826 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:40:24 compute-0 nova_compute[253661]: 2025-11-22 09:40:24.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:24 compute-0 ceph-mon[75021]: pgmap v2355: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Nov 22 09:40:24 compute-0 nova_compute[253661]: 2025-11-22 09:40:24.986 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Successfully updated port: 01cd64f6-47ab-4640-ae46-6834065ff09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.000 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.001 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.001 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.069 253665 DEBUG nova.compute.manager [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-changed-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.069 253665 DEBUG nova.compute.manager [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Refreshing instance network info cache due to event network-changed-01cd64f6-47ab-4640-ae46-6834065ff09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.070 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:25 compute-0 nova_compute[253661]: 2025-11-22 09:40:25.150 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:40:25 compute-0 podman[383351]: 2025-11-22 09:40:25.393525284 +0000 UTC m=+0.069667182 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 09:40:25 compute-0 podman[383352]: 2025-11-22 09:40:25.431285256 +0000 UTC m=+0.107228629 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:40:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.229 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.245 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.246 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance network_info: |[{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.246 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.247 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Refreshing network info cache for port 01cd64f6-47ab-4640-ae46-6834065ff09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.251 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start _get_guest_xml network_info=[{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.257 253665 WARNING nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.269 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.270 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.275 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.277 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.278 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.278 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.285 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:40:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576432687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.768 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.795 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:26 compute-0 nova_compute[253661]: 2025-11-22 09:40:26.802 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:27 compute-0 ceph-mon[75021]: pgmap v2356: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Nov 22 09:40:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3576432687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:40:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629522887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.279 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.282 253665 DEBUG nova.virt.libvirt.vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:40:22Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.283 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.284 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.286 253665 DEBUG nova.objects.instance [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.300 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <uuid>aaeb1088-1220-47e3-9462-ba96b1d4e87a</uuid>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <name>instance-0000007e</name>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427</nova:name>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:40:26</nova:creationTime>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <nova:port uuid="01cd64f6-47ab-4640-ae46-6834065ff09b">
Nov 22 09:40:27 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="serial">aaeb1088-1220-47e3-9462-ba96b1d4e87a</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="uuid">aaeb1088-1220-47e3-9462-ba96b1d4e87a</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk">
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config">
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:40:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:3b:50:06"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <target dev="tap01cd64f6-47"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/console.log" append="off"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:40:27 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:40:27 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:27 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:27 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:27 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.301 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Preparing to wait for external event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.301 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.302 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.302 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.303 253665 DEBUG nova.virt.libvirt.vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:40:22Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.303 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.304 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.305 253665 DEBUG os_vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cd64f6-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.315 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cd64f6-47, col_values=(('external_ids', {'iface-id': '01cd64f6-47ab-4640-ae46-6834065ff09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:50:06', 'vm-uuid': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:27 compute-0 NetworkManager[48920]: <info>  [1763804427.3185] manager: (tap01cd64f6-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.328 253665 INFO os_vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47')
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.398 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.398 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.399 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:3b:50:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.399 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Using config drive
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.422 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:27 compute-0 ovn_controller[152872]: 2025-11-22T09:40:27Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:93:6c 10.100.0.14
Nov 22 09:40:27 compute-0 ovn_controller[152872]: 2025-11-22T09:40:27Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:93:6c 10.100.0.14
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.927 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating config drive at /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.933 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ni59n8a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.984 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.993 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updated VIF entry in instance network info cache for port 01cd64f6-47ab-4640-ae46-6834065ff09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:27 compute-0 nova_compute[253661]: 2025-11-22 09:40:27.994 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.019 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Nov 22 09:40:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2629522887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.100 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ni59n8a" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.129 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.133 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.357 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.358 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deleting local config drive /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config because it was imported into RBD.
Nov 22 09:40:28 compute-0 auditd[703]: Audit daemon rotating log files
Nov 22 09:40:28 compute-0 kernel: tap01cd64f6-47: entered promiscuous mode
Nov 22 09:40:28 compute-0 NetworkManager[48920]: <info>  [1763804428.4227] manager: (tap01cd64f6-47): new Tun device (/org/freedesktop/NetworkManager/Devices/546)
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:28 compute-0 ovn_controller[152872]: 2025-11-22T09:40:28Z|01328|binding|INFO|Claiming lport 01cd64f6-47ab-4640-ae46-6834065ff09b for this chassis.
Nov 22 09:40:28 compute-0 ovn_controller[152872]: 2025-11-22T09:40:28Z|01329|binding|INFO|01cd64f6-47ab-4640-ae46-6834065ff09b: Claiming fa:16:3e:3b:50:06 10.100.0.5
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:50:06 10.100.0.5'], port_security=['fa:16:3e:3b:50:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01cd64f6-47ab-4640-ae46-6834065ff09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.450 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd64f6-47ab-4640-ae46-6834065ff09b in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 bound to our chassis
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.454 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 09:40:28 compute-0 ovn_controller[152872]: 2025-11-22T09:40:28Z|01330|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b ovn-installed in OVS
Nov 22 09:40:28 compute-0 ovn_controller[152872]: 2025-11-22T09:40:28Z|01331|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b up in Southbound
Nov 22 09:40:28 compute-0 systemd-udevd[383525]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.478 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80928e-f9aa-4fc9-ad80-ee589d843340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 NetworkManager[48920]: <info>  [1763804428.4826] device (tap01cd64f6-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:40:28 compute-0 NetworkManager[48920]: <info>  [1763804428.4838] device (tap01cd64f6-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:40:28 compute-0 systemd-machined[215941]: New machine qemu-157-instance-0000007e.
Nov 22 09:40:28 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007e.
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.524 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d99c2cc3-6a16-4a21-b24c-48e9ebd5df4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.529 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44322403-6499-466c-aa57-2283bc7e4a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.564 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79d40de6-9d3a-4553-8221-920ce98f9e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.585 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2290a456-5604-4541-9b09-ab9a4cc71d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383538, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97c85a47-d1a4-4161-ac59-dc28a8e8d925]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722937, 'tstamp': 722937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383540, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722940, 'tstamp': 722940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383540, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.607 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG nova.compute.manager [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.807 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.807 253665 DEBUG nova.compute.manager [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Processing event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.867 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.867 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8662567, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.868 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Started (Lifecycle Event)
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.872 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.875 253665 INFO nova.virt.libvirt.driver [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance spawned successfully.
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.875 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:40:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.891 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.897 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:28 compute-0 nova_compute[253661]: 2025-11-22 09:40:28.904 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.049 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.050 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8665924, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Paused (Lifecycle Event)
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.070 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8708794, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.076 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Resumed (Lifecycle Event)
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.080 253665 INFO nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 7.02 seconds to spawn the instance on the hypervisor.
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.080 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.100 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:40:29 compute-0 ceph-mon[75021]: pgmap v2357: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.130 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.158 253665 INFO nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 8.09 seconds to build instance.
Nov 22 09:40:29 compute-0 nova_compute[253661]: 2025-11-22 09:40:29.184 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 480 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Nov 22 09:40:30 compute-0 podman[383584]: 2025-11-22 09:40:30.416771049 +0000 UTC m=+0.100087294 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.889 253665 DEBUG nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.889 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.890 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.890 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.891 253665 DEBUG nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:30 compute-0 nova_compute[253661]: 2025-11-22 09:40:30.891 253665 WARNING nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received unexpected event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with vm_state active and task_state None.
Nov 22 09:40:31 compute-0 ceph-mon[75021]: pgmap v2358: 305 pgs: 305 active+clean; 480 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Nov 22 09:40:31 compute-0 nova_compute[253661]: 2025-11-22 09:40:31.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 476 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 22 09:40:32 compute-0 nova_compute[253661]: 2025-11-22 09:40:32.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:32.828 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:33 compute-0 ceph-mon[75021]: pgmap v2359: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 476 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 22 09:40:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Nov 22 09:40:35 compute-0 ceph-mon[75021]: pgmap v2360: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Nov 22 09:40:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Nov 22 09:40:36 compute-0 nova_compute[253661]: 2025-11-22 09:40:36.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 ceph-mon[75021]: pgmap v2361: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.226 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.227 253665 INFO nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Terminating instance
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.228 253665 DEBUG nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:37 compute-0 kernel: tap54a61ee9-1f (unregistering): left promiscuous mode
Nov 22 09:40:37 compute-0 NetworkManager[48920]: <info>  [1763804437.3039] device (tap54a61ee9-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:37 compute-0 ovn_controller[152872]: 2025-11-22T09:40:37Z|01332|binding|INFO|Releasing lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb from this chassis (sb_readonly=0)
Nov 22 09:40:37 compute-0 ovn_controller[152872]: 2025-11-22T09:40:37Z|01333|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb down in Southbound
Nov 22 09:40:37 compute-0 ovn_controller[152872]: 2025-11-22T09:40:37Z|01334|binding|INFO|Removing iface tap54a61ee9-1f ovn-installed in OVS
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:21:e3 10.100.0.20'], port_security=['fa:16:3e:58:21:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd7865a13-0d41-44d6-aac2-10cca6e1348a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268f1f5a-a38b-4a4b-99c8-6f247601dc2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=54a61ee9-1fb8-4c5c-8716-613fc3355afb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.334 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 54a61ee9-1fb8-4c5c-8716-613fc3355afb in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 unbound from our chassis
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae60f038-6c80-46ff-adc5-cfcffa3c3a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 22 09:40:37 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Consumed 16.237s CPU time.
Nov 22 09:40:37 compute-0 systemd-machined[215941]: Machine qemu-154-instance-0000007b terminated.
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.410 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[830ea263-1ea1-4223-b2a1-094f8aac1d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[868d9f93-e309-4de5-91ee-83c8270905fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.468 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed5802b-5032-4110-a232-7c2ca694bdc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.484 253665 INFO nova.virt.libvirt.driver [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance destroyed successfully.
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.486 253665 DEBUG nova.objects.instance [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.504 253665 DEBUG nova.virt.libvirt.vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:31Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.506 253665 DEBUG nova.network.os_vif_util [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.507 253665 DEBUG nova.network.os_vif_util [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.507 253665 DEBUG os_vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.510 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a61ee9-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bea3fd2-1cf4-4df2-9533-176cd9bc1e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 38410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383628, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.519 253665 INFO os_vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f')
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.538 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0ca198-1191-4340-bead-a9e3a030e0be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720183, 'tstamp': 720183}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383633, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720186, 'tstamp': 720186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383633, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.541 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.580 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.581 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.583 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.583 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.672 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.673 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:37 compute-0 nova_compute[253661]: 2025-11-22 09:40:37.675 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.184 253665 INFO nova.virt.libvirt.driver [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deleting instance files /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a_del
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.186 253665 INFO nova.virt.libvirt.driver [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deletion of /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a_del complete
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.244 253665 INFO nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 1.02 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.245 253665 DEBUG oslo.service.loopingcall [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.246 253665 DEBUG nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:38 compute-0 nova_compute[253661]: 2025-11-22 09:40:38.246 253665 DEBUG nova.network.neutron [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:39 compute-0 ceph-mon[75021]: pgmap v2362: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.229 253665 DEBUG nova.network.neutron [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.249 253665 INFO nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 1.00 seconds to deallocate network for instance.
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.287 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.288 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.355 253665 DEBUG nova.compute.manager [req-326ca425-5747-4abd-96c7-d08c2aa7f5db req-7e1caab8-364f-4044-bce5-7ef405fa015a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-deleted-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.449 253665 DEBUG oslo_concurrency.processutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.853 253665 DEBUG nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.855 253665 DEBUG nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.855 253665 WARNING nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received unexpected event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with vm_state deleted and task_state None.
Nov 22 09:40:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131172211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.980 253665 DEBUG oslo_concurrency.processutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:39 compute-0 nova_compute[253661]: 2025-11-22 09:40:39.987 253665 DEBUG nova.compute.provider_tree [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.014 253665 DEBUG nova.scheduler.client.report [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 432 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.040 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.068 253665 INFO nova.scheduler.client.report [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance d7865a13-0d41-44d6-aac2-10cca6e1348a
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.148 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/131172211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.697 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.698 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:40 compute-0 nova_compute[253661]: 2025-11-22 09:40:40.698 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.065 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.065 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.077 253665 DEBUG nova.objects.instance [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.094 253665 DEBUG nova.virt.libvirt.vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.095 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.096 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.103 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.108 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.113 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Attempting to detach device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.113 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <target dev="tapb9b8fcd6-fb"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </interface>
Nov 22 09:40:41 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.124 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.131 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <name>instance-00000079</name>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='tap12ab8505-5a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:d7:d1:9a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='tapb9b8fcd6-fb'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='net1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </target>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </console>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:41 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.132 253665 INFO nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the persistent domain config.
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.132 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] (1/8): Attempting to detach device tapb9b8fcd6-fb with device alias net1 from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.133 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <model type="virtio"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <mtu size="1442"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <target dev="tapb9b8fcd6-fb"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </interface>
Nov 22 09:40:41 compute-0 nova_compute[253661]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 09:40:41 compute-0 ceph-mon[75021]: pgmap v2363: 305 pgs: 305 active+clean; 432 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Nov 22 09:40:41 compute-0 kernel: tapb9b8fcd6-fb (unregistering): left promiscuous mode
Nov 22 09:40:41 compute-0 NetworkManager[48920]: <info>  [1763804441.2415] device (tapb9b8fcd6-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 ovn_controller[152872]: 2025-11-22T09:40:41Z|01335|binding|INFO|Releasing lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f from this chassis (sb_readonly=0)
Nov 22 09:40:41 compute-0 ovn_controller[152872]: 2025-11-22T09:40:41Z|01336|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f down in Southbound
Nov 22 09:40:41 compute-0 ovn_controller[152872]: 2025-11-22T09:40:41Z|01337|binding|INFO|Removing iface tapb9b8fcd6-fb ovn-installed in OVS
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:d1:9a 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.258 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 unbound from our chassis
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.260 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[188e3300-f4ec-4894-bd13-7e9a9b1402c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.264 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 namespace which is not needed anymore
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.267 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763804441.2672188, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.273 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Start waiting for the detach event from libvirt for device tapb9b8fcd6-fb with device alias net1 for instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.273 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.278 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <name>instance-00000079</name>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target dev='tap12ab8505-5a'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       </target>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </console>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:41 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.278 253665 INFO nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the live domain config.
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.279 253665 DEBUG nova.virt.libvirt.vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.279 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.280 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.280 253665 DEBUG os_vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.283 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b8fcd6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.285 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.296 253665 INFO os_vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.298 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:41 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:41 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:41 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:41 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:41 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : haproxy version is 2.8.14-c23fe91
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : path to executable is /usr/sbin/haproxy
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : Exiting Master process...
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : Exiting Master process...
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [ALERT]    (380322) : Current worker (380324) exited with code 143 (Terminated)
Nov 22 09:40:41 compute-0 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : All workers exited. Exiting... (0)
Nov 22 09:40:41 compute-0 systemd[1]: libpod-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope: Deactivated successfully.
Nov 22 09:40:41 compute-0 podman[383698]: 2025-11-22 09:40:41.418942888 +0000 UTC m=+0.056065099 container died 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.448 253665 DEBUG nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.451 253665 DEBUG nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.451 253665 WARNING nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.
Nov 22 09:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20-userdata-shm.mount: Deactivated successfully.
Nov 22 09:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f89d97ed2ee8aa5d0ef9bc0221d90f0d63faaffe83c22b65aecd4abf2565382-merged.mount: Deactivated successfully.
Nov 22 09:40:41 compute-0 podman[383698]: 2025-11-22 09:40:41.488909726 +0000 UTC m=+0.126031937 container cleanup 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:40:41 compute-0 systemd[1]: libpod-conmon-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope: Deactivated successfully.
Nov 22 09:40:41 compute-0 podman[383726]: 2025-11-22 09:40:41.561392445 +0000 UTC m=+0.047493341 container remove 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e5da1-5c8b-4e5a-b55d-18f0fb0651e5]: (4, ('Sat Nov 22 09:40:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 (87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20)\n87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20\nSat Nov 22 09:40:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 (87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20)\n87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae01796-7fa4-49b8-91eb-88385fe99530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.571 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 kernel: tap30756ec6-10: left promiscuous mode
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 nova_compute[253661]: 2025-11-22 09:40:41.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f991b9-bf64-4188-91f5-1376bca3c6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34a63f23-6c12-4d94-80da-9fe22f251353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.618 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02506e95-5db4-4dbe-bce0-b12e27d8acc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[403ef274-8597-4bda-854a-8b51180c1551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720159, 'reachable_time': 39906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383742, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d30756ec6\x2d103b\x2d4571\x2da5dc\x2d9b4a481bc5b1.mount: Deactivated successfully.
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.647 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:40:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.647 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[89919f23-7776-4f0b-be97-4c294743e25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.015 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.016 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.016 253665 DEBUG nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:40:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 494 KiB/s wr, 118 op/s
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.096 253665 DEBUG nova.compute.manager [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-deleted-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.097 253665 INFO nova.compute.manager [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Neutron deleted interface b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f; detaching it from the instance and deleting it from the info cache
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.097 253665 DEBUG nova.network.neutron [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.121 253665 DEBUG nova.objects.instance [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.141 253665 DEBUG nova.objects.instance [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.165 253665 DEBUG nova.virt.libvirt.vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.165 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.166 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.169 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.174 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <name>instance-00000079</name>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='tap12ab8505-5a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </target>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </console>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:42 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.175 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.185 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <name>instance-00000079</name>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <memory unit='KiB'>131072</memory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <vcpu placement='static'>1</vcpu>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <resource>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <partition>/machine</partition>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </resource>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <sysinfo type='smbios'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='manufacturer'>RDO</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='product'>OpenStack Compute</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <entry name='family'>Virtual Machine</entry>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <boot dev='hd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <smbios mode='sysinfo'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <vmcoreinfo state='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <cpu mode='custom' match='exact' check='full'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <vendor>AMD</vendor>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='x2apic'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc-deadline'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='hypervisor'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='tsc_adjust'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='spec-ctrl'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='stibp'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='cmp_legacy'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='overflow-recov'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='succor'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='ibrs'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='amd-ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='virt-ssbd'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='lbrv'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='tsc-scale'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='vmcb-clean'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='flushbyasid'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='pause-filter'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='pfthreshold'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='xsaves'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='svm'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='require' name='topoext'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='npt'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <feature policy='disable' name='nrip-save'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <clock offset='utc'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='pit' tickpolicy='delay'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <timer name='hpet' present='no'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_poweroff>destroy</on_poweroff>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_reboot>restart</on_reboot>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <on_crash>destroy</on_crash>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <disk type='network' device='disk'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='vda' bus='virtio'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='virtio-disk0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <disk type='network' device='cdrom'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='qemu' type='raw' cache='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <auth username='openstack'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <host name='192.168.122.100' port='6789'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='sda' bus='sata'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <readonly/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='sata0-0-0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='0' model='pcie-root'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pcie.0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='1' port='0x10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='2' port='0x11'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='3' port='0x12'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='4' port='0x13'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='5' port='0x14'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='6' port='0x15'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='7' port='0x16'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='8' port='0x17'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.8'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='9' port='0x18'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.9'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='10' port='0x19'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='11' port='0x1a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.11'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='12' port='0x1b'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.12'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='13' port='0x1c'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.13'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='14' port='0x1d'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.14'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='15' port='0x1e'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.15'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='16' port='0x1f'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.16'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='17' port='0x20'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.17'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='18' port='0x21'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.18'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='19' port='0x22'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.19'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='20' port='0x23'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.20'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='21' port='0x24'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.21'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='22' port='0x25'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.22'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='23' port='0x26'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.23'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='24' port='0x27'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.24'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-root-port'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target chassis='25' port='0x28'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.25'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model name='pcie-pci-bridge'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='pci.26'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='usb'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <controller type='sata' index='0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='ide'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </controller>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <interface type='ethernet'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target dev='tap12ab8505-5a'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model type='virtio'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <driver name='vhost' rx_queue_size='512'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <mtu size='1442'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='net0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <serial type='pty'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target type='isa-serial' port='0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:         <model name='isa-serial'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       </target>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <console type='pty' tty='/dev/pts/0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <source path='/dev/pts/0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <target type='serial' port='0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='serial0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </console>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='tablet' bus='usb'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='usb' bus='0' port='1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='mouse' bus='ps2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input1'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <input type='keyboard' bus='ps2'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='input2'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </input>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <listen type='address' address='::0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </graphics>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <audio id='1' type='none'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <model type='virtio' heads='1' primary='yes'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='video0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <watchdog model='itco' action='reset'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='watchdog0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </watchdog>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <memballoon model='virtio'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <stats period='10'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='balloon0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <rng model='virtio'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <backend model='random'>/dev/urandom</backend>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <alias name='rng0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <label>+107:+107</label>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <imagelabel>+107:+107</imagelabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </seclabel>
Nov 22 09:40:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:40:42 compute-0 nova_compute[253661]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.185 253665 WARNING nova.virt.libvirt.driver [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Detaching interface fa:16:3e:d7:d1:9a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapb9b8fcd6-fb' not found.
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.186 253665 DEBUG nova.virt.libvirt.vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.186 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.187 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.188 253665 DEBUG os_vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b8fcd6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.192 253665 INFO os_vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')
Nov 22 09:40:42 compute-0 nova_compute[253661]: 2025-11-22 09:40:42.193 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:creationTime>2025-11-22 09:40:42</nova:creationTime>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:flavor name="m1.nano">
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:memory>128</nova:memory>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:disk>1</nova:disk>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:swap>0</nova:swap>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:vcpus>1</nova:vcpus>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:flavor>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:owner>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   <nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 09:40:42 compute-0 nova_compute[253661]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:40:42 compute-0 nova_compute[253661]:     </nova:port>
Nov 22 09:40:42 compute-0 nova_compute[253661]:   </nova:ports>
Nov 22 09:40:42 compute-0 nova_compute[253661]: </nova:instance>
Nov 22 09:40:42 compute-0 nova_compute[253661]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|01338|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|01339|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|01340|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|01341|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:43 compute-0 ceph-mon[75021]: pgmap v2364: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 494 KiB/s wr, 118 op/s
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.467 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.469 253665 INFO nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.470 253665 DEBUG nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.488 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:50:06 10.100.0.5
Nov 22 09:40:43 compute-0 ovn_controller[152872]: 2025-11-22T09:40:43Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:50:06 10.100.0.5
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.548 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.548 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.549 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.549 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.551 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.565 253665 DEBUG nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.565 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.566 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.566 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.567 253665 DEBUG nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:43 compute-0 nova_compute[253661]: 2025-11-22 09:40:43.567 253665 WARNING nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.
Nov 22 09:40:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 132 op/s
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.232 253665 DEBUG nova.compute.manager [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG nova.compute.manager [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.278 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.279 253665 INFO nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Terminating instance
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.280 253665 DEBUG nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:44 compute-0 kernel: tap12ab8505-5a (unregistering): left promiscuous mode
Nov 22 09:40:44 compute-0 NetworkManager[48920]: <info>  [1763804444.3272] device (tap12ab8505-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 ovn_controller[152872]: 2025-11-22T09:40:44Z|01342|binding|INFO|Releasing lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 from this chassis (sb_readonly=0)
Nov 22 09:40:44 compute-0 ovn_controller[152872]: 2025-11-22T09:40:44Z|01343|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 down in Southbound
Nov 22 09:40:44 compute-0 ovn_controller[152872]: 2025-11-22T09:40:44Z|01344|binding|INFO|Removing iface tap12ab8505-5a ovn-installed in OVS
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.359 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:a0:d3 10.100.0.3'], port_security=['fa:16:3e:30:a0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80502cee-0a40-4541-8461-41de74f7266c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c64167e3-035b-4f1b-bee4-b85857c701f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1bcb3a6-b65a-4848-8c3b-e1169d9ae614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=12ab8505-5ae2-427c-aaf6-9431683a99c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 12ab8505-5ae2-427c-aaf6-9431683a99c8 in datapath 80502cee-0a40-4541-8461-41de74f7266c unbound from our chassis
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.363 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80502cee-0a40-4541-8461-41de74f7266c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e150337b-e8fc-45d4-8897-8b364c653312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.364 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80502cee-0a40-4541-8461-41de74f7266c namespace which is not needed anymore
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 22 09:40:44 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 19.720s CPU time.
Nov 22 09:40:44 compute-0 systemd-machined[215941]: Machine qemu-152-instance-00000079 terminated.
Nov 22 09:40:44 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : haproxy version is 2.8.14-c23fe91
Nov 22 09:40:44 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : path to executable is /usr/sbin/haproxy
Nov 22 09:40:44 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [WARNING]  (378669) : Exiting Master process...
Nov 22 09:40:44 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [ALERT]    (378669) : Current worker (378671) exited with code 143 (Terminated)
Nov 22 09:40:44 compute-0 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [WARNING]  (378669) : All workers exited. Exiting... (0)
Nov 22 09:40:44 compute-0 systemd[1]: libpod-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope: Deactivated successfully.
Nov 22 09:40:44 compute-0 conmon[378665]: conmon 7e12ed01fc6355b1a9cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope/container/memory.events
Nov 22 09:40:44 compute-0 podman[383767]: 2025-11-22 09:40:44.501175154 +0000 UTC m=+0.047656405 container died 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.516 253665 INFO nova.virt.libvirt.driver [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance destroyed successfully.
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.517 253665 DEBUG nova.objects.instance [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.533 253665 DEBUG nova.virt.libvirt.vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.534 253665 DEBUG nova.network.os_vif_util [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.534 253665 DEBUG nova.network.os_vif_util [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f-userdata-shm.mount: Deactivated successfully.
Nov 22 09:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f37bc7976ef071fba5e665fbf02212daac86ba36a65621a352596852c57db43-merged.mount: Deactivated successfully.
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.541 253665 DEBUG os_vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.546 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12ab8505-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 podman[383767]: 2025-11-22 09:40:44.55346133 +0000 UTC m=+0.099942581 container cleanup 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.554 253665 INFO os_vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a')
Nov 22 09:40:44 compute-0 systemd[1]: libpod-conmon-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope: Deactivated successfully.
Nov 22 09:40:44 compute-0 podman[383805]: 2025-11-22 09:40:44.633147285 +0000 UTC m=+0.053480896 container remove 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f97663a6-1e3b-4560-88d7-8f1eb30e8932]: (4, ('Sat Nov 22 09:40:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c (7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f)\n7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f\nSat Nov 22 09:40:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c (7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f)\n7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7154f2-d704-4bae-9695-dafbd5cfe332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.645 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80502cee-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:44 compute-0 kernel: tap80502cee-00: left promiscuous mode
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 nova_compute[253661]: 2025-11-22 09:40:44.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.665 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1995449e-280e-4c5d-af9a-786cee99192e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.680 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c78e5a3f-dd42-4e2f-b449-ad718950a040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0a6015-b13e-4570-9220-2079657ea426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.699 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b57f3ee-04fa-4059-8378-01ccac4a2cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716974, 'reachable_time': 15804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383836, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d80502cee\x2d0a40\x2d4541\x2d8461\x2d41de74f7266c.mount: Deactivated successfully.
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.702 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80502cee-0a40-4541-8461-41de74f7266c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:40:44 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.702 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3658176e-6086-4607-83c8-ffeeb11ed332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.071 253665 INFO nova.virt.libvirt.driver [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deleting instance files /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_del
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.072 253665 INFO nova.virt.libvirt.driver [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deletion of /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_del complete
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.121 253665 INFO nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.122 253665 DEBUG oslo.service.loopingcall [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.123 253665 DEBUG nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.123 253665 DEBUG nova.network.neutron [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:45 compute-0 ceph-mon[75021]: pgmap v2365: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 132 op/s
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.656 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.656 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.659 253665 WARNING nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with vm_state active and task_state deleting.
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.773 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.773 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.791 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.954 253665 DEBUG nova.network.neutron [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:45 compute-0 nova_compute[253661]: 2025-11-22 09:40:45.968 253665 INFO nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 0.84 seconds to deallocate network for instance.
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.003 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.004 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.124 253665 DEBUG oslo_concurrency.processutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808430014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.586 253665 DEBUG oslo_concurrency.processutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.593 253665 DEBUG nova.compute.provider_tree [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.607 253665 DEBUG nova.scheduler.client.report [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.626 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.628 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.702 253665 INFO nova.scheduler.client.report [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c
Nov 22 09:40:46 compute-0 nova_compute[253661]: 2025-11-22 09:40:46.802 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162660923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.132 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.214 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.214 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.224 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.225 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.228 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.229 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:40:47 compute-0 ceph-mon[75021]: pgmap v2366: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 22 09:40:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3808430014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4162660923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.480 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2866MB free_disk=59.76681137084961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.540 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 44f1789d-14f7-46df-a863-e8c3c418f7f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7f88c3e8-e667-4d9a-8178-c99843560719 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance aaeb1088-1220-47e3-9462-ba96b1d4e87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.628 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:47 compute-0 nova_compute[253661]: 2025-11-22 09:40:47.833 253665 DEBUG nova.compute.manager [req-24485250-2905-4fa7-b77c-de4593b3ed04 req-89fa3920-df18-4be7-a866-ccc6d989605d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-deleted-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 22 09:40:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932120452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:48 compute-0 nova_compute[253661]: 2025-11-22 09:40:48.136 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:48 compute-0 nova_compute[253661]: 2025-11-22 09:40:48.142 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:48 compute-0 nova_compute[253661]: 2025-11-22 09:40:48.161 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:48 compute-0 nova_compute[253661]: 2025-11-22 09:40:48.177 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:40:48 compute-0 nova_compute[253661]: 2025-11-22 09:40:48.178 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:48 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/932120452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:49 compute-0 ceph-mon[75021]: pgmap v2367: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.929 253665 DEBUG nova.compute.manager [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG nova.compute.manager [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:49 compute-0 nova_compute[253661]: 2025-11-22 09:40:49.931 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.028 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.028 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.030 253665 INFO nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Terminating instance
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.032 253665 DEBUG nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 09:40:50 compute-0 kernel: tap83f684f5-d7 (unregistering): left promiscuous mode
Nov 22 09:40:50 compute-0 NetworkManager[48920]: <info>  [1763804450.0992] device (tap83f684f5-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01345|binding|INFO|Releasing lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 from this chassis (sb_readonly=0)
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01346|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 down in Southbound
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01347|binding|INFO|Removing iface tap83f684f5-d7 ovn-installed in OVS
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 kernel: tap454bebe0-52 (unregistering): left promiscuous mode
Nov 22 09:40:50 compute-0 NetworkManager[48920]: <info>  [1763804450.1294] device (tap454bebe0-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01348|binding|INFO|Releasing lport 454bebe0-5237-48cb-8cf5-10be46f6d33a from this chassis (sb_readonly=1)
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01349|binding|INFO|Removing iface tap454bebe0-52 ovn-installed in OVS
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01350|if_status|INFO|Dropped 2 log messages in last 117 seconds (most recently, 117 seconds ago) due to excessive rate
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01351|if_status|INFO|Not setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a down as sb is readonly
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.178 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.178 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:40:50 compute-0 ovn_controller[152872]: 2025-11-22T09:40:50Z|01352|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a down in Southbound
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.196 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:93:6c 10.100.0.14'], port_security=['fa:16:3e:29:93:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=83f684f5-d7e5-44a8-960d-efe4ce81e023) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.199 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 83f684f5-d7e5-44a8-960d-efe4ce81e023 in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 unbound from our chassis
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c632a2f9-994c-459c-8919-175e721d3714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 22 09:40:50 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007d.scope: Consumed 18.361s CPU time.
Nov 22 09:40:50 compute-0 systemd-machined[215941]: Machine qemu-156-instance-0000007d terminated.
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.247 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[104382f8-904d-438d-8ab7-3946c438fa37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.252 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf84dfb0-5bb9-45ad-a45f-9a4a73327a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], port_security=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0c:2b99/64 2001:db8::f816:3eff:fe0c:2b99/64', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=454bebe0-5237-48cb-8cf5-10be46f6d33a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.286 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a446e5d7-9488-472c-9804-091d97dc51d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.304 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a695ef2f-c366-4a75-996e-8d96ed377840]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383921, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[511e9399-f614-4408-adb0-9f71e8ad1b77]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721638, 'tstamp': 721638}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383922, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721641, 'tstamp': 721641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383922, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.331 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.332 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.333 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 454bebe0-5237-48cb-8cf5-10be46f6d33a in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.339 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.358 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20300581-df39-449b-8486-b6c39ee9c191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.396 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f0eb76-bf54-45f6-b2df-8b87d4b983ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.400 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[262a894a-8d8d-4079-ac36-57bd8b67894b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.439 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[634e46eb-12f5-4b94-9ac5-d4ee73a5c4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 NetworkManager[48920]: <info>  [1763804450.4627] manager: (tap454bebe0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.463 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a497838e-8189-4843-8f43-9c40c549066b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 22099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383931, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.483 253665 INFO nova.virt.libvirt.driver [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance destroyed successfully.
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.483 253665 DEBUG nova.objects.instance [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3709b3-de89-470e-a2d4-2bd9901026e2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20228844-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721733, 'tstamp': 721733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383946, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.493 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.497 253665 DEBUG nova.virt.libvirt.vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:11Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.497 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.498 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.499 253665 DEBUG os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f684f5-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.510 253665 INFO os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7')
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.511 253665 DEBUG nova.virt.libvirt.vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:11Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.511 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.512 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.513 253665 DEBUG os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap454bebe0-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.520 253665 INFO os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52')
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.946 253665 INFO nova.virt.libvirt.driver [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deleting instance files /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719_del
Nov 22 09:40:50 compute-0 nova_compute[253661]: 2025-11-22 09:40:50.947 253665 INFO nova.virt.libvirt.driver [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deletion of /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719_del complete
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.081 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.082 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.083 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.083 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.084 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.085 253665 INFO nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Terminating instance
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.087 253665 DEBUG nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:51 compute-0 kernel: tap01cd64f6-47 (unregistering): left promiscuous mode
Nov 22 09:40:51 compute-0 NetworkManager[48920]: <info>  [1763804451.1444] device (tap01cd64f6-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01353|binding|INFO|Releasing lport 01cd64f6-47ab-4640-ae46-6834065ff09b from this chassis (sb_readonly=0)
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01354|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b down in Southbound
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01355|binding|INFO|Removing iface tap01cd64f6-47 ovn-installed in OVS
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.174 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:50:06 10.100.0.5'], port_security=['fa:16:3e:3b:50:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01cd64f6-47ab-4640-ae46-6834065ff09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.175 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd64f6-47ab-4640-ae46-6834065ff09b in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.176 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.186 253665 INFO nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 1.15 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.186 253665 DEBUG oslo.service.loopingcall [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.187 253665 DEBUG nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.187 253665 DEBUG nova.network.neutron [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d25fb-83d7-4c46-b435-cc59981b07a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 22 09:40:51 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007e.scope: Consumed 15.575s CPU time.
Nov 22 09:40:51 compute-0 systemd-machined[215941]: Machine qemu-157-instance-0000007e terminated.
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.236 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[430c1c73-6b4c-4d38-9fa3-42d06c058b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.239 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[003befae-b771-4fb9-a4d6-6cb95521617f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.274 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1b3c6b-ae2b-44e7-acf8-2ffab8a786ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.300 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed34f90-49f6-4bd5-b31e-8223cac58e40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383983, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 ceph-mon[75021]: pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.325 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf464281-f691-47fa-857a-b7dab6438302]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722937, 'tstamp': 722937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383986, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722940, 'tstamp': 722940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383986, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.328 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.328 253665 INFO nova.virt.libvirt.driver [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance destroyed successfully.
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.329 253665 DEBUG nova.objects.instance [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.355 253665 DEBUG nova.virt.libvirt.vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.355 253665 DEBUG nova.network.os_vif_util [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.356 253665 DEBUG nova.network.os_vif_util [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.356 253665 DEBUG os_vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.357 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cd64f6-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.363 253665 INFO os_vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47')
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01356|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01357|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 09:40:51 compute-0 ovn_controller[152872]: 2025-11-22T09:40:51Z|01358|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.492 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.667 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.667 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.764 253665 INFO nova.virt.libvirt.driver [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deleting instance files /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a_del
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.766 253665 INFO nova.virt.libvirt.driver [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deletion of /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a_del complete
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.817 253665 INFO nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG oslo.service.loopingcall [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:51 compute-0 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG nova.network.neutron [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.076 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.077 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.096 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:40:52
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes']
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.480 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804437.4767847, d7865a13-0d41-44d6-aac2-10cca6e1348a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.481 253665 INFO nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Stopped (Lifecycle Event)
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.503 253665 DEBUG nova.compute.manager [None req-b4ed506e-f46f-49cd-90d7-fec8b90b07a4 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:40:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.768 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.768 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 WARNING nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with vm_state active and task_state deleting.
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:52 compute-0 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.245 253665 DEBUG nova.network.neutron [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.265 253665 INFO nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 1.45 seconds to deallocate network for instance.
Nov 22 09:40:53 compute-0 ceph-mon[75021]: pgmap v2369: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.321 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.322 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.401 253665 DEBUG nova.network.neutron [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.416 253665 INFO nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 2.23 seconds to deallocate network for instance.
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.453 253665 DEBUG oslo_concurrency.processutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.493 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.806 253665 DEBUG nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.807 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.807 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 DEBUG nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 WARNING nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received unexpected event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with vm_state deleted and task_state None.
Nov 22 09:40:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149363620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.975 253665 DEBUG oslo_concurrency.processutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:53 compute-0 nova_compute[253661]: 2025-11-22 09:40:53.983 253665 DEBUG nova.compute.provider_tree [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.003 253665 DEBUG nova.scheduler.client.report [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.033 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.037 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.2 MiB/s wr, 152 op/s
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.069 253665 INFO nova.scheduler.client.report [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance aaeb1088-1220-47e3-9462-ba96b1d4e87a
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.130 253665 DEBUG oslo_concurrency.processutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.166 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3149363620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37611534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.610 253665 DEBUG oslo_concurrency.processutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.615 253665 DEBUG nova.compute.provider_tree [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.632 253665 DEBUG nova.scheduler.client.report [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.668 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.700 253665 INFO nova.scheduler.client.report [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 7f88c3e8-e667-4d9a-8178-c99843560719
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.784 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.959 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.959 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 WARNING nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with vm_state deleted and task_state None.
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-deleted-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-deleted-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:54 compute-0 nova_compute[253661]: 2025-11-22 09:40:54.962 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-deleted-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:55 compute-0 ceph-mon[75021]: pgmap v2370: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.2 MiB/s wr, 152 op/s
Nov 22 09:40:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/37611534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:40:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:40:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:40:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:40:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.873 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.873 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.876 253665 INFO nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Terminating instance
Nov 22 09:40:55 compute-0 nova_compute[253661]: 2025-11-22 09:40:55.877 253665 DEBUG nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:55 compute-0 kernel: tap7381e299-12 (unregistering): left promiscuous mode
Nov 22 09:40:55 compute-0 NetworkManager[48920]: <info>  [1763804455.9367] device (tap7381e299-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01359|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=0)
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01360|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a down in Southbound
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01361|binding|INFO|Removing iface tap7381e299-12 ovn-installed in OVS
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.019 253665 DEBUG nova.compute.manager [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.020 253665 DEBUG nova.compute.manager [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.026 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.027 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.029 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.030 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd2fe73-587d-4ff2-9cbd-ea3497f6f07c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.031 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 namespace which is not needed anymore
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 618 KiB/s wr, 116 op/s
Nov 22 09:40:56 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 22 09:40:56 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007c.scope: Consumed 19.181s CPU time.
Nov 22 09:40:56 compute-0 systemd-machined[215941]: Machine qemu-155-instance-0000007c terminated.
Nov 22 09:40:56 compute-0 podman[384061]: 2025-11-22 09:40:56.122301979 +0000 UTC m=+0.064277301 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:40:56 compute-0 podman[384062]: 2025-11-22 09:40:56.140573795 +0000 UTC m=+0.102772861 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 09:40:56 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : haproxy version is 2.8.14-c23fe91
Nov 22 09:40:56 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : path to executable is /usr/sbin/haproxy
Nov 22 09:40:56 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [WARNING]  (381772) : Exiting Master process...
Nov 22 09:40:56 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [ALERT]    (381772) : Current worker (381774) exited with code 143 (Terminated)
Nov 22 09:40:56 compute-0 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [WARNING]  (381772) : All workers exited. Exiting... (0)
Nov 22 09:40:56 compute-0 systemd[1]: libpod-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope: Deactivated successfully.
Nov 22 09:40:56 compute-0 conmon[381768]: conmon 91a8862898354d729f7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope/container/memory.events
Nov 22 09:40:56 compute-0 podman[384116]: 2025-11-22 09:40:56.168636059 +0000 UTC m=+0.048907364 container died 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e0ca2159f1a5a63e92ff60160fdc469795b2978d3e0860bcbe102db9710cb41-merged.mount: Deactivated successfully.
Nov 22 09:40:56 compute-0 podman[384116]: 2025-11-22 09:40:56.214200152 +0000 UTC m=+0.094471457 container cleanup 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:40:56 compute-0 systemd[1]: libpod-conmon-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope: Deactivated successfully.
Nov 22 09:40:56 compute-0 podman[384145]: 2025-11-22 09:40:56.284006325 +0000 UTC m=+0.046916365 container remove 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e838e75-ae83-4b94-926f-6f6cd2224282]: (4, ('Sat Nov 22 09:40:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 (91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e)\n91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e\nSat Nov 22 09:40:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 (91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e)\n91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01231ce2-1b2c-41e4-8acc-4f49b9ee5156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.294 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 kernel: tap705357ee-10: left promiscuous mode
Nov 22 09:40:56 compute-0 kernel: tap7381e299-12: entered promiscuous mode
Nov 22 09:40:56 compute-0 kernel: tap7381e299-12 (unregistering): left promiscuous mode
Nov 22 09:40:56 compute-0 NetworkManager[48920]: <info>  [1763804456.3021] manager: (tap7381e299-12): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01362|binding|INFO|Claiming lport 7381e299-12bd-46ec-8abf-df35fe0bf48a for this chassis.
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01363|binding|INFO|7381e299-12bd-46ec-8abf-df35fe0bf48a: Claiming fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[96144c78-559d-4cb8-a90a-354c13bdc24a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.321 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.327 253665 INFO nova.virt.libvirt.driver [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance destroyed successfully.
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.328 253665 DEBUG nova.objects.instance [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.336 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbfd95b-5833-461d-bda0-09e82962ba84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01364|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a ovn-installed in OVS
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01365|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a up in Southbound
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01366|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=1)
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01367|binding|INFO|Removing iface tap7381e299-12 ovn-installed in OVS
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f02505ff-a6be-477d-94fe-c45bd61d0088]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01368|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=0)
Nov 22 09:40:56 compute-0 ovn_controller[152872]: 2025-11-22T09:40:56Z|01369|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a down in Southbound
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.342 253665 DEBUG nova.virt.libvirt.vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:39Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.343 253665 DEBUG nova.network.os_vif_util [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.343 253665 DEBUG nova.network.os_vif_util [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.344 253665 DEBUG os_vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.345 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7381e299-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.350 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abdb6fec-ea43-4fed-ab1f-84fa8c585778]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722913, 'reachable_time': 19787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384168, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d705357ee\x2d1033\x2d4907\x2d905f\x2dd41aa6dcfd73.mount: Deactivated successfully.
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.361 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.362 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b320ec7-5940-45ad-8ade-c8b26db05431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.363 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.365 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.366 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94ede78f-8b23-449c-afdc-c6883ddf1afb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.367 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.368 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.368 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7af7885-0958-4490-b3ce-23f178220653]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.369 253665 INFO os_vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12')
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:40:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.786 253665 INFO nova.virt.libvirt.driver [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deleting instance files /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3_del
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.787 253665 INFO nova.virt.libvirt.driver [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deletion of /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3_del complete
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.858 253665 INFO nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG oslo.service.loopingcall [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:56 compute-0 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG nova.network.neutron [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.097 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.099 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.100 253665 INFO nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Terminating instance
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.101 253665 DEBUG nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:40:57 compute-0 kernel: tap2a619e33-76 (unregistering): left promiscuous mode
Nov 22 09:40:57 compute-0 NetworkManager[48920]: <info>  [1763804457.1701] device (tap2a619e33-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01370|binding|INFO|Releasing lport 2a619e33-769d-4ebf-b212-40975e40d3ca from this chassis (sb_readonly=0)
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01371|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca down in Southbound
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01372|binding|INFO|Removing iface tap2a619e33-76 ovn-installed in OVS
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.188 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e0:cc 10.100.0.10'], port_security=['fa:16:3e:09:e0:cc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a619e33-769d-4ebf-b212-40975e40d3ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.189 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a619e33-769d-4ebf-b212-40975e40d3ca in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 unbound from our chassis
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.190 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33aa2b15-84be-4fa8-858f-98182293b1b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85710aad-abbc-45b8-9b8a-7b2c01905251]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.192 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 namespace which is not needed anymore
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 kernel: tap27382337-7f (unregistering): left promiscuous mode
Nov 22 09:40:57 compute-0 NetworkManager[48920]: <info>  [1763804457.2019] device (tap27382337-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01373|binding|INFO|Releasing lport 27382337-7fe1-4d29-942c-7735f8c98a06 from this chassis (sb_readonly=0)
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01374|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 down in Southbound
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_controller[152872]: 2025-11-22T09:40:57Z|01375|binding|INFO|Removing iface tap27382337-7f ovn-installed in OVS
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.225 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.230 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], port_security=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe81:ef0f/64 2001:db8::f816:3eff:fe81:ef0f/64', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27382337-7fe1-4d29-942c-7735f8c98a06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 22 09:40:57 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Consumed 17.770s CPU time.
Nov 22 09:40:57 compute-0 systemd-machined[215941]: Machine qemu-153-instance-0000007a terminated.
Nov 22 09:40:57 compute-0 NetworkManager[48920]: <info>  [1763804457.3287] manager: (tap2a619e33-76): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Nov 22 09:40:57 compute-0 ceph-mon[75021]: pgmap v2371: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 618 KiB/s wr, 116 op/s
Nov 22 09:40:57 compute-0 NetworkManager[48920]: <info>  [1763804457.3413] manager: (tap27382337-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : haproxy version is 2.8.14-c23fe91
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : path to executable is /usr/sbin/haproxy
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : Exiting Master process...
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : Exiting Master process...
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [ALERT]    (381020) : Current worker (381022) exited with code 143 (Terminated)
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : All workers exited. Exiting... (0)
Nov 22 09:40:57 compute-0 systemd[1]: libpod-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope: Deactivated successfully.
Nov 22 09:40:57 compute-0 podman[384214]: 2025-11-22 09:40:57.354178598 +0000 UTC m=+0.052388941 container died 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.355 253665 INFO nova.virt.libvirt.driver [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance destroyed successfully.
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.355 253665 DEBUG nova.objects.instance [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.374 253665 DEBUG nova.virt.libvirt.vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.375 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.376 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.376 253665 DEBUG os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.379 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a619e33-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733-userdata-shm.mount: Deactivated successfully.
Nov 22 09:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d80866c630f9de6dcc7211912df360c883a2569930a017e86eb8d48a712ac4e8-merged.mount: Deactivated successfully.
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.395 253665 INFO os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76')
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.396 253665 DEBUG nova.virt.libvirt.vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.397 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.398 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.398 253665 DEBUG os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:40:57 compute-0 podman[384214]: 2025-11-22 09:40:57.399002632 +0000 UTC m=+0.097212975 container cleanup 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27382337-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.405 253665 INFO os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f')
Nov 22 09:40:57 compute-0 systemd[1]: libpod-conmon-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope: Deactivated successfully.
Nov 22 09:40:57 compute-0 podman[384269]: 2025-11-22 09:40:57.486247621 +0000 UTC m=+0.059975615 container remove 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f6aceb-ba8b-4f47-a842-fcdbccb9aae7]: (4, ('Sat Nov 22 09:40:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 (2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733)\n2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733\nSat Nov 22 09:40:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 (2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733)\n2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.496 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce396e3b-2ff4-488e-a268-78dede27dab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.497 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 kernel: tap33aa2b15-80: left promiscuous mode
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.517 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c40d7100-6bb4-4f3d-a743-7066b2984d91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.532 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089fd739-39b4-47ab-ac55-761f40daa47e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.534 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af34d5f7-593b-46ba-a1a6-f035de2a2f82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f468303f-c78e-48fa-95ee-6b805b899462]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721616, 'reachable_time': 37303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384297, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d33aa2b15\x2d84be\x2d4fa8\x2d858f\x2d98182293b1b2.mount: Deactivated successfully.
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.556 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.556 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[566fcd69-3dcc-4df8-b6a2-603610bf087e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.558 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27382337-7fe1-4d29-942c-7735f8c98a06 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.559 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20228844-2184-465b-8bc3-e846cfb6d3cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.560 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1391dd-8871-41f3-921d-5862570ae2d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.560 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb namespace which is not needed anymore
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.650 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.652 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:40:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1466 writes, 7346 keys, 1466 commit groups, 1.0 writes per commit group, ingest: 9.07 MB, 0.02 MB/s
                                           Interval WAL: 1466 writes, 1466 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.8      1.21              0.21        33    0.037       0      0       0.0       0.0
                                             L6      1/0    7.92 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6     95.3     80.3      3.19              0.80        32    0.100    187K    17K       0.0       0.0
                                            Sum      1/0    7.92 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6     69.0     70.8      4.40              1.01        65    0.068    187K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    101.3     99.3      0.80              0.29        16    0.050     58K   4045       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     95.3     80.3      3.19              0.80        32    0.100    187K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.9      1.21              0.21        32    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.054, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.30 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 4.4 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 33.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000263 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2230,32.53 MB,10.7008%) FilterBlock(66,536.73 KB,0.172419%) IndexBlock(66,899.67 KB,0.289008%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.674 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : haproxy version is 2.8.14-c23fe91
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : path to executable is /usr/sbin/haproxy
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : Exiting Master process...
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : Exiting Master process...
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [ALERT]    (381093) : Current worker (381095) exited with code 143 (Terminated)
Nov 22 09:40:57 compute-0 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : All workers exited. Exiting... (0)
Nov 22 09:40:57 compute-0 systemd[1]: libpod-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope: Deactivated successfully.
Nov 22 09:40:57 compute-0 conmon[381088]: conmon fce4d4eaa218605777ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope/container/memory.events
Nov 22 09:40:57 compute-0 podman[384317]: 2025-11-22 09:40:57.713196861 +0000 UTC m=+0.051715834 container died fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a-userdata-shm.mount: Deactivated successfully.
Nov 22 09:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb633aced8fc5d6611d564ed19dc9e108a0bec8470c6c0ad7e16b832c8c4335b-merged.mount: Deactivated successfully.
Nov 22 09:40:57 compute-0 podman[384317]: 2025-11-22 09:40:57.758665931 +0000 UTC m=+0.097184894 container cleanup fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:40:57 compute-0 systemd[1]: libpod-conmon-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope: Deactivated successfully.
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.795 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.797 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:40:57 compute-0 podman[384344]: 2025-11-22 09:40:57.829884689 +0000 UTC m=+0.046046785 container remove fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.831 253665 INFO nova.virt.libvirt.driver [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deleting instance files /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54_del
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.832 253665 INFO nova.virt.libvirt.driver [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deletion of /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54_del complete
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.837 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0299e8-1ef4-407d-98dd-bd043b466d43]: (4, ('Sat Nov 22 09:40:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb (fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a)\nfce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a\nSat Nov 22 09:40:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb (fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a)\nfce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81ac0d9e-3ad2-48e7-a4f5-303bb34a83b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 kernel: tap20228844-20: left promiscuous mode
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b522ad64-91ee-49ea-908f-86fc9d639edf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03c60b60-1349-4436-9c62-d14d98ef69e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.876 253665 INFO nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG oslo.service.loopingcall [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:40:57 compute-0 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG nova.network.neutron [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c712acaa-d031-4b9b-b5ce-8e75c41a696d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfa1c66-0e53-47da-9f1a-205571a6b818]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721710, 'reachable_time': 22514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384360, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.907 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:40:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.907 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[be8494b8-306d-4ea5-b365-21b657dcb1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:40:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 175 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 619 KiB/s wr, 126 op/s
Nov 22 09:40:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d20228844\x2d2184\x2d465b\x2d8bc3\x2de846cfb6d3cb.mount: Deactivated successfully.
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.534 253665 DEBUG nova.compute.manager [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.534 253665 DEBUG nova.compute.manager [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.564 253665 DEBUG nova.network.neutron [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.589 253665 INFO nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 1.73 seconds to deallocate network for instance.
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.639 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.640 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:40:58 compute-0 nova_compute[253661]: 2025-11-22 09:40:58.706 253665 DEBUG oslo_concurrency.processutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:40:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:40:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:40:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741029208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.183 253665 DEBUG oslo_concurrency.processutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.191 253665 DEBUG nova.compute.provider_tree [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.211 253665 DEBUG nova.scheduler.client.report [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.235 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.266 253665 INFO nova.scheduler.client.report [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 44f1789d-14f7-46df-a863-e8c3c418f7f3
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.340 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:40:59 compute-0 ceph-mon[75021]: pgmap v2372: 305 pgs: 305 active+clean; 175 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 619 KiB/s wr, 126 op/s
Nov 22 09:40:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2741029208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.512 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804444.5116167, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.513 253665 INFO nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Stopped (Lifecycle Event)
Nov 22 09:40:59 compute-0 nova_compute[253661]: 2025-11-22 09:40:59.543 253665 DEBUG nova.compute.manager [None req-09d9529d-daaa-49ce-9677-2acfa98ef7ea - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.041 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 WARNING nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with vm_state active and task_state deleting.
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-deleted-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 65 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 113 KiB/s wr, 138 op/s
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.598 253665 DEBUG nova.network.neutron [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.625 253665 INFO nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 2.75 seconds to deallocate network for instance.
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.696 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.697 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.766 253665 DEBUG oslo_concurrency.processutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.811 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state deleted and task_state None.
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state deleted and task_state None.
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-deleted-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.816 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:00 compute-0 nova_compute[253661]: 2025-11-22 09:41:00.816 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 for instance with vm_state deleted and task_state None.
Nov 22 09:41:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116268201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.225 253665 DEBUG oslo_concurrency.processutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.228 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.228 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.234 253665 DEBUG nova.compute.provider_tree [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.246 253665 DEBUG nova.scheduler.client.report [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.250 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.267 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.298 253665 INFO nova.scheduler.client.report [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance ba0b1c52-c98b-4c2f-a213-e203719ada54
Nov 22 09:41:01 compute-0 ceph-mon[75021]: pgmap v2373: 305 pgs: 305 active+clean; 65 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 113 KiB/s wr, 138 op/s
Nov 22 09:41:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1116268201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.371 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:01 compute-0 podman[384405]: 2025-11-22 09:41:01.432549788 +0000 UTC m=+0.122450650 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:41:01 compute-0 nova_compute[253661]: 2025-11-22 09:41:01.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 23 KiB/s wr, 112 op/s
Nov 22 09:41:02 compute-0 nova_compute[253661]: 2025-11-22 09:41:02.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:41:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:41:02 compute-0 nova_compute[253661]: 2025-11-22 09:41:02.961 253665 DEBUG nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-deleted-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:02 compute-0 nova_compute[253661]: 2025-11-22 09:41:02.962 253665 INFO nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Neutron deleted interface 2a619e33-769d-4ebf-b212-40975e40d3ca; detaching it from the instance and deleting it from the info cache
Nov 22 09:41:02 compute-0 nova_compute[253661]: 2025-11-22 09:41:02.962 253665 DEBUG nova.network.neutron [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 22 09:41:02 compute-0 nova_compute[253661]: 2025-11-22 09:41:02.966 253665 DEBUG nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Detach interface failed, port_id=2a619e33-769d-4ebf-b212-40975e40d3ca, reason: Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:41:03 compute-0 ceph-mon[75021]: pgmap v2374: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 23 KiB/s wr, 112 op/s
Nov 22 09:41:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 21 KiB/s wr, 104 op/s
Nov 22 09:41:04 compute-0 nova_compute[253661]: 2025-11-22 09:41:04.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:04 compute-0 nova_compute[253661]: 2025-11-22 09:41:04.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:05 compute-0 ceph-mon[75021]: pgmap v2375: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 21 KiB/s wr, 104 op/s
Nov 22 09:41:05 compute-0 nova_compute[253661]: 2025-11-22 09:41:05.480 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804450.4791248, 7f88c3e8-e667-4d9a-8178-c99843560719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:05 compute-0 nova_compute[253661]: 2025-11-22 09:41:05.482 253665 INFO nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Stopped (Lifecycle Event)
Nov 22 09:41:05 compute-0 nova_compute[253661]: 2025-11-22 09:41:05.504 253665 DEBUG nova.compute.manager [None req-2c0dcb17-9bff-4c3d-bffe-dae980b54395 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 09:41:06 compute-0 nova_compute[253661]: 2025-11-22 09:41:06.326 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804451.3253665, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:06 compute-0 nova_compute[253661]: 2025-11-22 09:41:06.326 253665 INFO nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Stopped (Lifecycle Event)
Nov 22 09:41:06 compute-0 nova_compute[253661]: 2025-11-22 09:41:06.352 253665 DEBUG nova.compute.manager [None req-372409af-39b3-464b-b494-ca545dbad6ac - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:06 compute-0 nova_compute[253661]: 2025-11-22 09:41:06.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:07 compute-0 ceph-mon[75021]: pgmap v2376: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 09:41:07 compute-0 nova_compute[253661]: 2025-11-22 09:41:07.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 09:41:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:09 compute-0 ceph-mon[75021]: pgmap v2377: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 09:41:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 46 op/s
Nov 22 09:41:10 compute-0 nova_compute[253661]: 2025-11-22 09:41:10.883 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:10 compute-0 nova_compute[253661]: 2025-11-22 09:41:10.884 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:10 compute-0 nova_compute[253661]: 2025-11-22 09:41:10.900 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:41:10 compute-0 nova_compute[253661]: 2025-11-22 09:41:10.995 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:10 compute-0 nova_compute[253661]: 2025-11-22 09:41:10.996 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.004 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.005 253665 INFO nova.compute.claims [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.115 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.325 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804456.32309, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.325 253665 INFO nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Stopped (Lifecycle Event)
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.350 253665 DEBUG nova.compute.manager [None req-527332f5-65bf-4284-8a4f-6d809b5cfd08 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:11 compute-0 ceph-mon[75021]: pgmap v2378: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 46 op/s
Nov 22 09:41:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707255038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.607 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.614 253665 DEBUG nova.compute.provider_tree [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.629 253665 DEBUG nova.scheduler.client.report [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.664 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.665 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.725 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.726 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.746 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.764 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.850 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.851 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.852 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating image(s)
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.872 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.894 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.915 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:11 compute-0 nova_compute[253661]: 2025-11-22 09:41:11.920 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.003 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.004 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.005 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.005 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.025 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.028 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.317 253665 DEBUG nova.policy [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.331 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.357 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804457.352855, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.357 253665 INFO nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Stopped (Lifecycle Event)
Nov 22 09:41:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:41:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:41:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:41:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.393 253665 DEBUG nova.compute.manager [None req-25f0e0e4-c7cc-4981-a8db-415afb6614f4 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.399 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:41:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2707255038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:41:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.706 253665 DEBUG nova.objects.instance [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid de63fafb-9cce-47c5-8cdc-f5c348b1777a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:12 compute-0 sudo[384604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.725 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.726 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Ensure instance console log exists: /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:41:12 compute-0 sudo[384604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:12 compute-0 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:12 compute-0 sudo[384604]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:12 compute-0 sudo[384647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:41:12 compute-0 sudo[384647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:12 compute-0 sudo[384647]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:12 compute-0 sudo[384672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:12 compute-0 sudo[384672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:12 compute-0 sudo[384672]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:12 compute-0 sudo[384697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:41:12 compute-0 sudo[384697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:13 compute-0 sudo[384697]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9ace6b39-f32f-4971-9184-611c8252821f does not exist
Nov 22 09:41:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0dc3dfa4-747f-48f0-b2e4-2cc70cdfce42 does not exist
Nov 22 09:41:13 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 454c5f95-9c08-4aa6-8edb-413f3f3e52ea does not exist
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: pgmap v2379: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Nov 22 09:41:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:13 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:41:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:41:13 compute-0 sudo[384752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:13 compute-0 sudo[384752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:13 compute-0 sudo[384752]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:13 compute-0 sudo[384777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:41:13 compute-0 sudo[384777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:13 compute-0 sudo[384777]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.537 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Successfully updated port: a86218e5-015d-4324-b94e-b87b21f3333d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:41:13 compute-0 sudo[384802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:13 compute-0 sudo[384802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:13 compute-0 sudo[384802]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.637 253665 DEBUG nova.compute.manager [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.637 253665 DEBUG nova.compute.manager [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing instance network info cache due to event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.638 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:13 compute-0 sudo[384827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:41:13 compute-0 sudo[384827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:13 compute-0 nova_compute[253661]: 2025-11-22 09:41:13.701 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:41:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:13 compute-0 podman[384892]: 2025-11-22 09:41:13.993044212 +0000 UTC m=+0.047828939 container create c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:41:14 compute-0 systemd[1]: Started libpod-conmon-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope.
Nov 22 09:41:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:13.973175647 +0000 UTC m=+0.027960374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:14.067434138 +0000 UTC m=+0.122218865 container init c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:14.074261974 +0000 UTC m=+0.129046681 container start c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:14.077514614 +0000 UTC m=+0.132299321 container attach c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:41:14 compute-0 goofy_rosalind[384909]: 167 167
Nov 22 09:41:14 compute-0 systemd[1]: libpod-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope: Deactivated successfully.
Nov 22 09:41:14 compute-0 conmon[384909]: conmon c20ef0d8949d7ea1b749 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope/container/memory.events
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:14.080968408 +0000 UTC m=+0.135753115 container died c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82c79c9f7904ddfa669e4f159bfe8e6f08e92aaed3048656db6388d65b30d7d-merged.mount: Deactivated successfully.
Nov 22 09:41:14 compute-0 podman[384892]: 2025-11-22 09:41:14.120264717 +0000 UTC m=+0.175049434 container remove c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:41:14 compute-0 systemd[1]: libpod-conmon-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope: Deactivated successfully.
Nov 22 09:41:14 compute-0 podman[384933]: 2025-11-22 09:41:14.287238134 +0000 UTC m=+0.051551320 container create 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:41:14 compute-0 systemd[1]: Started libpod-conmon-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope.
Nov 22 09:41:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:14 compute-0 podman[384933]: 2025-11-22 09:41:14.358435721 +0000 UTC m=+0.122748927 container init 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 09:41:14 compute-0 podman[384933]: 2025-11-22 09:41:14.268861914 +0000 UTC m=+0.033175120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:14 compute-0 podman[384933]: 2025-11-22 09:41:14.36740335 +0000 UTC m=+0.131716536 container start 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.369 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:14 compute-0 podman[384933]: 2025-11-22 09:41:14.371568192 +0000 UTC m=+0.135881398 container attach 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.405 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.405 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance network_info: |[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.406 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.406 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.411 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start _get_guest_xml network_info=[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.416 253665 WARNING nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.424 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:41:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:41:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.427 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.438 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.440 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.440 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.445 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171995316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.890 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.921 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:14 compute-0 nova_compute[253661]: 2025-11-22 09:41:14.927 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220121214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.410 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.412 253665 DEBUG nova.virt.libvirt.vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-801609584',display_name='tempest-TestNetworkBasicOps-server-801609584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-801609584',id=127,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBR3KlC+I+BrQmt+UKktZu9qmsBf630tj2ls2EAmBhPrPHG9I1DIKGXJ13OKDXnaKtyixc97nbX6Fgi3vYqBPQ5wohq9YCdMs+5UaDa5kTzpHNni4MDhpWBjxoEExVT1mA==',key_name='tempest-TestNetworkBasicOps-1734152809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-091r93vu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:11Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=de63fafb-9cce-47c5-8cdc-f5c348b1777a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:41:15 compute-0 stoic_margulis[384949]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:41:15 compute-0 stoic_margulis[384949]: --> relative data size: 1.0
Nov 22 09:41:15 compute-0 stoic_margulis[384949]: --> All data devices are unavailable
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.413 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.415 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.417 253665 DEBUG nova.objects.instance [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid de63fafb-9cce-47c5-8cdc-f5c348b1777a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.429 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <uuid>de63fafb-9cce-47c5-8cdc-f5c348b1777a</uuid>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <name>instance-0000007f</name>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-801609584</nova:name>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:41:14</nova:creationTime>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <nova:port uuid="a86218e5-015d-4324-b94e-b87b21f3333d">
Nov 22 09:41:15 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <system>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="serial">de63fafb-9cce-47c5-8cdc-f5c348b1777a</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="uuid">de63fafb-9cce-47c5-8cdc-f5c348b1777a</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </system>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <os>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </os>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <features>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </features>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk">
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config">
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:15 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:9c:8b:9e"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <target dev="tapa86218e5-01"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/console.log" append="off"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <video>
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </video>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:41:15 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:41:15 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:41:15 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:41:15 compute-0 nova_compute[253661]: </domain>
Nov 22 09:41:15 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.430 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Preparing to wait for external event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.431 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.431 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.432 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.433 253665 DEBUG nova.virt.libvirt.vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-801609584',display_name='tempest-TestNetworkBasicOps-server-801609584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-801609584',id=127,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBR3KlC+I+BrQmt+UKktZu9qmsBf630tj2ls2EAmBhPrPHG9I1DIKGXJ13OKDXnaKtyixc97nbX6Fgi3vYqBPQ5wohq9YCdMs+5UaDa5kTzpHNni4MDhpWBjxoEExVT1mA==',key_name='tempest-TestNetworkBasicOps-1734152809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-091r93vu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:11Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=de63fafb-9cce-47c5-8cdc-f5c348b1777a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.433 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:15 compute-0 ceph-mon[75021]: pgmap v2380: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 09:41:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2171995316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/220121214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.434 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.434 253665 DEBUG os_vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.436 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.436 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa86218e5-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa86218e5-01, col_values=(('external_ids', {'iface-id': 'a86218e5-015d-4324-b94e-b87b21f3333d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:8b:9e', 'vm-uuid': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:15 compute-0 NetworkManager[48920]: <info>  [1763804475.4465] manager: (tapa86218e5-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:41:15 compute-0 systemd[1]: libpod-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Deactivated successfully.
Nov 22 09:41:15 compute-0 systemd[1]: libpod-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Consumed 1.018s CPU time.
Nov 22 09:41:15 compute-0 podman[384933]: 2025-11-22 09:41:15.450525328 +0000 UTC m=+1.214838514 container died 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.452 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.453 253665 INFO os_vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01')
Nov 22 09:41:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30-merged.mount: Deactivated successfully.
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:9c:8b:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.504 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Using config drive
Nov 22 09:41:15 compute-0 podman[384933]: 2025-11-22 09:41:15.511993149 +0000 UTC m=+1.276306335 container remove 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:41:15 compute-0 systemd[1]: libpod-conmon-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Deactivated successfully.
Nov 22 09:41:15 compute-0 nova_compute[253661]: 2025-11-22 09:41:15.527 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:15 compute-0 sudo[384827]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:15 compute-0 sudo[385074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:15 compute-0 sudo[385074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:15 compute-0 sudo[385074]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:15 compute-0 sudo[385099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:41:15 compute-0 sudo[385099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:15 compute-0 sudo[385099]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:15 compute-0 sudo[385124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:15 compute-0 sudo[385124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:15 compute-0 sudo[385124]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:15 compute-0 sudo[385149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:41:15 compute-0 sudo[385149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.124 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating config drive at /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.12778725 +0000 UTC m=+0.041605027 container create 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.129 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohgzvf82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:16 compute-0 systemd[1]: Started libpod-conmon-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope.
Nov 22 09:41:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.200637168 +0000 UTC m=+0.114454985 container init 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.109502104 +0000 UTC m=+0.023319911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.207877205 +0000 UTC m=+0.121694992 container start 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:41:16 compute-0 admiring_bhabha[385232]: 167 167
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.212151659 +0000 UTC m=+0.125969436 container attach 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:41:16 compute-0 systemd[1]: libpod-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope: Deactivated successfully.
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.212875807 +0000 UTC m=+0.126693594 container died 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b914034428f425237100a5134871ed82feda09b5a5cb67823ed1fc4ba6f219be-merged.mount: Deactivated successfully.
Nov 22 09:41:16 compute-0 podman[385214]: 2025-11-22 09:41:16.251625264 +0000 UTC m=+0.165443051 container remove 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 22 09:41:16 compute-0 systemd[1]: libpod-conmon-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope: Deactivated successfully.
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.271 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohgzvf82" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.294 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.297 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 podman[385275]: 2025-11-22 09:41:16.42148612 +0000 UTC m=+0.051865567 container create 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:41:16 compute-0 systemd[1]: Started libpod-conmon-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope.
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.464 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.464 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Deleting local config drive /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config because it was imported into RBD.
Nov 22 09:41:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:16 compute-0 podman[385275]: 2025-11-22 09:41:16.492839041 +0000 UTC m=+0.123218548 container init 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:41:16 compute-0 podman[385275]: 2025-11-22 09:41:16.40103219 +0000 UTC m=+0.031411687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:16 compute-0 podman[385275]: 2025-11-22 09:41:16.501970544 +0000 UTC m=+0.132349991 container start 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:41:16 compute-0 podman[385275]: 2025-11-22 09:41:16.505985552 +0000 UTC m=+0.136365009 container attach 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:41:16 compute-0 kernel: tapa86218e5-01: entered promiscuous mode
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.5231] manager: (tapa86218e5-01): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Nov 22 09:41:16 compute-0 ovn_controller[152872]: 2025-11-22T09:41:16Z|01376|binding|INFO|Claiming lport a86218e5-015d-4324-b94e-b87b21f3333d for this chassis.
Nov 22 09:41:16 compute-0 ovn_controller[152872]: 2025-11-22T09:41:16Z|01377|binding|INFO|a86218e5-015d-4324-b94e-b87b21f3333d: Claiming fa:16:3e:9c:8b:9e 10.100.0.7
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.545 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.546 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 bound to our chassis
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.548 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:16 compute-0 systemd-udevd[385326]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.562 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5675a80-c5c1-48e8-b083-43174f239fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.563 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1915d045-a1 in ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.566 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1915d045-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.566 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[556111d1-d26c-4cd3-97b8-ebadb53547cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b75e8e6-c35b-4a1f-968e-0a96bdca61af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 systemd-machined[215941]: New machine qemu-158-instance-0000007f.
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.5746] device (tapa86218e5-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.5754] device (tapa86218e5-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.586 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7560725c-2607-46cc-af1b-aaa727deed3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007f.
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 ovn_controller[152872]: 2025-11-22T09:41:16Z|01378|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d ovn-installed in OVS
Nov 22 09:41:16 compute-0 ovn_controller[152872]: 2025-11-22T09:41:16Z|01379|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d up in Southbound
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.609 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[050cd6b4-71bc-4113-aee4-e1be99ccaf61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.611 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updated VIF entry in instance network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.612 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.625 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.643 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f061489-1284-445d-a8f4-abf72e5d5054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.6573] manager: (tap1915d045-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8891ae50-4e1d-4b5f-a8af-4e241eccab31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.689 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75a9ace3-f2e7-478f-b97c-93f88d9e5462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.693 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3cb50d-2b12-4151-8b3c-47d9c69ea77b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.7168] device (tap1915d045-a0): carrier: link connected
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.721 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4a0360-5116-4f79-ae27-2c5c82b7b638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.736 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78178f2c-1c0c-41c3-8ef8-de2355669794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 391], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732745, 'reachable_time': 31586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385361, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.753 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b20a3114-4ffd-4050-a97d-13e0d2f658d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:8a9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732745, 'tstamp': 732745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385362, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14ba98a6-adc3-49c2-b9d2-ae40f76a2f4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 391], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732745, 'reachable_time': 31586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385363, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c3199-b5ce-4aa6-9457-bffce2607a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.865 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8af71a5-7802-455d-b54b-f3000a73ab72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.866 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1915d045-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.867 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.867 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1915d045-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 NetworkManager[48920]: <info>  [1763804476.9135] manager: (tap1915d045-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Nov 22 09:41:16 compute-0 kernel: tap1915d045-a0: entered promiscuous mode
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1915d045-a0, col_values=(('external_ids', {'iface-id': '3f753c2a-471e-42be-9ebf-5498238bbd2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 ovn_controller[152872]: 2025-11-22T09:41:16Z|01380|binding|INFO|Releasing lport 3f753c2a-471e-42be-9ebf-5498238bbd2c from this chassis (sb_readonly=0)
Nov 22 09:41:16 compute-0 nova_compute[253661]: 2025-11-22 09:41:16.935 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.936 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27440e6a-5f8b-4402-98f9-1c2bcd4f868b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.938 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:41:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.938 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'env', 'PROCESS_TAG=haproxy-1915d045-a483-4ba0-9f22-02eb1e398b68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1915d045-a483-4ba0-9f22-02eb1e398b68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804477.1225963, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.123 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Started (Lifecycle Event)
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.139 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.144 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804477.1228218, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.144 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Paused (Lifecycle Event)
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.160 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:17 compute-0 nova_compute[253661]: 2025-11-22 09:41:17.181 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:41:17 compute-0 funny_swirles[385310]: {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     "0": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "devices": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "/dev/loop3"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             ],
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_name": "ceph_lv0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_size": "21470642176",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "name": "ceph_lv0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "tags": {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_name": "ceph",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.crush_device_class": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.encrypted": "0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_id": "0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.vdo": "0"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             },
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "vg_name": "ceph_vg0"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         }
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     ],
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     "1": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "devices": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "/dev/loop4"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             ],
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_name": "ceph_lv1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_size": "21470642176",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "name": "ceph_lv1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "tags": {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_name": "ceph",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.crush_device_class": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.encrypted": "0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_id": "1",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.vdo": "0"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             },
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "vg_name": "ceph_vg1"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         }
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     ],
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     "2": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "devices": [
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "/dev/loop5"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             ],
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_name": "ceph_lv2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_size": "21470642176",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "name": "ceph_lv2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "tags": {
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.cluster_name": "ceph",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.crush_device_class": "",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.encrypted": "0",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osd_id": "2",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:                 "ceph.vdo": "0"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             },
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "type": "block",
Nov 22 09:41:17 compute-0 funny_swirles[385310]:             "vg_name": "ceph_vg2"
Nov 22 09:41:17 compute-0 funny_swirles[385310]:         }
Nov 22 09:41:17 compute-0 funny_swirles[385310]:     ]
Nov 22 09:41:17 compute-0 funny_swirles[385310]: }
Nov 22 09:41:17 compute-0 systemd[1]: libpod-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope: Deactivated successfully.
Nov 22 09:41:17 compute-0 podman[385437]: 2025-11-22 09:41:17.362014008 +0000 UTC m=+0.093776840 container create 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 09:41:17 compute-0 podman[385437]: 2025-11-22 09:41:17.294224202 +0000 UTC m=+0.025987074 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:41:17 compute-0 systemd[1]: Started libpod-conmon-4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85.scope.
Nov 22 09:41:17 compute-0 podman[385452]: 2025-11-22 09:41:17.398365355 +0000 UTC m=+0.024502970 container died 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:41:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b7b2a7a5a23716c4c3002911f51c4d5a41209bbe3eaadcfdb8becdce955832/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab-merged.mount: Deactivated successfully.
Nov 22 09:41:17 compute-0 ceph-mon[75021]: pgmap v2381: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 09:41:17 compute-0 podman[385437]: 2025-11-22 09:41:17.455491559 +0000 UTC m=+0.187254421 container init 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:41:17 compute-0 podman[385437]: 2025-11-22 09:41:17.462114261 +0000 UTC m=+0.193877093 container start 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:41:17 compute-0 podman[385452]: 2025-11-22 09:41:17.466232831 +0000 UTC m=+0.092370426 container remove 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:41:17 compute-0 systemd[1]: libpod-conmon-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope: Deactivated successfully.
Nov 22 09:41:17 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : New worker (385476) forked
Nov 22 09:41:17 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : Loading success.
Nov 22 09:41:17 compute-0 sudo[385149]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:17 compute-0 sudo[385485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:17 compute-0 sudo[385485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:17 compute-0 sudo[385485]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:17 compute-0 sudo[385510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:41:17 compute-0 sudo[385510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:17 compute-0 sudo[385510]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:17 compute-0 sudo[385535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:17 compute-0 sudo[385535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:17 compute-0 sudo[385535]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:17 compute-0 sudo[385560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:41:17 compute-0 sudo[385560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.120553783 +0000 UTC m=+0.069649281 container create 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:41:18 compute-0 systemd[1]: Started libpod-conmon-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope.
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.072739066 +0000 UTC m=+0.021834594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.205154148 +0000 UTC m=+0.154249646 container init 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.215408008 +0000 UTC m=+0.164503516 container start 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.21998376 +0000 UTC m=+0.169079328 container attach 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:41:18 compute-0 exciting_leavitt[385641]: 167 167
Nov 22 09:41:18 compute-0 systemd[1]: libpod-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope: Deactivated successfully.
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.222251336 +0000 UTC m=+0.171346844 container died 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5226e1b02dc803701ca7876db97f61c1b1065c3673c23cc6a65eed7fafed48b4-merged.mount: Deactivated successfully.
Nov 22 09:41:18 compute-0 podman[385625]: 2025-11-22 09:41:18.268121645 +0000 UTC m=+0.217217153 container remove 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:41:18 compute-0 systemd[1]: libpod-conmon-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope: Deactivated successfully.
Nov 22 09:41:18 compute-0 podman[385664]: 2025-11-22 09:41:18.417722327 +0000 UTC m=+0.023912894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:41:18 compute-0 podman[385664]: 2025-11-22 09:41:18.554248129 +0000 UTC m=+0.160438686 container create 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:41:18 compute-0 systemd[1]: Started libpod-conmon-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope.
Nov 22 09:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:18 compute-0 podman[385664]: 2025-11-22 09:41:18.641095979 +0000 UTC m=+0.247286556 container init 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:41:18 compute-0 podman[385664]: 2025-11-22 09:41:18.646423549 +0000 UTC m=+0.252614096 container start 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:41:18 compute-0 podman[385664]: 2025-11-22 09:41:18.649780601 +0000 UTC m=+0.255971158 container attach 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:41:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:19 compute-0 ceph-mon[75021]: pgmap v2382: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:41:19 compute-0 elegant_curran[385680]: {
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_id": 1,
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "type": "bluestore"
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     },
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_id": 0,
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "type": "bluestore"
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     },
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_id": 2,
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:41:19 compute-0 elegant_curran[385680]:         "type": "bluestore"
Nov 22 09:41:19 compute-0 elegant_curran[385680]:     }
Nov 22 09:41:19 compute-0 elegant_curran[385680]: }
Nov 22 09:41:19 compute-0 systemd[1]: libpod-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Deactivated successfully.
Nov 22 09:41:19 compute-0 systemd[1]: libpod-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Consumed 1.029s CPU time.
Nov 22 09:41:19 compute-0 podman[385664]: 2025-11-22 09:41:19.670220399 +0000 UTC m=+1.276410946 container died 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b-merged.mount: Deactivated successfully.
Nov 22 09:41:19 compute-0 podman[385664]: 2025-11-22 09:41:19.744001111 +0000 UTC m=+1.350191648 container remove 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:41:19 compute-0 systemd[1]: libpod-conmon-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Deactivated successfully.
Nov 22 09:41:19 compute-0 sudo[385560]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:41:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:41:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:19 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fe1d546c-0be4-4a9d-9c15-da9eedf7d7e5 does not exist
Nov 22 09:41:19 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 354cdb3b-33c6-4463-9e2a-17488542c0fb does not exist
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.825 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.826 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.840 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:41:19 compute-0 sudo[385726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:41:19 compute-0 sudo[385726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:19 compute-0 sudo[385726]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.915 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.916 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.929 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:41:19 compute-0 nova_compute[253661]: 2025-11-22 09:41:19.929 253665 INFO nova.compute.claims [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:41:19 compute-0 sudo[385751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:41:19 compute-0 sudo[385751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:41:19 compute-0 sudo[385751]: pam_unix(sudo:session): session closed for user root
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.045 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620661734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.506 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.513 253665 DEBUG nova.compute.provider_tree [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.529 253665 DEBUG nova.scheduler.client.report [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.559 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.560 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.609 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.609 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.624 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.642 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.723 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.724 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.725 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Creating image(s)
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.757 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.784 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:41:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2620661734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.834 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.840 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.950 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.951 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.952 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.952 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.978 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:20 compute-0 nova_compute[253661]: 2025-11-22 09:41:20.984 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.042 253665 DEBUG nova.policy [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.204 253665 DEBUG nova.compute.manager [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.205 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG nova.compute.manager [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Processing event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.208 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.216 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804481.2140331, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.217 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Resumed (Lifecycle Event)
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.223 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.229 253665 INFO nova.virt.libvirt.driver [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance spawned successfully.
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.232 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.251 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.254 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.254 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.261 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.321 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.357 253665 INFO nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 9.51 seconds to spawn the instance on the hypervisor.
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.358 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.390 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.402 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.504 253665 DEBUG nova.objects.instance [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid a20eee04-e3b6-4162-91f7-e6c92d8a07fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.612 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.612 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Ensure instance console log exists: /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.652 253665 INFO nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 10.70 seconds to build instance.
Nov 22 09:41:21 compute-0 nova_compute[253661]: 2025-11-22 09:41:21.674 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:21 compute-0 ceph-mon[75021]: pgmap v2383: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 107 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 39 op/s
Nov 22 09:41:22 compute-0 nova_compute[253661]: 2025-11-22 09:41:22.585 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Successfully created port: c9b1b309-4443-4694-8649-f59d1739cdaf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:23 compute-0 ceph-mon[75021]: pgmap v2384: 305 pgs: 305 active+clean; 107 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 39 op/s
Nov 22 09:41:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] No waiting events found dispatching network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:23 compute-0 nova_compute[253661]: 2025-11-22 09:41:23.966 253665 WARNING nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received unexpected event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with vm_state active and task_state None.
Nov 22 09:41:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 09:41:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.049 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:25 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.670 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Successfully updated port: c9b1b309-4443-4694-8649-f59d1739cdaf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.681 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:25 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.682 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:41:25 compute-0 ceph-mon[75021]: pgmap v2385: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 09:41:26 compute-0 nova_compute[253661]: 2025-11-22 09:41:25.999 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:41:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 22 09:41:26 compute-0 nova_compute[253661]: 2025-11-22 09:41:26.068 253665 DEBUG nova.compute.manager [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:26 compute-0 nova_compute[253661]: 2025-11-22 09:41:26.069 253665 DEBUG nova.compute.manager [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing instance network info cache due to event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:26 compute-0 nova_compute[253661]: 2025-11-22 09:41:26.069 253665 DEBUG oslo_concurrency.lockutils [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:26 compute-0 podman[385965]: 2025-11-22 09:41:26.367753274 +0000 UTC m=+0.062915647 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 09:41:26 compute-0 nova_compute[253661]: 2025-11-22 09:41:26.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:26 compute-0 podman[385966]: 2025-11-22 09:41:26.38233595 +0000 UTC m=+0.077482502 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.052 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:27 compute-0 NetworkManager[48920]: <info>  [1763804487.0553] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Nov 22 09:41:27 compute-0 NetworkManager[48920]: <info>  [1763804487.0569] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:27 compute-0 ovn_controller[152872]: 2025-11-22T09:41:27Z|01381|binding|INFO|Releasing lport 3f753c2a-471e-42be-9ebf-5498238bbd2c from this chassis (sb_readonly=0)
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.807 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:27 compute-0 ceph-mon[75021]: pgmap v2386: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.957 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance network_info: |[{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG oslo_concurrency.lockutils [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG nova.network.neutron [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.961 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start _get_guest_xml network_info=[{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.965 253665 WARNING nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.971 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.972 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.976 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.980 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:41:27 compute-0 nova_compute[253661]: 2025-11-22 09:41:27.982 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.185 253665 DEBUG nova.compute.manager [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.185 253665 DEBUG nova.compute.manager [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing instance network info cache due to event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG oslo_concurrency.lockutils [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG oslo_concurrency.lockutils [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG nova.network.neutron [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.391 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.391 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.393 253665 INFO nova.compute.manager [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Terminating instance
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.394 253665 DEBUG nova.compute.manager [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:41:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300531492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:28 compute-0 kernel: tapa86218e5-01 (unregistering): left promiscuous mode
Nov 22 09:41:28 compute-0 NetworkManager[48920]: <info>  [1763804488.4344] device (tapa86218e5-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.443 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:28 compute-0 ovn_controller[152872]: 2025-11-22T09:41:28Z|01382|binding|INFO|Releasing lport a86218e5-015d-4324-b94e-b87b21f3333d from this chassis (sb_readonly=0)
Nov 22 09:41:28 compute-0 ovn_controller[152872]: 2025-11-22T09:41:28Z|01383|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d down in Southbound
Nov 22 09:41:28 compute-0 ovn_controller[152872]: 2025-11-22T09:41:28Z|01384|binding|INFO|Removing iface tapa86218e5-01 ovn-installed in OVS
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.467 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.468 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 unbound from our chassis
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.469 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.470 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1915d045-a483-4ba0-9f22-02eb1e398b68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29596ad0-1853-46f0-9cb4-bb86bed5bac6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.473 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace which is not needed anymore
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.478 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:28 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 22 09:41:28 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007f.scope: Consumed 7.888s CPU time.
Nov 22 09:41:28 compute-0 systemd-machined[215941]: Machine qemu-158-instance-0000007f terminated.
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : haproxy version is 2.8.14-c23fe91
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : path to executable is /usr/sbin/haproxy
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [WARNING]  (385474) : Exiting Master process...
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [WARNING]  (385474) : Exiting Master process...
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [ALERT]    (385474) : Current worker (385476) exited with code 143 (Terminated)
Nov 22 09:41:28 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [WARNING]  (385474) : All workers exited. Exiting... (0)
Nov 22 09:41:28 compute-0 systemd[1]: libpod-4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85.scope: Deactivated successfully.
Nov 22 09:41:28 compute-0 podman[386069]: 2025-11-22 09:41:28.627463753 +0000 UTC m=+0.059600336 container died 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.642 253665 INFO nova.virt.libvirt.driver [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance destroyed successfully.
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.643 253665 DEBUG nova.objects.instance [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid de63fafb-9cce-47c5-8cdc-f5c348b1777a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85-userdata-shm.mount: Deactivated successfully.
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.658 253665 DEBUG nova.virt.libvirt.vif [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:41:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-801609584',display_name='tempest-TestNetworkBasicOps-server-801609584',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-801609584',id=127,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBR3KlC+I+BrQmt+UKktZu9qmsBf630tj2ls2EAmBhPrPHG9I1DIKGXJ13OKDXnaKtyixc97nbX6Fgi3vYqBPQ5wohq9YCdMs+5UaDa5kTzpHNni4MDhpWBjxoEExVT1mA==',key_name='tempest-TestNetworkBasicOps-1734152809',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-091r93vu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:41:21Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=de63fafb-9cce-47c5-8cdc-f5c348b1777a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.659 253665 DEBUG nova.network.os_vif_util [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.660 253665 DEBUG nova.network.os_vif_util [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.661 253665 DEBUG os_vif [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b7b2a7a5a23716c4c3002911f51c4d5a41209bbe3eaadcfdb8becdce955832-merged.mount: Deactivated successfully.
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.663 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa86218e5-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:28 compute-0 podman[386069]: 2025-11-22 09:41:28.667982312 +0000 UTC m=+0.100118895 container cleanup 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:28 compute-0 rsyslogd[1005]: imjournal from <np0005532048:systemd>: begin to drop messages due to rate-limiting
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.714 253665 INFO os_vif [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01')
Nov 22 09:41:28 compute-0 systemd[1]: libpod-conmon-4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85.scope: Deactivated successfully.
Nov 22 09:41:28 compute-0 podman[386125]: 2025-11-22 09:41:28.781428731 +0000 UTC m=+0.046629580 container remove 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.789 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e540e09-38fd-4420-812b-12ebabc60fc1]: (4, ('Sat Nov 22 09:41:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 (4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85)\n4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85\nSat Nov 22 09:41:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 (4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85)\n4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.791 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc91489-0e1b-4bca-9f37-0753c8cf6286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.793 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1915d045-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:28 compute-0 kernel: tap1915d045-a0: left promiscuous mode
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3685d66e-e57c-43a2-b670-1f52f33b52d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.828 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4773e25c-27cc-4def-a44e-a4964a0d02f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[80418415-9fcd-44e6-9895-9ddd70ba52dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.847 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[236cfe0d-43c0-4b59-a541-68bf4befd6b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732737, 'reachable_time': 36594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386158, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.850 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:41:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.850 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd9129e-e8f5-475b-8d90-6720dd142a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d1915d045\x2da483\x2d4ba0\x2d9f22\x2d02eb1e398b68.mount: Deactivated successfully.
Nov 22 09:41:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2300531492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225513990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.987 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.989 253665 DEBUG nova.virt.libvirt.vif [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=128,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeIQlod6SLlJs6wXaW/tOBT0OwlVoa1NblddrZfa0iZcCCy3VKsljRdo05hr0OlgVcOZCWvarkr0J6owEiNG90xD3zssKyelcoT/ASloY4ZMoGABuSF90CB1rG7SqL//Q==',key_name='tempest-TestSecurityGroupsBasicOps-469285974',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-0s5qomfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:20Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=a20eee04-e3b6-4162-91f7-e6c92d8a07fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.989 253665 DEBUG nova.network.os_vif_util [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.990 253665 DEBUG nova.network.os_vif_util [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:28 compute-0 nova_compute[253661]: 2025-11-22 09:41:28.991 253665 DEBUG nova.objects.instance [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid a20eee04-e3b6-4162-91f7-e6c92d8a07fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.004 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <uuid>a20eee04-e3b6-4162-91f7-e6c92d8a07fa</uuid>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <name>instance-00000080</name>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437</nova:name>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:41:27</nova:creationTime>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <nova:port uuid="c9b1b309-4443-4694-8649-f59d1739cdaf">
Nov 22 09:41:29 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <system>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="serial">a20eee04-e3b6-4162-91f7-e6c92d8a07fa</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="uuid">a20eee04-e3b6-4162-91f7-e6c92d8a07fa</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </system>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <os>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </os>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <features>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </features>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk">
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config">
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:b5:31:0e"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <target dev="tapc9b1b309-44"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/console.log" append="off"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <video>
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </video>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:41:29 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:41:29 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:41:29 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:41:29 compute-0 nova_compute[253661]: </domain>
Nov 22 09:41:29 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.005 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Preparing to wait for external event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.006 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.006 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.006 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.007 253665 DEBUG nova.virt.libvirt.vif [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=128,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeIQlod6SLlJs6wXaW/tOBT0OwlVoa1NblddrZfa0iZcCCy3VKsljRdo05hr0OlgVcOZCWvarkr0J6owEiNG90xD3zssKyelcoT/ASloY4ZMoGABuSF90CB1rG7SqL//Q==',key_name='tempest-TestSecurityGroupsBasicOps-469285974',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-0s5qomfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:20Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=a20eee04-e3b6-4162-91f7-e6c92d8a07fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.007 253665 DEBUG nova.network.os_vif_util [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.008 253665 DEBUG nova.network.os_vif_util [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.008 253665 DEBUG os_vif [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.013 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9b1b309-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.013 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9b1b309-44, col_values=(('external_ids', {'iface-id': 'c9b1b309-4443-4694-8649-f59d1739cdaf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:31:0e', 'vm-uuid': 'a20eee04-e3b6-4162-91f7-e6c92d8a07fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:29 compute-0 NetworkManager[48920]: <info>  [1763804489.0158] manager: (tapc9b1b309-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.020 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.021 253665 INFO os_vif [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44')
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.078 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.079 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.079 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:b5:31:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.080 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Using config drive
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.104 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.116 253665 INFO nova.virt.libvirt.driver [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Deleting instance files /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a_del
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.117 253665 INFO nova.virt.libvirt.driver [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Deletion of /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a_del complete
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.200 253665 INFO nova.compute.manager [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.200 253665 DEBUG oslo.service.loopingcall [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.201 253665 DEBUG nova.compute.manager [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:41:29 compute-0 nova_compute[253661]: 2025-11-22 09:41:29.201 253665 DEBUG nova.network.neutron [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:41:29 compute-0 ceph-mon[75021]: pgmap v2387: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Nov 22 09:41:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2225513990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.087 253665 DEBUG nova.network.neutron [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updated VIF entry in instance network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.088 253665 DEBUG nova.network.neutron [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.109 253665 DEBUG oslo_concurrency.lockutils [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.254 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Creating config drive at /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.260 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkxvhfxv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.347 253665 DEBUG nova.compute.manager [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.348 253665 DEBUG oslo_concurrency.lockutils [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.348 253665 DEBUG oslo_concurrency.lockutils [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.348 253665 DEBUG oslo_concurrency.lockutils [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.348 253665 DEBUG nova.compute.manager [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] No waiting events found dispatching network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.349 253665 DEBUG nova.compute.manager [req-b91f3548-d392-499c-8bcd-78a4f671f34a req-8a597cd3-feec-4a84-b605-a96203e4bd1d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.417 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkxvhfxv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.444 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.448 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.618 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.619 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Deleting local config drive /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/disk.config because it was imported into RBD.
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.654 253665 DEBUG nova.network.neutron [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updated VIF entry in instance network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.655 253665 DEBUG nova.network.neutron [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.669 253665 DEBUG oslo_concurrency.lockutils [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:30 compute-0 kernel: tapc9b1b309-44: entered promiscuous mode
Nov 22 09:41:30 compute-0 systemd-udevd[386048]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:41:30 compute-0 NetworkManager[48920]: <info>  [1763804490.6823] manager: (tapc9b1b309-44): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Nov 22 09:41:30 compute-0 ovn_controller[152872]: 2025-11-22T09:41:30Z|01385|binding|INFO|Claiming lport c9b1b309-4443-4694-8649-f59d1739cdaf for this chassis.
Nov 22 09:41:30 compute-0 ovn_controller[152872]: 2025-11-22T09:41:30Z|01386|binding|INFO|c9b1b309-4443-4694-8649-f59d1739cdaf: Claiming fa:16:3e:b5:31:0e 10.100.0.4
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.683 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.690 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:0e 10.100.0.4'], port_security=['fa:16:3e:b5:31:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a20eee04-e3b6-4162-91f7-e6c92d8a07fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67335e4a-26f1-458f-8b59-73c6186dbd75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e683f2c-bc31-4f76-b401-2b59e8ba1209 21f9656a-2fe6-497f-894e-5ee9ca5cd27c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387fb749-1edb-467d-93c7-5991046afa48, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c9b1b309-4443-4694-8649-f59d1739cdaf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.691 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c9b1b309-4443-4694-8649-f59d1739cdaf in datapath 67335e4a-26f1-458f-8b59-73c6186dbd75 bound to our chassis
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.692 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67335e4a-26f1-458f-8b59-73c6186dbd75
Nov 22 09:41:30 compute-0 NetworkManager[48920]: <info>  [1763804490.6977] device (tapc9b1b309-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:41:30 compute-0 NetworkManager[48920]: <info>  [1763804490.6990] device (tapc9b1b309-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:41:30 compute-0 ovn_controller[152872]: 2025-11-22T09:41:30Z|01387|binding|INFO|Setting lport c9b1b309-4443-4694-8649-f59d1739cdaf ovn-installed in OVS
Nov 22 09:41:30 compute-0 ovn_controller[152872]: 2025-11-22T09:41:30Z|01388|binding|INFO|Setting lport c9b1b309-4443-4694-8649-f59d1739cdaf up in Southbound
Nov 22 09:41:30 compute-0 nova_compute[253661]: 2025-11-22 09:41:30.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47bd266a-18c0-4dbd-90ef-4224d041b382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.709 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67335e4a-21 in ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.712 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67335e4a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2770c5b6-b807-4393-9c81-82a60341f015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62992c89-1200-4ff3-b5e3-a36aaab13957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.728 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2af66820-2835-4df8-aa9c-771223bf469b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 systemd-machined[215941]: New machine qemu-159-instance-00000080.
Nov 22 09:41:30 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-00000080.
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec95fe4-8ecb-4d90-803a-91c23e1f9d0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.784 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6f5615-7209-449e-9c7a-81ee023030e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d36d1ce8-b5a5-4af2-b13b-1a682d4c8a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 NetworkManager[48920]: <info>  [1763804490.7917] manager: (tap67335e4a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.830 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[291aedac-2891-4ce8-8b3d-ddd13344435c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.834 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dafa60a3-31fd-4a32-9237-a85ebe24a911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 NetworkManager[48920]: <info>  [1763804490.8647] device (tap67335e4a-20): carrier: link connected
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.872 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6e934a-c914-4c64-bf66-713a88c396d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95720480-38bc-4a6f-b919-d80a4c3a2a87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67335e4a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:eb:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734160, 'reachable_time': 37427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386265, 'error': None, 'target': 'ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.915 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d392630-e874-4c8e-9163-9a2bb1542b92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:eb72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734160, 'tstamp': 734160}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386266, 'error': None, 'target': 'ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.940 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a2f4f7-4d37-4fd0-a610-f65a6fae6567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67335e4a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:eb:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734160, 'reachable_time': 37427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386267, 'error': None, 'target': 'ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:30.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccda8f8-b770-4b41-9f39-57e2f83a5a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.045 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a234a376-d2d6-44f2-8647-500163c88751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.046 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67335e4a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.047 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.047 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67335e4a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:31 compute-0 kernel: tap67335e4a-20: entered promiscuous mode
Nov 22 09:41:31 compute-0 NetworkManager[48920]: <info>  [1763804491.0515] manager: (tap67335e4a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.052 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67335e4a-20, col_values=(('external_ids', {'iface-id': '20c489ca-96e1-48b1-ae7d-4329fe15be09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:31 compute-0 ovn_controller[152872]: 2025-11-22T09:41:31Z|01389|binding|INFO|Releasing lport 20c489ca-96e1-48b1-ae7d-4329fe15be09 from this chassis (sb_readonly=0)
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.068 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67335e4a-26f1-458f-8b59-73c6186dbd75.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67335e4a-26f1-458f-8b59-73c6186dbd75.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.069 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdcee84-c955-453c-b20a-e0fcdcc4252a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.069 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-67335e4a-26f1-458f-8b59-73c6186dbd75
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/67335e4a-26f1-458f-8b59-73c6186dbd75.pid.haproxy
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 67335e4a-26f1-458f-8b59-73c6186dbd75
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:41:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:31.070 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75', 'env', 'PROCESS_TAG=haproxy-67335e4a-26f1-458f-8b59-73c6186dbd75', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67335e4a-26f1-458f-8b59-73c6186dbd75.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:31 compute-0 podman[386300]: 2025-11-22 09:41:31.487740872 +0000 UTC m=+0.052720238 container create 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:41:31 compute-0 systemd[1]: Started libpod-conmon-210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd.scope.
Nov 22 09:41:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:31 compute-0 podman[386300]: 2025-11-22 09:41:31.461392319 +0000 UTC m=+0.026371475 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be87e79a00608a7885618162faead6dff3026b7c8c21800786f95acf01a4a2b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:31 compute-0 podman[386300]: 2025-11-22 09:41:31.578347744 +0000 UTC m=+0.143326890 container init 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:41:31 compute-0 podman[386300]: 2025-11-22 09:41:31.584617837 +0000 UTC m=+0.149596973 container start 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:41:31 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [NOTICE]   (386373) : New worker (386380) forked
Nov 22 09:41:31 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [NOTICE]   (386373) : Loading success.
Nov 22 09:41:31 compute-0 podman[386331]: 2025-11-22 09:41:31.638888011 +0000 UTC m=+0.105766422 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.660 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804491.660348, a20eee04-e3b6-4162-91f7-e6c92d8a07fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.661 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] VM Started (Lifecycle Event)
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.676 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.680 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804491.660525, a20eee04-e3b6-4162-91f7-e6c92d8a07fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.680 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] VM Paused (Lifecycle Event)
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.696 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.698 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.712 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.884 253665 DEBUG nova.network.neutron [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.901 253665 INFO nova.compute.manager [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 2.70 seconds to deallocate network for instance.
Nov 22 09:41:31 compute-0 ceph-mon[75021]: pgmap v2388: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.941 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:31 compute-0 nova_compute[253661]: 2025-11-22 09:41:31.942 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 111 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.161 253665 DEBUG oslo_concurrency.processutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.457 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.458 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.458 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.459 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.459 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] No waiting events found dispatching network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.459 253665 WARNING nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received unexpected event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with vm_state deleted and task_state None.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.459 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.460 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.460 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.460 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.461 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Processing event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.461 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.461 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.461 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.462 253665 DEBUG oslo_concurrency.lockutils [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.462 253665 DEBUG nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] No waiting events found dispatching network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.462 253665 WARNING nova.compute.manager [req-426dd053-2ed2-41df-8891-47a4ef93c11a req-43dade4c-51a9-4f02-9d47-b01ee804640f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received unexpected event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf for instance with vm_state building and task_state spawning.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.463 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.487 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804492.4665563, a20eee04-e3b6-4162-91f7-e6c92d8a07fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.490 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] VM Resumed (Lifecycle Event)
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.494 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.509 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.517 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.520 253665 INFO nova.virt.libvirt.driver [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance spawned successfully.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.521 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.552 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.570 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.571 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.571 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.572 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.572 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.573 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735887552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.633 253665 INFO nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Took 11.91 seconds to spawn the instance on the hypervisor.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.635 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.656 253665 DEBUG oslo_concurrency.processutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.663 253665 DEBUG nova.compute.provider_tree [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.702 253665 DEBUG nova.scheduler.client.report [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.718 253665 INFO nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Took 12.83 seconds to build instance.
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.730 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.737 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.754 253665 INFO nova.scheduler.client.report [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance de63fafb-9cce-47c5-8cdc-f5c348b1777a
Nov 22 09:41:32 compute-0 nova_compute[253661]: 2025-11-22 09:41:32.824 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/735887552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:33 compute-0 ceph-mon[75021]: pgmap v2389: 305 pgs: 305 active+clean; 111 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 09:41:34 compute-0 nova_compute[253661]: 2025-11-22 09:41:34.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 154 op/s
Nov 22 09:41:35 compute-0 nova_compute[253661]: 2025-11-22 09:41:35.472 253665 DEBUG nova.compute.manager [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:35 compute-0 nova_compute[253661]: 2025-11-22 09:41:35.472 253665 DEBUG nova.compute.manager [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing instance network info cache due to event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:35 compute-0 nova_compute[253661]: 2025-11-22 09:41:35.472 253665 DEBUG oslo_concurrency.lockutils [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:35 compute-0 nova_compute[253661]: 2025-11-22 09:41:35.472 253665 DEBUG oslo_concurrency.lockutils [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:35 compute-0 nova_compute[253661]: 2025-11-22 09:41:35.473 253665 DEBUG nova.network.neutron [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:35 compute-0 ceph-mon[75021]: pgmap v2390: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.0 MiB/s wr, 154 op/s
Nov 22 09:41:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 79 op/s
Nov 22 09:41:36 compute-0 nova_compute[253661]: 2025-11-22 09:41:36.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:37 compute-0 nova_compute[253661]: 2025-11-22 09:41:37.259 253665 DEBUG nova.network.neutron [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updated VIF entry in instance network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:37 compute-0 nova_compute[253661]: 2025-11-22 09:41:37.260 253665 DEBUG nova.network.neutron [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:37 compute-0 nova_compute[253661]: 2025-11-22 09:41:37.345 253665 DEBUG oslo_concurrency.lockutils [req-a64c40a8-46dd-4af5-8906-3fe8d6565623 req-5669964a-b336-4b77-a23b-447e096e50f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:37 compute-0 ceph-mon[75021]: pgmap v2391: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 79 op/s
Nov 22 09:41:37 compute-0 nova_compute[253661]: 2025-11-22 09:41:37.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 109 op/s
Nov 22 09:41:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:39 compute-0 nova_compute[253661]: 2025-11-22 09:41:39.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:39 compute-0 ceph-mon[75021]: pgmap v2392: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 109 op/s
Nov 22 09:41:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:41:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:41.223 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:46:6c 2001:db8:0:1:f816:3eff:fe90:466c 2001:db8::f816:3eff:fe90:466c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe90:466c/64 2001:db8::f816:3eff:fe90:466c/64', 'neutron:device_id': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28e6ccec-e6eb-4baf-91d9-14c9f27dcba7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef3b4cae-0bef-4b3d-812b-86120687d0c9) old=Port_Binding(mac=['fa:16:3e:90:46:6c 2001:db8::f816:3eff:fe90:466c'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe90:466c/64', 'neutron:device_id': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:41.225 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef3b4cae-0bef-4b3d-812b-86120687d0c9 in datapath 14f6914f-16a1-4223-85f8-aa4fada62acd updated
Nov 22 09:41:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:41.227 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f6914f-16a1-4223-85f8-aa4fada62acd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:41:41 compute-0 nova_compute[253661]: 2025-11-22 09:41:41.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:41 compute-0 nova_compute[253661]: 2025-11-22 09:41:41.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:41:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:41.228 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc979ed-d4d7-40a2-8b68-9e84fee2a536]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:41 compute-0 nova_compute[253661]: 2025-11-22 09:41:41.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:41:41 compute-0 nova_compute[253661]: 2025-11-22 09:41:41.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:41 compute-0 ceph-mon[75021]: pgmap v2393: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:41:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.339 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.340 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.369 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.463 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.464 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.473 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.474 253665 INFO nova.compute.claims [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:41:42 compute-0 nova_compute[253661]: 2025-11-22 09:41:42.604 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682883429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.056 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.063 253665 DEBUG nova.compute.provider_tree [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.081 253665 DEBUG nova.scheduler.client.report [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.191 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.192 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.424 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.425 253665 DEBUG nova.network.neutron [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.463 253665 INFO nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.483 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.632 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.634 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.634 253665 INFO nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Creating image(s)
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.658 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.683 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.712 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.716 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.754 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804488.6368399, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.755 253665 INFO nova.compute.manager [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Stopped (Lifecycle Event)
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.790 253665 DEBUG nova.compute.manager [None req-5351d51c-56f3-4f2b-bdff-756d4c0034cb - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.801 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.802 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.803 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.803 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.830 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:43 compute-0 nova_compute[253661]: 2025-11-22 09:41:43.834 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:43 compute-0 ceph-mon[75021]: pgmap v2394: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:41:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3682883429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.018 253665 DEBUG nova.policy [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:41:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.153 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.220 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.250 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.313 253665 DEBUG nova.objects.instance [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.328 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.329 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Ensure instance console log exists: /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.329 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.330 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:44 compute-0 nova_compute[253661]: 2025-11-22 09:41:44.330 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:45 compute-0 ovn_controller[152872]: 2025-11-22T09:41:45Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:31:0e 10.100.0.4
Nov 22 09:41:45 compute-0 ovn_controller[152872]: 2025-11-22T09:41:45Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:31:0e 10.100.0.4
Nov 22 09:41:45 compute-0 nova_compute[253661]: 2025-11-22 09:41:45.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:45 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 22 09:41:45 compute-0 ceph-mon[75021]: pgmap v2395: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 22 09:41:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 341 B/s wr, 41 op/s
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.531 253665 DEBUG nova.network.neutron [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Successfully updated port: a86218e5-015d-4324-b94e-b87b21f3333d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.550 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.551 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.551 253665 DEBUG nova.network.neutron [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.692 253665 DEBUG nova.compute.manager [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.693 253665 DEBUG nova.compute.manager [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Refreshing instance network info cache due to event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.693 253665 DEBUG oslo_concurrency.lockutils [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:46 compute-0 nova_compute[253661]: 2025-11-22 09:41:46.774 253665 DEBUG nova.network.neutron [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:41:47 compute-0 ceph-mon[75021]: pgmap v2396: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 341 B/s wr, 41 op/s
Nov 22 09:41:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 105 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 54 op/s
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.240 253665 DEBUG nova.network.neutron [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.323 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.324 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Instance network_info: |[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.325 253665 DEBUG oslo_concurrency.lockutils [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.326 253665 DEBUG nova.network.neutron [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Refreshing network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.332 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Start _get_guest_xml network_info=[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.338 253665 WARNING nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.348 253665 DEBUG nova.virt.libvirt.host [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.349 253665 DEBUG nova.virt.libvirt.host [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.352 253665 DEBUG nova.virt.libvirt.host [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.353 253665 DEBUG nova.virt.libvirt.host [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.353 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.354 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.354 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.355 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.355 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.355 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.355 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.356 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.356 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.356 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.357 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.357 253665 DEBUG nova.virt.hardware [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.361 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088068090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.709 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.778 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.778 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:41:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553109343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.838 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.860 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.864 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.927 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.928 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:48 compute-0 nova_compute[253661]: 2025-11-22 09:41:48.947 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:41:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2088068090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3553109343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.022 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.023 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3458MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.024 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.024 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.118 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a20eee04-e3b6-4162-91f7-e6c92d8a07fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.119 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.121 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.139 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d749b709-79b8-40b6-8c2e-4d301bdc8e67 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.140 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.140 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.159 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.186 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.187 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.213 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.234 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.286 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:41:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856060110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.335 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.337 253665 DEBUG nova.virt.libvirt.vif [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-112382128',display_name='tempest-TestNetworkBasicOps-server-112382128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-112382128',id=129,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPXLPQL0g0CJBWRQqCU3qdweb6Q7x5vuVMn0c9T3/VExcdeyXztbfqLW2NYzsgoiSVCEW4IKTBdlrFO0j/9bRjPFM+ttmxZRRnod54gXEQDfgeBR42stS/1W43jolMLhwQ==',key_name='tempest-TestNetworkBasicOps-1723054139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bpng50ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=2e7b4cf2-c2c2-4df1-aa03-550a07687b4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.338 253665 DEBUG nova.network.os_vif_util [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.340 253665 DEBUG nova.network.os_vif_util [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.342 253665 DEBUG nova.objects.instance [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.360 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <uuid>2e7b4cf2-c2c2-4df1-aa03-550a07687b4e</uuid>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <name>instance-00000081</name>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-112382128</nova:name>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:41:48</nova:creationTime>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <nova:port uuid="a86218e5-015d-4324-b94e-b87b21f3333d">
Nov 22 09:41:49 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <system>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="serial">2e7b4cf2-c2c2-4df1-aa03-550a07687b4e</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="uuid">2e7b4cf2-c2c2-4df1-aa03-550a07687b4e</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </system>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <os>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </os>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <features>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </features>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk">
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config">
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </source>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:41:49 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:9c:8b:9e"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <target dev="tapa86218e5-01"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/console.log" append="off"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <video>
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </video>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:41:49 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:41:49 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:41:49 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:41:49 compute-0 nova_compute[253661]: </domain>
Nov 22 09:41:49 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.362 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Preparing to wait for external event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.362 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.363 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.363 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.364 253665 DEBUG nova.virt.libvirt.vif [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-112382128',display_name='tempest-TestNetworkBasicOps-server-112382128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-112382128',id=129,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPXLPQL0g0CJBWRQqCU3qdweb6Q7x5vuVMn0c9T3/VExcdeyXztbfqLW2NYzsgoiSVCEW4IKTBdlrFO0j/9bRjPFM+ttmxZRRnod54gXEQDfgeBR42stS/1W43jolMLhwQ==',key_name='tempest-TestNetworkBasicOps-1723054139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bpng50ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=2e7b4cf2-c2c2-4df1-aa03-550a07687b4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.364 253665 DEBUG nova.network.os_vif_util [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.365 253665 DEBUG nova.network.os_vif_util [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.365 253665 DEBUG os_vif [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.366 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.367 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.370 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa86218e5-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.371 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa86218e5-01, col_values=(('external_ids', {'iface-id': 'a86218e5-015d-4324-b94e-b87b21f3333d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:8b:9e', 'vm-uuid': '2e7b4cf2-c2c2-4df1-aa03-550a07687b4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:49 compute-0 NetworkManager[48920]: <info>  [1763804509.3736] manager: (tapa86218e5-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.380 253665 INFO os_vif [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01')
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.432 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.432 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.433 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:9c:8b:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.433 253665 INFO nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Using config drive
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.455 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901278203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.737 253665 DEBUG nova.network.neutron [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Updated VIF entry in instance network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.738 253665 DEBUG nova.network.neutron [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.756 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.759 253665 DEBUG oslo_concurrency.lockutils [req-eaa903bb-8af8-4c99-a5f2-40683915dd77 req-0d0e64cb-67a3-43f3-880a-5234858360be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.765 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.774 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.794 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.794 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.795 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.802 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.803 253665 INFO nova.compute.claims [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.807 253665 INFO nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Creating config drive at /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.813 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmid_a075 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.960 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmid_a075" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.982 253665 DEBUG nova.storage.rbd_utils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:49 compute-0 nova_compute[253661]: 2025-11-22 09:41:49.985 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:50 compute-0 ceph-mon[75021]: pgmap v2397: 305 pgs: 305 active+clean; 105 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 54 op/s
Nov 22 09:41:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2856060110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:41:50 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3901278203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.051 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.136 253665 DEBUG oslo_concurrency.processutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.137 253665 INFO nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Deleting local config drive /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e/disk.config because it was imported into RBD.
Nov 22 09:41:50 compute-0 kernel: tapa86218e5-01: entered promiscuous mode
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.1785] manager: (tapa86218e5-01): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Nov 22 09:41:50 compute-0 ovn_controller[152872]: 2025-11-22T09:41:50Z|01390|binding|INFO|Claiming lport a86218e5-015d-4324-b94e-b87b21f3333d for this chassis.
Nov 22 09:41:50 compute-0 ovn_controller[152872]: 2025-11-22T09:41:50Z|01391|binding|INFO|a86218e5-015d-4324-b94e-b87b21f3333d: Claiming fa:16:3e:9c:8b:9e 10.100.0.7
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.198 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2e7b4cf2-c2c2-4df1-aa03-550a07687b4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 bound to our chassis
Nov 22 09:41:50 compute-0 ovn_controller[152872]: 2025-11-22T09:41:50Z|01392|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d ovn-installed in OVS
Nov 22 09:41:50 compute-0 ovn_controller[152872]: 2025-11-22T09:41:50Z|01393|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d up in Southbound
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.202 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.204 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 systemd-udevd[386806]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.216 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc51ad4-afcf-4eab-9d50-b7d76c790330]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.218 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1915d045-a1 in ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.220 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1915d045-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8241bbf1-07c4-4e11-bf7a-abb6bdfc92a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff0ac34-1228-416a-aced-17077c9f6a13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.2277] device (tapa86218e5-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.2288] device (tapa86218e5-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.232 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1aced7ed-df43-4729-aac1-3a221baca9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 systemd-machined[215941]: New machine qemu-160-instance-00000081.
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.255 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d1f616-96c3-4976-ab14-b4161ae311df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000081.
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.295 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9192bc7c-80f1-4688-97b6-166b55835433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ae563d-0b2e-44a8-8715-2fa7aead72e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.3011] manager: (tap1915d045-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/563)
Nov 22 09:41:50 compute-0 systemd-udevd[386813]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.332 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c474ecfd-c1f4-4b50-b06d-faa2cae11466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.339 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd342065-f419-4c30-b149-2a7e9ec099a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.3640] device (tap1915d045-a0): carrier: link connected
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[56a1fe19-b09c-410d-bc1e-8bd368fbdae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.394 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[831701cf-740f-4a40-8371-70d413ef92f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736110, 'reachable_time': 39675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386842, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.412 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[913f7f48-8791-4910-a559-71301c6bbe33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:8a9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736110, 'tstamp': 736110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386843, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.430 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0592355-a4d6-4daa-8205-0a0bad907158]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736110, 'reachable_time': 39675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386844, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.462 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19f63d32-1bc5-4514-a947-429b0f0d0f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:41:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015858314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.519 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.529 253665 DEBUG nova.compute.provider_tree [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.535 253665 DEBUG nova.compute.manager [req-01336dd0-7189-45ba-88e1-414f980b9a6d req-a24f4f90-0253-4f28-8e9a-426f047912cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.536 253665 DEBUG oslo_concurrency.lockutils [req-01336dd0-7189-45ba-88e1-414f980b9a6d req-a24f4f90-0253-4f28-8e9a-426f047912cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.536 253665 DEBUG oslo_concurrency.lockutils [req-01336dd0-7189-45ba-88e1-414f980b9a6d req-a24f4f90-0253-4f28-8e9a-426f047912cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.536 253665 DEBUG oslo_concurrency.lockutils [req-01336dd0-7189-45ba-88e1-414f980b9a6d req-a24f4f90-0253-4f28-8e9a-426f047912cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.536 253665 DEBUG nova.compute.manager [req-01336dd0-7189-45ba-88e1-414f980b9a6d req-a24f4f90-0253-4f28-8e9a-426f047912cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Processing event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.534 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fceec0b0-932d-43f3-a290-287e1bf58648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.539 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1915d045-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.539 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.540 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1915d045-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 NetworkManager[48920]: <info>  [1763804510.5431] manager: (tap1915d045-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Nov 22 09:41:50 compute-0 kernel: tap1915d045-a0: entered promiscuous mode
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.549 253665 DEBUG nova.scheduler.client.report [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1915d045-a0, col_values=(('external_ids', {'iface-id': '3f753c2a-471e-42be-9ebf-5498238bbd2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:50 compute-0 ovn_controller[152872]: 2025-11-22T09:41:50Z|01394|binding|INFO|Releasing lport 3f753c2a-471e-42be-9ebf-5498238bbd2c from this chassis (sb_readonly=0)
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.555 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a312c0a-ca30-4b2b-b6a4-f636f17afc31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.557 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:41:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:50.558 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'env', 'PROCESS_TAG=haproxy-1915d045-a483-4ba0-9f22-02eb1e398b68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1915d045-a483-4ba0-9f22-02eb1e398b68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.582 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.583 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.625 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.626 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.639 253665 INFO nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.651 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.759 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.760 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.761 253665 INFO nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Creating image(s)
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.789 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.815 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.836 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.839 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.879 253665 DEBUG nova.policy [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.884 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.884 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.917 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.918 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.919 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.920 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:50 compute-0 podman[386933]: 2025-11-22 09:41:50.938862787 +0000 UTC m=+0.057712000 container create 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.952 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:41:50 compute-0 nova_compute[253661]: 2025-11-22 09:41:50.959 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:41:50 compute-0 systemd[1]: Started libpod-conmon-45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e.scope.
Nov 22 09:41:50 compute-0 podman[386933]: 2025-11-22 09:41:50.905064922 +0000 UTC m=+0.023914145 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:41:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1015858314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:41:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee93aea549ad190e39628cd3f43254de3bf2fcc126e71564c38dd104a62ecde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:41:51 compute-0 podman[386933]: 2025-11-22 09:41:51.042139447 +0000 UTC m=+0.160988670 container init 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:41:51 compute-0 podman[386933]: 2025-11-22 09:41:51.05042831 +0000 UTC m=+0.169277503 container start 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:41:51 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [NOTICE]   (387004) : New worker (387008) forked
Nov 22 09:41:51 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [NOTICE]   (387004) : Loading success.
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.218 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.221 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804511.2176793, 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.221 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] VM Started (Lifecycle Event)
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.225 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.230 253665 INFO nova.virt.libvirt.driver [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Instance spawned successfully.
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.230 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.248 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.256 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.256 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.257 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.257 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.257 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.258 253665 DEBUG nova.virt.libvirt.driver [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.261 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.261 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.333 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.334 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804511.2178576, 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.334 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] VM Paused (Lifecycle Event)
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.342 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.378 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.381 253665 INFO nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Took 7.75 seconds to spawn the instance on the hypervisor.
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.381 253665 DEBUG nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.385 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804511.224154, 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.386 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] VM Resumed (Lifecycle Event)
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.459 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.462 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.467 253665 DEBUG nova.objects.instance [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid d749b709-79b8-40b6-8c2e-4d301bdc8e67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.498 253665 INFO nova.compute.manager [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Took 9.07 seconds to build instance.
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.503 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.504 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Ensure instance console log exists: /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.504 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.504 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.505 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:51 compute-0 nova_compute[253661]: 2025-11-22 09:41:51.546 253665 DEBUG oslo_concurrency.lockutils [None req-07c31c4d-72b1-4481-bf0f-f3d84ac99023 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:52 compute-0 ceph-mon[75021]: pgmap v2398: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 185 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 5.0 MiB/s wr, 97 op/s
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:41:52
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes']
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.647 253665 DEBUG nova.compute.manager [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.647 253665 DEBUG oslo_concurrency.lockutils [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.647 253665 DEBUG oslo_concurrency.lockutils [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.648 253665 DEBUG oslo_concurrency.lockutils [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.648 253665 DEBUG nova.compute.manager [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] No waiting events found dispatching network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:52 compute-0 nova_compute[253661]: 2025-11-22 09:41:52.648 253665 WARNING nova.compute.manager [req-5e354bfc-db4d-46f9-825b-97fd273c7e34 req-b4a575e5-3837-4ad0-9a26-ec9700bcf73b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received unexpected event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with vm_state active and task_state None.
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:41:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:41:53 compute-0 nova_compute[253661]: 2025-11-22 09:41:53.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:54 compute-0 ceph-mon[75021]: pgmap v2399: 305 pgs: 305 active+clean; 185 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 5.0 MiB/s wr, 97 op/s
Nov 22 09:41:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 167 op/s
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.365 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Successfully created port: 868d876c-3d4f-4618-aedd-e1ce97d50ae9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.374 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.966 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.967 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.967 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.967 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.967 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.969 253665 INFO nova.compute.manager [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Terminating instance
Nov 22 09:41:54 compute-0 nova_compute[253661]: 2025-11-22 09:41:54.970 253665 DEBUG nova.compute.manager [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:41:55 compute-0 kernel: tapa86218e5-01 (unregistering): left promiscuous mode
Nov 22 09:41:55 compute-0 NetworkManager[48920]: <info>  [1763804515.0779] device (tapa86218e5-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 ovn_controller[152872]: 2025-11-22T09:41:55Z|01395|binding|INFO|Releasing lport a86218e5-015d-4324-b94e-b87b21f3333d from this chassis (sb_readonly=0)
Nov 22 09:41:55 compute-0 ovn_controller[152872]: 2025-11-22T09:41:55Z|01396|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d down in Southbound
Nov 22 09:41:55 compute-0 ovn_controller[152872]: 2025-11-22T09:41:55Z|01397|binding|INFO|Removing iface tapa86218e5-01 ovn-installed in OVS
Nov 22 09:41:55 compute-0 ceph-mon[75021]: pgmap v2400: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 167 op/s
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.110 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2e7b4cf2-c2c2-4df1-aa03-550a07687b4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.112 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 unbound from our chassis
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.113 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1915d045-a483-4ba0-9f22-02eb1e398b68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.115 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad12b965-1408-4228-90f4-5b3b52d209a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.115 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace which is not needed anymore
Nov 22 09:41:55 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 22 09:41:55 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000081.scope: Consumed 4.812s CPU time.
Nov 22 09:41:55 compute-0 systemd-machined[215941]: Machine qemu-160-instance-00000081 terminated.
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.250 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:41:55 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [NOTICE]   (387004) : haproxy version is 2.8.14-c23fe91
Nov 22 09:41:55 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [NOTICE]   (387004) : path to executable is /usr/sbin/haproxy
Nov 22 09:41:55 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [ALERT]    (387004) : Current worker (387008) exited with code 143 (Terminated)
Nov 22 09:41:55 compute-0 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[386969]: [WARNING]  (387004) : All workers exited. Exiting... (0)
Nov 22 09:41:55 compute-0 systemd[1]: libpod-45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e.scope: Deactivated successfully.
Nov 22 09:41:55 compute-0 conmon[386969]: conmon 45ebdc91a2e9abf7e83f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e.scope/container/memory.events
Nov 22 09:41:55 compute-0 podman[387139]: 2025-11-22 09:41:55.268617105 +0000 UTC m=+0.052649967 container died 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.292 253665 INFO nova.virt.libvirt.driver [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Instance destroyed successfully.
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.296 253665 DEBUG nova.objects.instance [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee93aea549ad190e39628cd3f43254de3bf2fcc126e71564c38dd104a62ecde-merged.mount: Deactivated successfully.
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.310 253665 DEBUG nova.virt.libvirt.vif [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-112382128',display_name='tempest-TestNetworkBasicOps-server-112382128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-112382128',id=129,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPXLPQL0g0CJBWRQqCU3qdweb6Q7x5vuVMn0c9T3/VExcdeyXztbfqLW2NYzsgoiSVCEW4IKTBdlrFO0j/9bRjPFM+ttmxZRRnod54gXEQDfgeBR42stS/1W43jolMLhwQ==',key_name='tempest-TestNetworkBasicOps-1723054139',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:41:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bpng50ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:41:51Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=2e7b4cf2-c2c2-4df1-aa03-550a07687b4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.312 253665 DEBUG nova.network.os_vif_util [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.313 253665 DEBUG nova.network.os_vif_util [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.313 253665 DEBUG os_vif [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.315 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa86218e5-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.321 253665 INFO os_vif [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01')
Nov 22 09:41:55 compute-0 podman[387139]: 2025-11-22 09:41:55.324515539 +0000 UTC m=+0.108548401 container cleanup 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:41:55 compute-0 systemd[1]: libpod-conmon-45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e.scope: Deactivated successfully.
Nov 22 09:41:55 compute-0 podman[387191]: 2025-11-22 09:41:55.411058912 +0000 UTC m=+0.058280474 container remove 45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.419 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98ea0b7e-ae8c-4931-9dcf-f87acf92edd2]: (4, ('Sat Nov 22 09:41:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 (45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e)\n45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e\nSat Nov 22 09:41:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 (45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e)\n45ebdc91a2e9abf7e83f17bca82d2df9c53da38fc3755716b5f3d54390c29f9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be25230b-ec7b-4b4b-beec-aff71967c68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.424 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1915d045-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 kernel: tap1915d045-a0: left promiscuous mode
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.444 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d067992a-1197-49be-9b10-326b5660c89e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.462 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e27e584-c043-4e43-8ca4-af01bc0eaf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.463 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c3d3c1-03f1-4d58-a4a6-58bf46fe570e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.486 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98810419-caaf-4ec4-b746-6c8a1740f1f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736102, 'reachable_time': 16919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387209, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d1915d045\x2da483\x2d4ba0\x2d9f22\x2d02eb1e398b68.mount: Deactivated successfully.
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.492 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:41:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:55.493 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[efd1af3e-a1e8-4b4f-961e-68127877caed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:41:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:41:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:41:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:41:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.896 253665 INFO nova.virt.libvirt.driver [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Deleting instance files /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_del
Nov 22 09:41:55 compute-0 nova_compute[253661]: 2025-11-22 09:41:55.897 253665 INFO nova.virt.libvirt.driver [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Deletion of /var/lib/nova/instances/2e7b4cf2-c2c2-4df1-aa03-550a07687b4e_del complete
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 159 op/s
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.172 253665 INFO nova.compute.manager [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Took 1.20 seconds to destroy the instance on the hypervisor.
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.172 253665 DEBUG oslo.service.loopingcall [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.173 253665 DEBUG nova.compute.manager [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.173 253665 DEBUG nova.network.neutron [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.331 253665 DEBUG nova.compute.manager [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.331 253665 DEBUG oslo_concurrency.lockutils [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.332 253665 DEBUG oslo_concurrency.lockutils [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.332 253665 DEBUG oslo_concurrency.lockutils [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.332 253665 DEBUG nova.compute.manager [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] No waiting events found dispatching network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.332 253665 DEBUG nova.compute.manager [req-1b55e595-e1be-4692-9ac0-e57cf8827bea req-6964acc3-51e9-41f1-adf8-e9b6c378952c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-vif-unplugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:41:56 compute-0 nova_compute[253661]: 2025-11-22 09:41:56.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:41:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:41:57 compute-0 ceph-mon[75021]: pgmap v2401: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 159 op/s
Nov 22 09:41:57 compute-0 podman[387211]: 2025-11-22 09:41:57.372866428 +0000 UTC m=+0.062440156 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:41:57 compute-0 podman[387212]: 2025-11-22 09:41:57.378437194 +0000 UTC m=+0.066763971 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:41:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 202 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.080 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Successfully created port: 005f7c99-8e6b-4818-9749-1360f814f253 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.443 253665 DEBUG nova.compute.manager [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.444 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.444 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.445 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.445 253665 DEBUG nova.compute.manager [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] No waiting events found dispatching network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.446 253665 WARNING nova.compute.manager [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Received unexpected event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with vm_state active and task_state deleting.
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.446 253665 DEBUG nova.compute.manager [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.447 253665 DEBUG nova.compute.manager [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing instance network info cache due to event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.448 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.448 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.448 253665 DEBUG nova.network.neutron [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.757 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.758 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.758 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.759 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.759 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.760 253665 INFO nova.compute.manager [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Terminating instance
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.761 253665 DEBUG nova.compute.manager [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:41:58 compute-0 kernel: tapc9b1b309-44 (unregistering): left promiscuous mode
Nov 22 09:41:58 compute-0 NetworkManager[48920]: <info>  [1763804518.8253] device (tapc9b1b309-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:41:58 compute-0 ovn_controller[152872]: 2025-11-22T09:41:58Z|01398|binding|INFO|Releasing lport c9b1b309-4443-4694-8649-f59d1739cdaf from this chassis (sb_readonly=0)
Nov 22 09:41:58 compute-0 ovn_controller[152872]: 2025-11-22T09:41:58Z|01399|binding|INFO|Setting lport c9b1b309-4443-4694-8649-f59d1739cdaf down in Southbound
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.888 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:58 compute-0 ovn_controller[152872]: 2025-11-22T09:41:58Z|01400|binding|INFO|Removing iface tapc9b1b309-44 ovn-installed in OVS
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:58.931 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:0e 10.100.0.4'], port_security=['fa:16:3e:b5:31:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a20eee04-e3b6-4162-91f7-e6c92d8a07fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67335e4a-26f1-458f-8b59-73c6186dbd75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e683f2c-bc31-4f76-b401-2b59e8ba1209 21f9656a-2fe6-497f-894e-5ee9ca5cd27c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387fb749-1edb-467d-93c7-5991046afa48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c9b1b309-4443-4694-8649-f59d1739cdaf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:41:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:58.932 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c9b1b309-4443-4694-8649-f59d1739cdaf in datapath 67335e4a-26f1-458f-8b59-73c6186dbd75 unbound from our chassis
Nov 22 09:41:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:58.934 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67335e4a-26f1-458f-8b59-73c6186dbd75, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:41:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:58.935 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[811e94cc-0075-41ce-8acd-72b10cf8c156]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:58.935 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75 namespace which is not needed anymore
Nov 22 09:41:58 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 22 09:41:58 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000080.scope: Consumed 14.254s CPU time.
Nov 22 09:41:58 compute-0 systemd-machined[215941]: Machine qemu-159-instance-00000080 terminated.
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:58 compute-0 nova_compute[253661]: 2025-11-22 09:41:58.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.000 253665 INFO nova.virt.libvirt.driver [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance destroyed successfully.
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.001 253665 DEBUG nova.objects.instance [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid a20eee04-e3b6-4162-91f7-e6c92d8a07fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.015 253665 DEBUG nova.virt.libvirt.vif [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:41:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1195722437',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=128,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeIQlod6SLlJs6wXaW/tOBT0OwlVoa1NblddrZfa0iZcCCy3VKsljRdo05hr0OlgVcOZCWvarkr0J6owEiNG90xD3zssKyelcoT/ASloY4ZMoGABuSF90CB1rG7SqL//Q==',key_name='tempest-TestSecurityGroupsBasicOps-469285974',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-0s5qomfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:41:32Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=a20eee04-e3b6-4162-91f7-e6c92d8a07fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.016 253665 DEBUG nova.network.os_vif_util [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.017 253665 DEBUG nova.network.os_vif_util [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.017 253665 DEBUG os_vif [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.019 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9b1b309-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.020 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.024 253665 INFO os_vif [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:0e,bridge_name='br-int',has_traffic_filtering=True,id=c9b1b309-4443-4694-8649-f59d1739cdaf,network=Network(67335e4a-26f1-458f-8b59-73c6186dbd75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b1b309-44')
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [NOTICE]   (386373) : haproxy version is 2.8.14-c23fe91
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [NOTICE]   (386373) : path to executable is /usr/sbin/haproxy
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [WARNING]  (386373) : Exiting Master process...
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [WARNING]  (386373) : Exiting Master process...
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [ALERT]    (386373) : Current worker (386380) exited with code 143 (Terminated)
Nov 22 09:41:59 compute-0 neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75[386350]: [WARNING]  (386373) : All workers exited. Exiting... (0)
Nov 22 09:41:59 compute-0 systemd[1]: libpod-210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd.scope: Deactivated successfully.
Nov 22 09:41:59 compute-0 podman[387281]: 2025-11-22 09:41:59.078810079 +0000 UTC m=+0.052501582 container died 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd-userdata-shm.mount: Deactivated successfully.
Nov 22 09:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-be87e79a00608a7885618162faead6dff3026b7c8c21800786f95acf01a4a2b1-merged.mount: Deactivated successfully.
Nov 22 09:41:59 compute-0 ceph-mon[75021]: pgmap v2402: 305 pgs: 305 active+clean; 202 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Nov 22 09:41:59 compute-0 podman[387281]: 2025-11-22 09:41:59.123651763 +0000 UTC m=+0.097343266 container cleanup 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:41:59 compute-0 systemd[1]: libpod-conmon-210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd.scope: Deactivated successfully.
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.194 253665 DEBUG nova.network.neutron [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:41:59 compute-0 podman[387327]: 2025-11-22 09:41:59.195087227 +0000 UTC m=+0.049991041 container remove 210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.202 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9c8da5-f101-47bb-b338-b2ed7dabc787]: (4, ('Sat Nov 22 09:41:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75 (210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd)\n210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd\nSat Nov 22 09:41:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75 (210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd)\n210cf711dec7e28b90b827a6b1fbc853cbfa4daa16cd671a04da3afe8a841fcd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.203 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a36683f2-79b4-4ae0-b004-6f1b5a93fd85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.205 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67335e4a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 kernel: tap67335e4a-20: left promiscuous mode
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6d4760-fefc-4c0a-b3aa-3c4cd864012c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.240 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c814065a-69b9-4a7c-b472-cd38fb919052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.241 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e351112a-ed2a-4738-86cb-80fc20a0dd71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.244 253665 INFO nova.compute.manager [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Took 3.07 seconds to deallocate network for instance.
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.256 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc68f7f-1c53-4936-b561-c75d14b75ee2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734151, 'reachable_time': 27588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387345, 'error': None, 'target': 'ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.258 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67335e4a-26f1-458f-8b59-73c6186dbd75 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:41:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:41:59.258 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7950a81b-e8d9-42b7-a1f2-419cf58ab57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:41:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d67335e4a\x2d26f1\x2d458f\x2d8b59\x2d73c6186dbd75.mount: Deactivated successfully.
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.621 253665 INFO nova.virt.libvirt.driver [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Deleting instance files /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa_del
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.622 253665 INFO nova.virt.libvirt.driver [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Deletion of /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa_del complete
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.747 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.748 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.819 253665 INFO nova.compute.manager [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Took 1.06 seconds to destroy the instance on the hypervisor.
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.821 253665 DEBUG oslo.service.loopingcall [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.821 253665 DEBUG nova.compute.manager [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.821 253665 DEBUG nova.network.neutron [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:41:59 compute-0 nova_compute[253661]: 2025-11-22 09:41:59.870 253665 DEBUG oslo_concurrency.processutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 167 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 197 op/s
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.282 253665 DEBUG nova.network.neutron [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updated VIF entry in instance network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.283 253665 DEBUG nova.network.neutron [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/318276610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.342 253665 DEBUG oslo_concurrency.processutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.349 253665 DEBUG nova.compute.provider_tree [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.362 253665 DEBUG nova.scheduler.client.report [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.397 253665 DEBUG oslo_concurrency.lockutils [req-9c8f1833-44a3-4cb0-9c90-8caa9fd2221d req-c3d7c5ef-655c-40b4-bf51-243d43affaae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.403 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.466 253665 INFO nova.scheduler.client.report [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.552 253665 DEBUG nova.compute.manager [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-unplugged-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.552 253665 DEBUG oslo_concurrency.lockutils [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.553 253665 DEBUG oslo_concurrency.lockutils [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.553 253665 DEBUG oslo_concurrency.lockutils [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.553 253665 DEBUG nova.compute.manager [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] No waiting events found dispatching network-vif-unplugged-c9b1b309-4443-4694-8649-f59d1739cdaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.553 253665 DEBUG nova.compute.manager [req-df3b01f9-b2c3-45cf-9201-220def165ca5 req-33fe55ae-fc92-4726-8c8e-fb40e28edbbd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-unplugged-c9b1b309-4443-4694-8649-f59d1739cdaf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:42:00 compute-0 nova_compute[253661]: 2025-11-22 09:42:00.604 253665 DEBUG oslo_concurrency.lockutils [None req-1ac2df1e-6cbe-4fe8-a748-7dd08275a77d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "2e7b4cf2-c2c2-4df1-aa03-550a07687b4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.073 253665 DEBUG nova.network.neutron [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.092 253665 INFO nova.compute.manager [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Took 1.27 seconds to deallocate network for instance.
Nov 22 09:42:01 compute-0 ceph-mon[75021]: pgmap v2403: 305 pgs: 305 active+clean; 167 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 197 op/s
Nov 22 09:42:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/318276610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.125 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Successfully updated port: 868d876c-3d4f-4618-aedd-e1ce97d50ae9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.153 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.154 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.255 253665 DEBUG oslo_concurrency.processutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.362 253665 DEBUG nova.compute.manager [req-8417f071-3764-489a-8f29-21781b56f864 req-d7a262ed-de88-40b4-a035-ee423cfd7449 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-deleted-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268401445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.724 253665 DEBUG oslo_concurrency.processutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.729 253665 DEBUG nova.compute.provider_tree [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:01 compute-0 nova_compute[253661]: 2025-11-22 09:42:01.746 253665 DEBUG nova.scheduler.client.report [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.012 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 142 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Nov 22 09:42:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2268401445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.223 253665 INFO nova.scheduler.client.report [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance a20eee04-e3b6-4162-91f7-e6c92d8a07fa
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.321 253665 DEBUG oslo_concurrency.lockutils [None req-d713baf3-30c5-4a6b-8d92-4b066f0e427b 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:02 compute-0 podman[387391]: 2025-11-22 09:42:02.383740202 +0000 UTC m=+0.077988885 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.659 253665 DEBUG nova.compute.manager [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.659 253665 DEBUG nova.compute.manager [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing instance network info cache due to event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.659 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.659 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.660 253665 DEBUG nova.network.neutron [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing network info cache for port 868d876c-3d4f-4618-aedd-e1ce97d50ae9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.879 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Successfully updated port: 005f7c99-8e6b-4818-9749-1360f814f253 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.899 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008596191883180308 of space, bias 1.0, pg target 0.25788575649540924 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:42:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:42:02 compute-0 nova_compute[253661]: 2025-11-22 09:42:02.946 253665 DEBUG nova.network.neutron [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:03 compute-0 ceph-mon[75021]: pgmap v2404: 305 pgs: 305 active+clean; 142 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.480 253665 DEBUG nova.network.neutron [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.496 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.496 253665 DEBUG nova.compute.manager [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.496 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.496 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.496 253665 DEBUG oslo_concurrency.lockutils [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.497 253665 DEBUG nova.compute.manager [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] No waiting events found dispatching network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.497 253665 WARNING nova.compute.manager [req-b52cc4c3-8367-4002-8547-ea0f3781fb1d req-1a6c4212-2140-4e35-908a-27b92b0d7e92 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received unexpected event network-vif-plugged-c9b1b309-4443-4694-8649-f59d1739cdaf for instance with vm_state deleted and task_state None.
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.497 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.497 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:42:03 compute-0 nova_compute[253661]: 2025-11-22 09:42:03.749 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:04 compute-0 nova_compute[253661]: 2025-11-22 09:42:04.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 736 KiB/s wr, 148 op/s
Nov 22 09:42:04 compute-0 nova_compute[253661]: 2025-11-22 09:42:04.771 253665 DEBUG nova.compute.manager [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-changed-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:04 compute-0 nova_compute[253661]: 2025-11-22 09:42:04.773 253665 DEBUG nova.compute.manager [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing instance network info cache due to event network-changed-005f7c99-8e6b-4818-9749-1360f814f253. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:04 compute-0 nova_compute[253661]: 2025-11-22 09:42:04.773 253665 DEBUG oslo_concurrency.lockutils [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:05 compute-0 ceph-mon[75021]: pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 736 KiB/s wr, 148 op/s
Nov 22 09:42:05 compute-0 nova_compute[253661]: 2025-11-22 09:42:05.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:05 compute-0 nova_compute[253661]: 2025-11-22 09:42:05.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 4.3 KiB/s wr, 78 op/s
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.151502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526152031, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1378, "num_deletes": 251, "total_data_size": 2034320, "memory_usage": 2060752, "flush_reason": "Manual Compaction"}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526164450, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2002531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48602, "largest_seqno": 49979, "table_properties": {"data_size": 1996141, "index_size": 3593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13816, "raw_average_key_size": 19, "raw_value_size": 1983195, "raw_average_value_size": 2870, "num_data_blocks": 161, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804394, "oldest_key_time": 1763804394, "file_creation_time": 1763804526, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12552 microseconds, and 5043 cpu microseconds.
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.164490) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2002531 bytes OK
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.164511) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.166329) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.166343) EVENT_LOG_v1 {"time_micros": 1763804526166339, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.166359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2028167, prev total WAL file size 2028167, number of live WAL files 2.
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.167020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1955KB)], [113(8111KB)]
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526167049, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10308777, "oldest_snapshot_seqno": -1}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7053 keys, 8626835 bytes, temperature: kUnknown
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526207045, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8626835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8581102, "index_size": 26997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 183984, "raw_average_key_size": 26, "raw_value_size": 8456271, "raw_average_value_size": 1198, "num_data_blocks": 1046, "num_entries": 7053, "num_filter_entries": 7053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804526, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.207391) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8626835 bytes
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.209529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.1 rd, 215.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.9 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(9.5) write-amplify(4.3) OK, records in: 7567, records dropped: 514 output_compression: NoCompression
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.209588) EVENT_LOG_v1 {"time_micros": 1763804526209565, "job": 68, "event": "compaction_finished", "compaction_time_micros": 40090, "compaction_time_cpu_micros": 20289, "output_level": 6, "num_output_files": 1, "total_output_size": 8626835, "num_input_records": 7567, "num_output_records": 7053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526210594, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804526213840, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.166978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.213994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.214002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.214005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.214007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:42:06.214009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.854 253665 DEBUG nova.network.neutron [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.894 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.894 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance network_info: |[{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.895 253665 DEBUG oslo_concurrency.lockutils [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.895 253665 DEBUG nova.network.neutron [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing network info cache for port 005f7c99-8e6b-4818-9749-1360f814f253 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.901 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Start _get_guest_xml network_info=[{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.906 253665 WARNING nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.919 253665 DEBUG nova.virt.libvirt.host [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.920 253665 DEBUG nova.virt.libvirt.host [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.928 253665 DEBUG nova.virt.libvirt.host [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.929 253665 DEBUG nova.virt.libvirt.host [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.929 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.929 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.930 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.930 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.930 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.930 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.931 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.931 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.931 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.931 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.931 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.932 253665 DEBUG nova.virt.hardware [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:42:06 compute-0 nova_compute[253661]: 2025-11-22 09:42:06.935 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:07 compute-0 ceph-mon[75021]: pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 4.3 KiB/s wr, 78 op/s
Nov 22 09:42:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243777467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.425 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.447 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.451 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225216883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.937 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.939 253665 DEBUG nova.virt.libvirt.vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:50Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.940 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.940 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.942 253665 DEBUG nova.virt.libvirt.vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:50Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.942 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.943 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.944 253665 DEBUG nova.objects.instance [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid d749b709-79b8-40b6-8c2e-4d301bdc8e67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.967 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <uuid>d749b709-79b8-40b6-8c2e-4d301bdc8e67</uuid>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <name>instance-00000082</name>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1425734754</nova:name>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:42:06</nova:creationTime>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:port uuid="868d876c-3d4f-4618-aedd-e1ce97d50ae9">
Nov 22 09:42:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <nova:port uuid="005f7c99-8e6b-4818-9749-1360f814f253">
Nov 22 09:42:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe16:1bad" ipVersion="6"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe16:1bad" ipVersion="6"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <system>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="serial">d749b709-79b8-40b6-8c2e-4d301bdc8e67</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="uuid">d749b709-79b8-40b6-8c2e-4d301bdc8e67</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </system>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <os>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </os>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <features>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </features>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk">
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config">
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:07 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ec:9d:36"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <target dev="tap868d876c-3d"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:16:1b:ad"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <target dev="tap005f7c99-8e"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/console.log" append="off"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <video>
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </video>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:42:07 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:42:07 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:42:07 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:42:07 compute-0 nova_compute[253661]: </domain>
Nov 22 09:42:07 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.967 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Preparing to wait for external event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.968 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.968 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.968 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.968 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Preparing to wait for external event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.968 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.969 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.969 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.969 253665 DEBUG nova.virt.libvirt.vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:50Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.970 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.970 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.971 253665 DEBUG os_vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.972 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.972 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.976 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap868d876c-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:07 compute-0 nova_compute[253661]: 2025-11-22 09:42:07.976 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap868d876c-3d, col_values=(('external_ids', {'iface-id': '868d876c-3d4f-4618-aedd-e1ce97d50ae9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:9d:36', 'vm-uuid': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 NetworkManager[48920]: <info>  [1763804528.0164] manager: (tap868d876c-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.020 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.021 253665 INFO os_vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d')
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.022 253665 DEBUG nova.virt.libvirt.vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:50Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.023 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.023 253665 DEBUG nova.network.os_vif_util [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.024 253665 DEBUG os_vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.024 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.024 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.026 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap005f7c99-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.027 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap005f7c99-8e, col_values=(('external_ids', {'iface-id': '005f7c99-8e6b-4818-9749-1360f814f253', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:1b:ad', 'vm-uuid': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 NetworkManager[48920]: <info>  [1763804528.0292] manager: (tap005f7c99-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.036 253665 INFO os_vif [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e')
Nov 22 09:42:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 4.3 KiB/s wr, 78 op/s
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.119 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.120 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.120 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:ec:9d:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.120 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:16:1b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.120 253665 INFO nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Using config drive
Nov 22 09:42:08 compute-0 nova_compute[253661]: 2025-11-22 09:42:08.141 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/243777467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3225216883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:09 compute-0 ceph-mon[75021]: pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 805 KiB/s rd, 4.3 KiB/s wr, 78 op/s
Nov 22 09:42:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.5 KiB/s wr, 41 op/s
Nov 22 09:42:10 compute-0 nova_compute[253661]: 2025-11-22 09:42:10.283 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804515.281614, 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:10 compute-0 nova_compute[253661]: 2025-11-22 09:42:10.286 253665 INFO nova.compute.manager [-] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] VM Stopped (Lifecycle Event)
Nov 22 09:42:10 compute-0 nova_compute[253661]: 2025-11-22 09:42:10.305 253665 DEBUG nova.compute.manager [None req-04c59c5d-ed79-4360-aa31-c9b01e80fa02 - - - - - -] [instance: 2e7b4cf2-c2c2-4df1-aa03-550a07687b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:11 compute-0 ceph-mon[75021]: pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.5 KiB/s wr, 41 op/s
Nov 22 09:42:11 compute-0 nova_compute[253661]: 2025-11-22 09:42:11.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 22 09:42:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:42:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693810721' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:42:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:42:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693810721' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:42:13 compute-0 nova_compute[253661]: 2025-11-22 09:42:13.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:13 compute-0 ceph-mon[75021]: pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Nov 22 09:42:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/693810721' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:42:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/693810721' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:42:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:13.999 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804518.9978406, a20eee04-e3b6-4162-91f7-e6c92d8a07fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:13.999 253665 INFO nova.compute.manager [-] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] VM Stopped (Lifecycle Event)
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.024 253665 DEBUG nova.compute.manager [None req-dcf5a289-80f1-4e0c-a44e-08091d3e42d0 - - - - - -] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.620 253665 INFO nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Creating config drive at /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.626 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmbt1jdh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.770 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmbt1jdh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.796 253665 DEBUG nova.storage.rbd_utils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.801 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.962 253665 DEBUG oslo_concurrency.processutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config d749b709-79b8-40b6-8c2e-4d301bdc8e67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:14 compute-0 nova_compute[253661]: 2025-11-22 09:42:14.963 253665 INFO nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Deleting local config drive /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67/disk.config because it was imported into RBD.
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.0352] manager: (tap868d876c-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/567)
Nov 22 09:42:15 compute-0 kernel: tap868d876c-3d: entered promiscuous mode
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01401|binding|INFO|Claiming lport 868d876c-3d4f-4618-aedd-e1ce97d50ae9 for this chassis.
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01402|binding|INFO|868d876c-3d4f-4618-aedd-e1ce97d50ae9: Claiming fa:16:3e:ec:9d:36 10.100.0.14
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.0569] manager: (tap005f7c99-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/568)
Nov 22 09:42:15 compute-0 systemd-udevd[387559]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:15 compute-0 systemd-udevd[387558]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.071 253665 DEBUG nova.network.neutron [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updated VIF entry in instance network info cache for port 005f7c99-8e6b-4818-9749-1360f814f253. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.071 253665 DEBUG nova.network.neutron [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.0865] device (tap868d876c-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.0895] device (tap868d876c-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.095 253665 DEBUG oslo_concurrency.lockutils [req-2c683556-c31a-4126-a931-5161455a7dd0 req-a64cb8a4-0833-4737-a71c-4d9739fdff27 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:15 compute-0 systemd-machined[215941]: New machine qemu-161-instance-00000082.
Nov 22 09:42:15 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000082.
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.131 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:9d:36 10.100.0.14'], port_security=['fa:16:3e:ec:9d:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68715752-6987-44a9-a236-673079985d56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=868d876c-3d4f-4618-aedd-e1ce97d50ae9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.132 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 868d876c-3d4f-4618-aedd-e1ce97d50ae9 in datapath b7054ec2-03d5-4428-a2b8-9c9905d4fcef bound to our chassis
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.134 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7054ec2-03d5-4428-a2b8-9c9905d4fcef
Nov 22 09:42:15 compute-0 kernel: tap005f7c99-8e: entered promiscuous mode
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.1353] device (tap005f7c99-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.1370] device (tap005f7c99-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01403|binding|INFO|Claiming lport 005f7c99-8e6b-4818-9749-1360f814f253 for this chassis.
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.137 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01404|binding|INFO|005f7c99-8e6b-4818-9749-1360f814f253: Claiming fa:16:3e:16:1b:ad 2001:db8:0:1:f816:3eff:fe16:1bad 2001:db8::f816:3eff:fe16:1bad
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01405|binding|INFO|Setting lport 868d876c-3d4f-4618-aedd-e1ce97d50ae9 ovn-installed in OVS
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.147 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[778bfaeb-8abc-4d78-9b4d-7879579a5b32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.147 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7054ec2-01 in ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.150 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7054ec2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaaacab-a4ce-4a98-b549-69544f744b5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[881f0e84-fdcf-4221-a7e3-0b35e0462849]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01406|binding|INFO|Setting lport 868d876c-3d4f-4618-aedd-e1ce97d50ae9 up in Southbound
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01407|binding|INFO|Setting lport 005f7c99-8e6b-4818-9749-1360f814f253 ovn-installed in OVS
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.160 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:1b:ad 2001:db8:0:1:f816:3eff:fe16:1bad 2001:db8::f816:3eff:fe16:1bad'], port_security=['fa:16:3e:16:1b:ad 2001:db8:0:1:f816:3eff:fe16:1bad 2001:db8::f816:3eff:fe16:1bad'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe16:1bad/64 2001:db8::f816:3eff:fe16:1bad/64', 'neutron:device_id': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28e6ccec-e6eb-4baf-91d9-14c9f27dcba7, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=005f7c99-8e6b-4818-9749-1360f814f253) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.165 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.165 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e4f2c4-4b84-4c84-9dee-12557d627a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fed2b9-864d-432c-bac6-7b3318b89f27]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.217 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[55927c64-9de6-4912-9024-f143bfecaeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 systemd-udevd[387564]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.2280] manager: (tapb7054ec2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/569)
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.227 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b502d41a-0c70-4ac7-b4dd-61f1cf75d5f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.259 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4baa5f67-0d6c-40c2-91f6-aff04b6cd05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.262 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[338c25e1-0de8-4dff-b149-c4c4e20ce087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.2917] device (tapb7054ec2-00): carrier: link connected
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.299 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bd8644-b5b8-4a67-b706-21bfd067db14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5337e9e-b9be-4856-8dd2-1edf4ad13685]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7054ec2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:e8:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738602, 'reachable_time': 21831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387595, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01408|binding|INFO|Setting lport 005f7c99-8e6b-4818-9749-1360f814f253 up in Southbound
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a932643f-b020-4f41-8073-e703de3fafe7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:e8f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738602, 'tstamp': 738602}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387596, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ceph-mon[75021]: pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.368 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95f2d1b5-708d-4611-9b59-660c9894453c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7054ec2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:e8:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738602, 'reachable_time': 21831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387597, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.402 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56a5d035-9f80-4966-b9f7-46c9c0051d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.465 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d3c103-d003-47c6-955f-74d495046a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7054ec2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.468 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7054ec2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 kernel: tapb7054ec2-00: entered promiscuous mode
Nov 22 09:42:15 compute-0 NetworkManager[48920]: <info>  [1763804535.4725] manager: (tapb7054ec2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.474 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7054ec2-00, col_values=(('external_ids', {'iface-id': 'd0d06639-6ce0-40be-a634-96c5ecf3d6fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:15 compute-0 ovn_controller[152872]: 2025-11-22T09:42:15Z|01409|binding|INFO|Releasing lport d0d06639-6ce0-40be-a634-96c5ecf3d6fe from this chassis (sb_readonly=0)
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.479 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7054ec2-03d5-4428-a2b8-9c9905d4fcef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7054ec2-03d5-4428-a2b8-9c9905d4fcef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.488 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9a5e79-9e04-4d88-8a9e-f0d027079345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:15 compute-0 nova_compute[253661]: 2025-11-22 09:42:15.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.490 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b7054ec2-03d5-4428-a2b8-9c9905d4fcef
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b7054ec2-03d5-4428-a2b8-9c9905d4fcef.pid.haproxy
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b7054ec2-03d5-4428-a2b8-9c9905d4fcef
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:42:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:15.492 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'env', 'PROCESS_TAG=haproxy-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7054ec2-03d5-4428-a2b8-9c9905d4fcef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:42:15 compute-0 podman[387629]: 2025-11-22 09:42:15.90649602 +0000 UTC m=+0.068360090 container create fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:42:15 compute-0 systemd[1]: Started libpod-conmon-fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa.scope.
Nov 22 09:42:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0821a1de6c097e13970fe5dd6f5c72c376d47e57bdbbe281b1d977650b349e04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:15 compute-0 podman[387629]: 2025-11-22 09:42:15.880382352 +0000 UTC m=+0.042246452 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:42:15 compute-0 podman[387629]: 2025-11-22 09:42:15.979393059 +0000 UTC m=+0.141257149 container init fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:42:15 compute-0 podman[387629]: 2025-11-22 09:42:15.985146839 +0000 UTC m=+0.147010919 container start fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.008 253665 DEBUG nova.compute.manager [req-ec872eb6-9fe0-4456-b3c2-cfa0a1911c5d req-b6dc89fd-5136-4049-b5bf-1cce9d5c16be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.008 253665 DEBUG oslo_concurrency.lockutils [req-ec872eb6-9fe0-4456-b3c2-cfa0a1911c5d req-b6dc89fd-5136-4049-b5bf-1cce9d5c16be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.009 253665 DEBUG oslo_concurrency.lockutils [req-ec872eb6-9fe0-4456-b3c2-cfa0a1911c5d req-b6dc89fd-5136-4049-b5bf-1cce9d5c16be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.009 253665 DEBUG oslo_concurrency.lockutils [req-ec872eb6-9fe0-4456-b3c2-cfa0a1911c5d req-b6dc89fd-5136-4049-b5bf-1cce9d5c16be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.009 253665 DEBUG nova.compute.manager [req-ec872eb6-9fe0-4456-b3c2-cfa0a1911c5d req-b6dc89fd-5136-4049-b5bf-1cce9d5c16be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Processing event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:16 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [NOTICE]   (387690) : New worker (387693) forked
Nov 22 09:42:16 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [NOTICE]   (387690) : Loading success.
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.050 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 005f7c99-8e6b-4818-9749-1360f814f253 in datapath 14f6914f-16a1-4223-85f8-aa4fada62acd unbound from our chassis
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.053 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f6914f-16a1-4223-85f8-aa4fada62acd
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.065 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6296955e-3f89-4e38-a6b6-19e5421911ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.066 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804536.065534, d749b709-79b8-40b6-8c2e-4d301bdc8e67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.066 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] VM Started (Lifecycle Event)
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.067 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14f6914f-11 in ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.069 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14f6914f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.069 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0de7764-8af0-4b68-beb1-f51177f7474d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.070 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51600c28-aa47-4072-b313-79fb959a8438]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.085 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ed274ec5-7d82-4a88-9bd2-d90fba505313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.092 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.097 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804536.0659552, d749b709-79b8-40b6-8c2e-4d301bdc8e67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.097 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] VM Paused (Lifecycle Event)
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.101 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd6c8aa-b353-46d9-b058-64464ceef830]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.118 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.122 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.136 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f652ee78-61cd-4c17-8977-2d0af5e83afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b9392d-66cf-42f7-b10f-d6d9a4fec977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 NetworkManager[48920]: <info>  [1763804536.1428] manager: (tap14f6914f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.148 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.155 253665 DEBUG nova.compute.manager [req-f889ad7c-a973-4750-9268-3529bf183209 req-06a9434e-928f-482f-94d8-1ec16cfdb763 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.155 253665 DEBUG oslo_concurrency.lockutils [req-f889ad7c-a973-4750-9268-3529bf183209 req-06a9434e-928f-482f-94d8-1ec16cfdb763 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.156 253665 DEBUG oslo_concurrency.lockutils [req-f889ad7c-a973-4750-9268-3529bf183209 req-06a9434e-928f-482f-94d8-1ec16cfdb763 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.156 253665 DEBUG oslo_concurrency.lockutils [req-f889ad7c-a973-4750-9268-3529bf183209 req-06a9434e-928f-482f-94d8-1ec16cfdb763 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.158 253665 DEBUG nova.compute.manager [req-f889ad7c-a973-4750-9268-3529bf183209 req-06a9434e-928f-482f-94d8-1ec16cfdb763 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Processing event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.159 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.162 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804536.1624777, d749b709-79b8-40b6-8c2e-4d301bdc8e67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.163 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] VM Resumed (Lifecycle Event)
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.164 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.168 253665 INFO nova.virt.libvirt.driver [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance spawned successfully.
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.168 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.180 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0e0a4e-4651-4ab0-8251-4232b381ab0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.184 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6c41fc-94ec-48d2-88fe-7397d4fe9b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.192 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.199 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.203 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.204 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.204 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.205 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.205 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.206 253665 DEBUG nova.virt.libvirt.driver [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:16 compute-0 NetworkManager[48920]: <info>  [1763804536.2114] device (tap14f6914f-10): carrier: link connected
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.218 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e4eda1-92d0-42bf-82be-f45be673c0d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.232 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[913cdf05-dfbc-4a1f-b0d6-41f33032fc83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f6914f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:46:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738694, 'reachable_time': 32374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387712, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.252 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea877913-8e56-4b39-b615-8dd6a295d8a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:466c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738694, 'tstamp': 738694}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387713, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.270 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24f378b1-bfe8-460d-8dfa-234aa5d9d884]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f6914f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:46:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738694, 'reachable_time': 32374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387714, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.279 253665 INFO nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Took 25.52 seconds to spawn the instance on the hypervisor.
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.279 253665 DEBUG nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.302 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03aed01a-1a28-4ebf-8ea3-ba8a1c2a522c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.333 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af40daaf-a89c-4258-9b82-023910aa6d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f6914f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f6914f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:16 compute-0 kernel: tap14f6914f-10: entered promiscuous mode
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:16 compute-0 NetworkManager[48920]: <info>  [1763804536.3398] manager: (tap14f6914f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.340 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f6914f-10, col_values=(('external_ids', {'iface-id': 'ef3b4cae-0bef-4b3d-812b-86120687d0c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:16 compute-0 ovn_controller[152872]: 2025-11-22T09:42:16Z|01410|binding|INFO|Releasing lport ef3b4cae-0bef-4b3d-812b-86120687d0c9 from this chassis (sb_readonly=0)
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.356 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14f6914f-16a1-4223-85f8-aa4fada62acd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14f6914f-16a1-4223-85f8-aa4fada62acd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.357 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bff401c1-488b-4c71-80cd-4a9e61199a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.357 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-14f6914f-16a1-4223-85f8-aa4fada62acd
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/14f6914f-16a1-4223-85f8-aa4fada62acd.pid.haproxy
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 14f6914f-16a1-4223-85f8-aa4fada62acd
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:42:16 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:16.358 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'env', 'PROCESS_TAG=haproxy-14f6914f-16a1-4223-85f8-aa4fada62acd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14f6914f-16a1-4223-85f8-aa4fada62acd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.450 253665 INFO nova.compute.manager [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Took 27.38 seconds to build instance.
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:16 compute-0 nova_compute[253661]: 2025-11-22 09:42:16.499 253665 DEBUG oslo_concurrency.lockutils [None req-da42d7d1-7ba3-414f-b6e7-9c40bd930f51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:16 compute-0 podman[387742]: 2025-11-22 09:42:16.729065288 +0000 UTC m=+0.058246573 container create 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:42:16 compute-0 systemd[1]: Started libpod-conmon-463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287.scope.
Nov 22 09:42:16 compute-0 podman[387742]: 2025-11-22 09:42:16.698663096 +0000 UTC m=+0.027844441 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:42:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89f242686f0e2d9942688a514074e019e91c326057cca7e1af2a238c306df4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:16 compute-0 podman[387742]: 2025-11-22 09:42:16.808957648 +0000 UTC m=+0.138138953 container init 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:42:16 compute-0 podman[387742]: 2025-11-22 09:42:16.815002726 +0000 UTC m=+0.144184001 container start 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:42:16 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [NOTICE]   (387761) : New worker (387763) forked
Nov 22 09:42:16 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [NOTICE]   (387761) : Loading success.
Nov 22 09:42:17 compute-0 ceph-mon[75021]: pgmap v2411: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 1 op/s
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.224 253665 DEBUG nova.compute.manager [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.224 253665 DEBUG oslo_concurrency.lockutils [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.224 253665 DEBUG oslo_concurrency.lockutils [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.224 253665 DEBUG oslo_concurrency.lockutils [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.225 253665 DEBUG nova.compute.manager [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.225 253665 WARNING nova.compute.manager [req-de61800a-1d60-4304-9cac-5660fac4dc4d req-539b831e-d566-4a57-b3f1-bf0b1bda137c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received unexpected event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 for instance with vm_state active and task_state None.
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.307 253665 DEBUG nova.compute.manager [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.308 253665 DEBUG oslo_concurrency.lockutils [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.308 253665 DEBUG oslo_concurrency.lockutils [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.308 253665 DEBUG oslo_concurrency.lockutils [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.309 253665 DEBUG nova.compute.manager [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:18 compute-0 nova_compute[253661]: 2025-11-22 09:42:18.309 253665 WARNING nova.compute.manager [req-835ee407-2bd8-4322-a4b0-00ca58c9fb56 req-9a9dd515-e114-4efe-a816-e7f9aa0b9736 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received unexpected event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 for instance with vm_state active and task_state None.
Nov 22 09:42:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:19 compute-0 ceph-mon[75021]: pgmap v2412: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 1 op/s
Nov 22 09:42:20 compute-0 sudo[387772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:20 compute-0 sudo[387772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387772]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 12 KiB/s wr, 34 op/s
Nov 22 09:42:20 compute-0 sudo[387797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:42:20 compute-0 sudo[387797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387797]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:20 compute-0 sudo[387822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:20 compute-0 sudo[387822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387822]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:20 compute-0 sudo[387847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:42:20 compute-0 sudo[387847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387847]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 78dfb15e-649b-4a6b-8a8a-02474b662b00 does not exist
Nov 22 09:42:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ec56bd01-6992-4455-914c-885f6720f2e4 does not exist
Nov 22 09:42:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7d560cb4-9bee-476b-9194-98c6078bd413 does not exist
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:42:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:42:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:42:20 compute-0 sudo[387903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:20 compute-0 sudo[387903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387903]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:20 compute-0 sudo[387928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:42:20 compute-0 sudo[387928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:20 compute-0 sudo[387928]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:21 compute-0 sudo[387953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:21 compute-0 sudo[387953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:21 compute-0 sudo[387953]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:21 compute-0 sudo[387978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:42:21 compute-0 sudo[387978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:21 compute-0 ceph-mon[75021]: pgmap v2413: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 12 KiB/s wr, 34 op/s
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:42:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:42:21 compute-0 nova_compute[253661]: 2025-11-22 09:42:21.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.489567009 +0000 UTC m=+0.050143764 container create 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:42:21 compute-0 systemd[1]: Started libpod-conmon-8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d.scope.
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.466300192 +0000 UTC m=+0.026876967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.59281569 +0000 UTC m=+0.153392465 container init 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.601659335 +0000 UTC m=+0.162236090 container start 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.606181367 +0000 UTC m=+0.166758152 container attach 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:42:21 compute-0 festive_shockley[388059]: 167 167
Nov 22 09:42:21 compute-0 systemd[1]: libpod-8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d.scope: Deactivated successfully.
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.608621966 +0000 UTC m=+0.169198721 container died 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6458859a0e018ac1b0de5c2cc3981c03a72c3b05112dcc6ddab1eaa3ad40082d-merged.mount: Deactivated successfully.
Nov 22 09:42:21 compute-0 podman[388043]: 2025-11-22 09:42:21.644085442 +0000 UTC m=+0.204662197 container remove 8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:42:21 compute-0 systemd[1]: libpod-conmon-8c60959dff57d032ae2945f52047e2ad4e3630b0979dac4082bbfb5b01f5337d.scope: Deactivated successfully.
Nov 22 09:42:21 compute-0 podman[388083]: 2025-11-22 09:42:21.849097476 +0000 UTC m=+0.059838092 container create 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:42:21 compute-0 systemd[1]: Started libpod-conmon-238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb.scope.
Nov 22 09:42:21 compute-0 podman[388083]: 2025-11-22 09:42:21.818164551 +0000 UTC m=+0.028905247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:21 compute-0 podman[388083]: 2025-11-22 09:42:21.95413519 +0000 UTC m=+0.164875836 container init 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:42:21 compute-0 podman[388083]: 2025-11-22 09:42:21.962865942 +0000 UTC m=+0.173606558 container start 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:42:21 compute-0 podman[388083]: 2025-11-22 09:42:21.967767583 +0000 UTC m=+0.178508219 container attach 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:22 compute-0 hungry_buck[388100]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:42:22 compute-0 hungry_buck[388100]: --> relative data size: 1.0
Nov 22 09:42:22 compute-0 hungry_buck[388100]: --> All data devices are unavailable
Nov 22 09:42:22 compute-0 systemd[1]: libpod-238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb.scope: Deactivated successfully.
Nov 22 09:42:22 compute-0 podman[388083]: 2025-11-22 09:42:22.988617301 +0000 UTC m=+1.199357947 container died 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd3e9431b3f427d13c6c21d2f0ac46552add7e5f4eeb0afa934212b78244c510-merged.mount: Deactivated successfully.
Nov 22 09:42:23 compute-0 podman[388083]: 2025-11-22 09:42:23.046110074 +0000 UTC m=+1.256850690 container remove 238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:42:23 compute-0 systemd[1]: libpod-conmon-238b375fb34f0035e50ffef68d2dff7e9259cb370d4e5e474b4df9b335fea7eb.scope: Deactivated successfully.
Nov 22 09:42:23 compute-0 sudo[387978]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:23 compute-0 nova_compute[253661]: 2025-11-22 09:42:23.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:23 compute-0 sudo[388141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:23 compute-0 sudo[388141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:23 compute-0 sudo[388141]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:23 compute-0 sudo[388166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:42:23 compute-0 sudo[388166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:23 compute-0 sudo[388166]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:23 compute-0 sudo[388191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:23 compute-0 sudo[388191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:23 compute-0 sudo[388191]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:23 compute-0 sudo[388216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:42:23 compute-0 sudo[388216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:23 compute-0 ceph-mon[75021]: pgmap v2414: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.597911673 +0000 UTC m=+0.036219075 container create 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:42:23 compute-0 systemd[1]: Started libpod-conmon-5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4.scope.
Nov 22 09:42:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.581915653 +0000 UTC m=+0.020223085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.679901785 +0000 UTC m=+0.118209217 container init 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.686542378 +0000 UTC m=+0.124849780 container start 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.690066463 +0000 UTC m=+0.128373885 container attach 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:42:23 compute-0 practical_hertz[388297]: 167 167
Nov 22 09:42:23 compute-0 systemd[1]: libpod-5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4.scope: Deactivated successfully.
Nov 22 09:42:23 compute-0 conmon[388297]: conmon 5edc9c14479109687358 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4.scope/container/memory.events
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.693769384 +0000 UTC m=+0.132076786 container died 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d8d417c4d0e312327113ef3539bc2c98b978f134de2efbefe525803540e49e-merged.mount: Deactivated successfully.
Nov 22 09:42:23 compute-0 podman[388280]: 2025-11-22 09:42:23.731059993 +0000 UTC m=+0.169367395 container remove 5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:42:23 compute-0 systemd[1]: libpod-conmon-5edc9c14479109687358539204b3af5cd1638b1bca9b094ca8b979a8ddd5dcf4.scope: Deactivated successfully.
Nov 22 09:42:23 compute-0 podman[388321]: 2025-11-22 09:42:23.897305842 +0000 UTC m=+0.042554640 container create 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:42:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:23 compute-0 systemd[1]: Started libpod-conmon-0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d.scope.
Nov 22 09:42:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfff951f945c4015097c6825b4f7d43c3f320a34a48b829828c87c4fe3007454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfff951f945c4015097c6825b4f7d43c3f320a34a48b829828c87c4fe3007454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfff951f945c4015097c6825b4f7d43c3f320a34a48b829828c87c4fe3007454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfff951f945c4015097c6825b4f7d43c3f320a34a48b829828c87c4fe3007454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:23 compute-0 podman[388321]: 2025-11-22 09:42:23.879865656 +0000 UTC m=+0.025114484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:23 compute-0 podman[388321]: 2025-11-22 09:42:23.978942104 +0000 UTC m=+0.124190912 container init 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:42:23 compute-0 podman[388321]: 2025-11-22 09:42:23.988444316 +0000 UTC m=+0.133693124 container start 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:42:23 compute-0 podman[388321]: 2025-11-22 09:42:23.99185804 +0000 UTC m=+0.137106848 container attach 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:42:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:24 compute-0 pensive_keller[388337]: {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     "0": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "devices": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "/dev/loop3"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             ],
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_name": "ceph_lv0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_size": "21470642176",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "name": "ceph_lv0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "tags": {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_name": "ceph",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.crush_device_class": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.encrypted": "0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_id": "0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.vdo": "0"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             },
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "vg_name": "ceph_vg0"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         }
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     ],
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     "1": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "devices": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "/dev/loop4"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             ],
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_name": "ceph_lv1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_size": "21470642176",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "name": "ceph_lv1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "tags": {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_name": "ceph",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.crush_device_class": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.encrypted": "0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_id": "1",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.vdo": "0"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             },
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "vg_name": "ceph_vg1"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         }
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     ],
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     "2": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "devices": [
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "/dev/loop5"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             ],
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_name": "ceph_lv2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_size": "21470642176",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "name": "ceph_lv2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "tags": {
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.cluster_name": "ceph",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.crush_device_class": "",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.encrypted": "0",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osd_id": "2",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:                 "ceph.vdo": "0"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             },
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "type": "block",
Nov 22 09:42:24 compute-0 pensive_keller[388337]:             "vg_name": "ceph_vg2"
Nov 22 09:42:24 compute-0 pensive_keller[388337]:         }
Nov 22 09:42:24 compute-0 pensive_keller[388337]:     ]
Nov 22 09:42:24 compute-0 pensive_keller[388337]: }
Nov 22 09:42:24 compute-0 systemd[1]: libpod-0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d.scope: Deactivated successfully.
Nov 22 09:42:24 compute-0 podman[388346]: 2025-11-22 09:42:24.816003917 +0000 UTC m=+0.023029203 container died 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfff951f945c4015097c6825b4f7d43c3f320a34a48b829828c87c4fe3007454-merged.mount: Deactivated successfully.
Nov 22 09:42:24 compute-0 podman[388346]: 2025-11-22 09:42:24.870371494 +0000 UTC m=+0.077396760 container remove 0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:42:24 compute-0 systemd[1]: libpod-conmon-0ff58ad2e82c06ee99d8e466b955995a5e9f8b2a5ad7652e393912b43dd8501d.scope: Deactivated successfully.
Nov 22 09:42:24 compute-0 sudo[388216]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:24 compute-0 sudo[388361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:24 compute-0 sudo[388361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:24 compute-0 sudo[388361]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:25 compute-0 sudo[388386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:42:25 compute-0 sudo[388386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:25 compute-0 sudo[388386]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:25 compute-0 sudo[388411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:25 compute-0 sudo[388411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:25 compute-0 sudo[388411]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:25 compute-0 sudo[388436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:42:25 compute-0 sudo[388436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:25 compute-0 ceph-mon[75021]: pgmap v2415: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.454987495 +0000 UTC m=+0.037154599 container create fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:42:25 compute-0 systemd[1]: Started libpod-conmon-fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2.scope.
Nov 22 09:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.52858016 +0000 UTC m=+0.110747304 container init fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.440385678 +0000 UTC m=+0.022552792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.536808231 +0000 UTC m=+0.118975335 container start fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.540246885 +0000 UTC m=+0.122414049 container attach fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:42:25 compute-0 nifty_spence[388517]: 167 167
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.541843025 +0000 UTC m=+0.124010129 container died fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:42:25 compute-0 systemd[1]: libpod-fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2.scope: Deactivated successfully.
Nov 22 09:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae65a729e33c307bb6c0b7298af5a97d0490b4682e129d7eb36f658b44bce04-merged.mount: Deactivated successfully.
Nov 22 09:42:25 compute-0 podman[388500]: 2025-11-22 09:42:25.578556751 +0000 UTC m=+0.160723855 container remove fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:42:25 compute-0 systemd[1]: libpod-conmon-fb045bc43505e7abf2365f3e744f38801e7a307ec242eccffe5cb2bb9383e4c2.scope: Deactivated successfully.
Nov 22 09:42:25 compute-0 podman[388541]: 2025-11-22 09:42:25.765775231 +0000 UTC m=+0.047273065 container create 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:42:25 compute-0 systemd[1]: Started libpod-conmon-1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b.scope.
Nov 22 09:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9e9e641aa5c11260e5a04e3b0f19f65d61f0975fd622def61a1a2daed67827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9e9e641aa5c11260e5a04e3b0f19f65d61f0975fd622def61a1a2daed67827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9e9e641aa5c11260e5a04e3b0f19f65d61f0975fd622def61a1a2daed67827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9e9e641aa5c11260e5a04e3b0f19f65d61f0975fd622def61a1a2daed67827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:25 compute-0 podman[388541]: 2025-11-22 09:42:25.741345074 +0000 UTC m=+0.022842968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:42:25 compute-0 podman[388541]: 2025-11-22 09:42:25.846931972 +0000 UTC m=+0.128429896 container init 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:42:25 compute-0 podman[388541]: 2025-11-22 09:42:25.858293539 +0000 UTC m=+0.139791373 container start 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:42:25 compute-0 podman[388541]: 2025-11-22 09:42:25.861750393 +0000 UTC m=+0.143248317 container attach 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 09:42:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:26 compute-0 nova_compute[253661]: 2025-11-22 09:42:26.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:26 compute-0 gracious_goodall[388558]: {
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_id": 1,
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "type": "bluestore"
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     },
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_id": 0,
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "type": "bluestore"
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     },
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_id": 2,
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:         "type": "bluestore"
Nov 22 09:42:26 compute-0 gracious_goodall[388558]:     }
Nov 22 09:42:26 compute-0 gracious_goodall[388558]: }
Nov 22 09:42:26 compute-0 systemd[1]: libpod-1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b.scope: Deactivated successfully.
Nov 22 09:42:26 compute-0 podman[388541]: 2025-11-22 09:42:26.823273404 +0000 UTC m=+1.104771248 container died 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab9e9e641aa5c11260e5a04e3b0f19f65d61f0975fd622def61a1a2daed67827-merged.mount: Deactivated successfully.
Nov 22 09:42:26 compute-0 podman[388541]: 2025-11-22 09:42:26.867194775 +0000 UTC m=+1.148692609 container remove 1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:42:26 compute-0 systemd[1]: libpod-conmon-1c4f65f7fcccfc32c3245c3eb17837170b823517d73994c38fa51da94deab17b.scope: Deactivated successfully.
Nov 22 09:42:26 compute-0 sudo[388436]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:42:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:42:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev eafa1625-7ef9-4398-9230-ab318fa74179 does not exist
Nov 22 09:42:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8fa1c2bd-8cec-4b07-a8b8-cfd61d040431 does not exist
Nov 22 09:42:26 compute-0 sudo[388603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:42:26 compute-0 sudo[388603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:26 compute-0 sudo[388603]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:27 compute-0 sudo[388628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:42:27 compute-0 sudo[388628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:42:27 compute-0 sudo[388628]: pam_unix(sudo:session): session closed for user root
Nov 22 09:42:27 compute-0 ceph-mon[75021]: pgmap v2416: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:42:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:27.987 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:27.987 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:28 compute-0 nova_compute[253661]: 2025-11-22 09:42:28.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:28 compute-0 podman[388654]: 2025-11-22 09:42:28.386438849 +0000 UTC m=+0.061945973 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 22 09:42:28 compute-0 podman[388655]: 2025-11-22 09:42:28.398177856 +0000 UTC m=+0.073684900 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:42:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:29 compute-0 ovn_controller[152872]: 2025-11-22T09:42:29Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:9d:36 10.100.0.14
Nov 22 09:42:29 compute-0 ovn_controller[152872]: 2025-11-22T09:42:29Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:9d:36 10.100.0.14
Nov 22 09:42:29 compute-0 ceph-mon[75021]: pgmap v2417: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:42:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 102 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Nov 22 09:42:31 compute-0 nova_compute[253661]: 2025-11-22 09:42:31.482 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:31 compute-0 ceph-mon[75021]: pgmap v2418: 305 pgs: 305 active+clean; 102 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Nov 22 09:42:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 111 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Nov 22 09:42:33 compute-0 nova_compute[253661]: 2025-11-22 09:42:33.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:33 compute-0 podman[388693]: 2025-11-22 09:42:33.389656548 +0000 UTC m=+0.083823007 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:42:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:33 compute-0 ceph-mon[75021]: pgmap v2419: 305 pgs: 305 active+clean; 111 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Nov 22 09:42:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:42:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:34.739 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:34 compute-0 nova_compute[253661]: 2025-11-22 09:42:34.740 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:34.741 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:42:35 compute-0 nova_compute[253661]: 2025-11-22 09:42:35.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:35 compute-0 NetworkManager[48920]: <info>  [1763804555.1623] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Nov 22 09:42:35 compute-0 NetworkManager[48920]: <info>  [1763804555.1637] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Nov 22 09:42:35 compute-0 ovn_controller[152872]: 2025-11-22T09:42:35Z|01411|binding|INFO|Releasing lport d0d06639-6ce0-40be-a634-96c5ecf3d6fe from this chassis (sb_readonly=0)
Nov 22 09:42:35 compute-0 ovn_controller[152872]: 2025-11-22T09:42:35Z|01412|binding|INFO|Releasing lport ef3b4cae-0bef-4b3d-812b-86120687d0c9 from this chassis (sb_readonly=0)
Nov 22 09:42:35 compute-0 nova_compute[253661]: 2025-11-22 09:42:35.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:35 compute-0 nova_compute[253661]: 2025-11-22 09:42:35.228 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:35.744 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:35 compute-0 ceph-mon[75021]: pgmap v2420: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:42:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.254 253665 DEBUG nova.compute.manager [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.255 253665 DEBUG nova.compute.manager [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing instance network info cache due to event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.255 253665 DEBUG oslo_concurrency.lockutils [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.256 253665 DEBUG oslo_concurrency.lockutils [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.256 253665 DEBUG nova.network.neutron [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing network info cache for port 868d876c-3d4f-4618-aedd-e1ce97d50ae9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:36 compute-0 nova_compute[253661]: 2025-11-22 09:42:36.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:37 compute-0 nova_compute[253661]: 2025-11-22 09:42:37.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:37 compute-0 ceph-mon[75021]: pgmap v2421: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:42:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:42:38 compute-0 nova_compute[253661]: 2025-11-22 09:42:38.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:38 compute-0 nova_compute[253661]: 2025-11-22 09:42:38.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:39 compute-0 nova_compute[253661]: 2025-11-22 09:42:39.271 253665 DEBUG nova.network.neutron [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updated VIF entry in instance network info cache for port 868d876c-3d4f-4618-aedd-e1ce97d50ae9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:39 compute-0 nova_compute[253661]: 2025-11-22 09:42:39.271 253665 DEBUG nova.network.neutron [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:39 compute-0 nova_compute[253661]: 2025-11-22 09:42:39.313 253665 DEBUG oslo_concurrency.lockutils [req-ac848ad4-f6fc-464b-b078-98dc7ca29941 req-c7db43de-ab72-48c8-b777-3411c1e33e0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:40 compute-0 ceph-mon[75021]: pgmap v2422: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:42:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.291 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.292 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.307 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.387 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.387 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.396 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.397 253665 INFO nova.compute.claims [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.519 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513477763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.967 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.974 253665 DEBUG nova.compute.provider_tree [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:41 compute-0 nova_compute[253661]: 2025-11-22 09:42:41.988 253665 DEBUG nova.scheduler.client.report [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.020 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.021 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:42:42 compute-0 ceph-mon[75021]: pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:42:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3513477763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.091 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.092 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:42:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 797 KiB/s wr, 41 op/s
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.108 253665 INFO nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.124 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.212 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.213 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.214 253665 INFO nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Creating image(s)
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.235 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.257 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.280 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.284 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.322 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.357 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.358 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.359 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.359 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.379 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.383 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.421 253665 DEBUG nova.policy [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.656 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.724 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.807 253665 DEBUG nova.objects.instance [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b40bebc-2343-478c-aacb-b4ae1fc87907 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.832 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.833 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Ensure instance console log exists: /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.833 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.833 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:42 compute-0 nova_compute[253661]: 2025-11-22 09:42:42.834 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:43 compute-0 ceph-mon[75021]: pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 797 KiB/s wr, 41 op/s
Nov 22 09:42:43 compute-0 nova_compute[253661]: 2025-11-22 09:42:43.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:43 compute-0 nova_compute[253661]: 2025-11-22 09:42:43.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:43 compute-0 nova_compute[253661]: 2025-11-22 09:42:43.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:42:43 compute-0 nova_compute[253661]: 2025-11-22 09:42:43.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:42:43 compute-0 nova_compute[253661]: 2025-11-22 09:42:43.247 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:42:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 144 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 1.1 MiB/s wr, 39 op/s
Nov 22 09:42:44 compute-0 nova_compute[253661]: 2025-11-22 09:42:44.395 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:44 compute-0 nova_compute[253661]: 2025-11-22 09:42:44.395 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:44 compute-0 nova_compute[253661]: 2025-11-22 09:42:44.395 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:42:44 compute-0 nova_compute[253661]: 2025-11-22 09:42:44.396 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d749b709-79b8-40b6-8c2e-4d301bdc8e67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:44 compute-0 nova_compute[253661]: 2025-11-22 09:42:44.966 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Successfully created port: 4a5edb92-edbe-4724-be56-e8dddc03d872 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:42:45 compute-0 ceph-mon[75021]: pgmap v2425: 305 pgs: 305 active+clean; 144 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 1.1 MiB/s wr, 39 op/s
Nov 22 09:42:45 compute-0 nova_compute[253661]: 2025-11-22 09:42:45.907 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:45 compute-0 nova_compute[253661]: 2025-11-22 09:42:45.908 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:45 compute-0 nova_compute[253661]: 2025-11-22 09:42:45.934 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:42:45 compute-0 nova_compute[253661]: 2025-11-22 09:42:45.996 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:45 compute-0 nova_compute[253661]: 2025-11-22 09:42:45.997 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.004 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.004 253665 INFO nova.compute.claims [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.007 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Successfully created port: 738a955b-d3fe-4521-9c7f-f6ae50a9112e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:42:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 144 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.0 MiB/s wr, 24 op/s
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.126 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263802408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.580 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.585 253665 DEBUG nova.compute.provider_tree [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.602 253665 DEBUG nova.scheduler.client.report [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.634 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.635 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.722 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.722 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.737 253665 INFO nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.755 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.852 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.853 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.853 253665 INFO nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Creating image(s)
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.874 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.897 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.917 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:46 compute-0 nova_compute[253661]: 2025-11-22 09:42:46.920 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.008 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.009 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.010 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.010 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.031 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.035 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.076 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Successfully updated port: 4a5edb92-edbe-4724-be56-e8dddc03d872 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.081 253665 DEBUG nova.policy [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:42:47 compute-0 ceph-mon[75021]: pgmap v2426: 305 pgs: 305 active+clean; 144 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.0 MiB/s wr, 24 op/s
Nov 22 09:42:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2263802408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.126 253665 DEBUG nova.compute.manager [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.126 253665 DEBUG nova.compute.manager [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing instance network info cache due to event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.127 253665 DEBUG oslo_concurrency.lockutils [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.127 253665 DEBUG oslo_concurrency.lockutils [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.127 253665 DEBUG nova.network.neutron [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing network info cache for port 4a5edb92-edbe-4724-be56-e8dddc03d872 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.325 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.389 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.430 253665 DEBUG nova.network.neutron [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.483 253665 DEBUG nova.objects.instance [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid 9712bbe7-5d4c-41ad-8725-d063d344ef31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.493 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.493 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Ensure instance console log exists: /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.494 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.494 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.494 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.692 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.713 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.713 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.715 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.716 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.716 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.745 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Successfully updated port: 738a955b-d3fe-4521-9c7f-f6ae50a9112e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.762 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.890 253665 DEBUG nova.network.neutron [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.905 253665 DEBUG oslo_concurrency.lockutils [req-d0d90949-0fff-4087-aea4-4a1962463b53 req-503aac50-f1db-4b6b-98af-22435ce4ec07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.906 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:47 compute-0 nova_compute[253661]: 2025-11-22 09:42:47.906 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:42:48 compute-0 nova_compute[253661]: 2025-11-22 09:42:48.080 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 167 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:42:48 compute-0 nova_compute[253661]: 2025-11-22 09:42:48.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:48 compute-0 nova_compute[253661]: 2025-11-22 09:42:48.319 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Successfully created port: 29eab695-fcfb-43d5-b708-0f720ee0fc39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:42:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:49 compute-0 ceph-mon[75021]: pgmap v2427: 305 pgs: 305 active+clean; 167 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.831 253665 DEBUG nova.network.neutron [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.839 253665 DEBUG nova.compute.manager [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-changed-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.839 253665 DEBUG nova.compute.manager [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing instance network info cache due to event network-changed-738a955b-d3fe-4521-9c7f-f6ae50a9112e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.840 253665 DEBUG oslo_concurrency.lockutils [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.850 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.851 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance network_info: |[{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.851 253665 DEBUG oslo_concurrency.lockutils [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.851 253665 DEBUG nova.network.neutron [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing network info cache for port 738a955b-d3fe-4521-9c7f-f6ae50a9112e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.855 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Start _get_guest_xml network_info=[{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.861 253665 WARNING nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.873 253665 DEBUG nova.virt.libvirt.host [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.875 253665 DEBUG nova.virt.libvirt.host [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.879 253665 DEBUG nova.virt.libvirt.host [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.880 253665 DEBUG nova.virt.libvirt.host [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.881 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.881 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.881 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.882 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.882 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.882 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.882 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.883 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.883 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.883 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.883 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.884 253665 DEBUG nova.virt.hardware [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:42:49 compute-0 nova_compute[253661]: 2025-11-22 09:42:49.887 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 196 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 32 op/s
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.140 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Successfully updated port: 29eab695-fcfb-43d5-b708-0f720ee0fc39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.155 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.155 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.156 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.266 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.266 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447495676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.348 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.383 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.387 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.427 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.486 253665 DEBUG nova.compute.manager [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.487 253665 DEBUG nova.compute.manager [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing instance network info cache due to event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.487 253665 DEBUG oslo_concurrency.lockutils [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540152902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.772 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881063278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.849 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.850 253665 DEBUG nova.virt.libvirt.vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.850 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.851 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.852 253665 DEBUG nova.virt.libvirt.vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.852 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.852 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.853 253665 DEBUG nova.objects.instance [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b40bebc-2343-478c-aacb-b4ae1fc87907 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.858 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.870 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <uuid>4b40bebc-2343-478c-aacb-b4ae1fc87907</uuid>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <name>instance-00000083</name>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1789709758</nova:name>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:42:49</nova:creationTime>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:port uuid="4a5edb92-edbe-4724-be56-e8dddc03d872">
Nov 22 09:42:50 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <nova:port uuid="738a955b-d3fe-4521-9c7f-f6ae50a9112e">
Nov 22 09:42:50 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fedf:4748" ipVersion="6"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fedf:4748" ipVersion="6"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <system>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="serial">4b40bebc-2343-478c-aacb-b4ae1fc87907</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="uuid">4b40bebc-2343-478c-aacb-b4ae1fc87907</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </system>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <os>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </os>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <features>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </features>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4b40bebc-2343-478c-aacb-b4ae1fc87907_disk">
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config">
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:50 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:24:bf:b2"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <target dev="tap4a5edb92-ed"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:df:47:48"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <target dev="tap738a955b-d3"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/console.log" append="off"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <video>
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </video>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:42:50 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:42:50 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:42:50 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:42:50 compute-0 nova_compute[253661]: </domain>
Nov 22 09:42:50 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.871 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Preparing to wait for external event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.871 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.871 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.872 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.872 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Preparing to wait for external event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.872 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.872 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.872 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.873 253665 DEBUG nova.virt.libvirt.vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.873 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.874 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.874 253665 DEBUG os_vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.875 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.875 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.878 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a5edb92-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.878 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a5edb92-ed, col_values=(('external_ids', {'iface-id': '4a5edb92-edbe-4724-be56-e8dddc03d872', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:bf:b2', 'vm-uuid': '4b40bebc-2343-478c-aacb-b4ae1fc87907'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 NetworkManager[48920]: <info>  [1763804570.8806] manager: (tap4a5edb92-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.887 253665 INFO os_vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed')
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.888 253665 DEBUG nova.virt.libvirt.vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.888 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.889 253665 DEBUG nova.network.os_vif_util [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.889 253665 DEBUG os_vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.892 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.892 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.894 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738a955b-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.894 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap738a955b-d3, col_values=(('external_ids', {'iface-id': '738a955b-d3fe-4521-9c7f-f6ae50a9112e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:47:48', 'vm-uuid': '4b40bebc-2343-478c-aacb-b4ae1fc87907'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 NetworkManager[48920]: <info>  [1763804570.8963] manager: (tap738a955b-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.904 253665 INFO os_vif [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3')
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.946 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.946 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.947 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:24:bf:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.947 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:df:47:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.947 253665 INFO nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Using config drive
Nov 22 09:42:50 compute-0 nova_compute[253661]: 2025-11-22 09:42:50.966 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.092 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.093 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.90663146972656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.093 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.093 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:51 compute-0 ceph-mon[75021]: pgmap v2428: 305 pgs: 305 active+clean; 196 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 32 op/s
Nov 22 09:42:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/447495676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2540152902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/881063278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.174 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d749b709-79b8-40b6-8c2e-4d301bdc8e67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.174 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4b40bebc-2343-478c-aacb-b4ae1fc87907 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.175 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9712bbe7-5d4c-41ad-8725-d063d344ef31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.175 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.175 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.423 253665 INFO nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Creating config drive at /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.428 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpigcf68w6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.465 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.466 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.498 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.572 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpigcf68w6" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.596 253665 DEBUG nova.storage.rbd_utils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.600 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.636 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594492684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.681 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.688 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.702 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.720 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.721 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.721 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.730 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.730 253665 INFO nova.compute.claims [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.774 253665 DEBUG oslo_concurrency.processutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config 4b40bebc-2343-478c-aacb-b4ae1fc87907_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.775 253665 INFO nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Deleting local config drive /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907/disk.config because it was imported into RBD.
Nov 22 09:42:51 compute-0 kernel: tap4a5edb92-ed: entered promiscuous mode
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.8453] manager: (tap4a5edb92-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01413|binding|INFO|Claiming lport 4a5edb92-edbe-4724-be56-e8dddc03d872 for this chassis.
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01414|binding|INFO|4a5edb92-edbe-4724-be56-e8dddc03d872: Claiming fa:16:3e:24:bf:b2 10.100.0.6
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.857 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bf:b2 10.100.0.6'], port_security=['fa:16:3e:24:bf:b2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b40bebc-2343-478c-aacb-b4ae1fc87907', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68715752-6987-44a9-a236-673079985d56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4a5edb92-edbe-4724-be56-e8dddc03d872) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.858 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4a5edb92-edbe-4724-be56-e8dddc03d872 in datapath b7054ec2-03d5-4428-a2b8-9c9905d4fcef bound to our chassis
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.859 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7054ec2-03d5-4428-a2b8-9c9905d4fcef
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.8652] manager: (tap738a955b-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Nov 22 09:42:51 compute-0 kernel: tap738a955b-d3: entered promiscuous mode
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01415|binding|INFO|Setting lport 4a5edb92-edbe-4724-be56-e8dddc03d872 ovn-installed in OVS
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01416|binding|INFO|Setting lport 4a5edb92-edbe-4724-be56-e8dddc03d872 up in Southbound
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.868 253665 DEBUG nova.network.neutron [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01417|if_status|INFO|Dropped 1 log messages in last 163 seconds (most recently, 163 seconds ago) due to excessive rate
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01418|if_status|INFO|Not updating pb chassis for 738a955b-d3fe-4521-9c7f-f6ae50a9112e now as sb is readonly
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01419|binding|INFO|Claiming lport 738a955b-d3fe-4521-9c7f-f6ae50a9112e for this chassis.
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01420|binding|INFO|738a955b-d3fe-4521-9c7f-f6ae50a9112e: Claiming fa:16:3e:df:47:48 2001:db8:0:1:f816:3eff:fedf:4748 2001:db8::f816:3eff:fedf:4748
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.873 253665 DEBUG nova.network.neutron [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updated VIF entry in instance network info cache for port 738a955b-d3fe-4521-9c7f-f6ae50a9112e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.874 253665 DEBUG nova.network.neutron [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[208c43ff-39b6-4f6f-bf5e-d81dc59184fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.881 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:47:48 2001:db8:0:1:f816:3eff:fedf:4748 2001:db8::f816:3eff:fedf:4748'], port_security=['fa:16:3e:df:47:48 2001:db8:0:1:f816:3eff:fedf:4748 2001:db8::f816:3eff:fedf:4748'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedf:4748/64 2001:db8::f816:3eff:fedf:4748/64', 'neutron:device_id': '4b40bebc-2343-478c-aacb-b4ae1fc87907', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28e6ccec-e6eb-4baf-91d9-14c9f27dcba7, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=738a955b-d3fe-4521-9c7f-f6ae50a9112e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:51 compute-0 systemd-udevd[389282]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01421|binding|INFO|Setting lport 738a955b-d3fe-4521-9c7f-f6ae50a9112e ovn-installed in OVS
Nov 22 09:42:51 compute-0 ovn_controller[152872]: 2025-11-22T09:42:51Z|01422|binding|INFO|Setting lport 738a955b-d3fe-4521-9c7f-f6ae50a9112e up in Southbound
Nov 22 09:42:51 compute-0 systemd-udevd[389284]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.891 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.891 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Instance network_info: |[{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.892 253665 DEBUG oslo_concurrency.lockutils [req-e47461cc-ebcc-466f-a87b-8f99b222a057 req-1b450912-083d-4ef5-8fa9-a56e8efddbbe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.893 253665 DEBUG oslo_concurrency.lockutils [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.893 253665 DEBUG nova.network.neutron [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.896 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Start _get_guest_xml network_info=[{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.9027] device (tap4a5edb92-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.9034] device (tap4a5edb92-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.9059] device (tap738a955b-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:51 compute-0 NetworkManager[48920]: <info>  [1763804571.9065] device (tap738a955b-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:51 compute-0 systemd-machined[215941]: New machine qemu-162-instance-00000083.
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.910 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:51 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000083.
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.918 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8d31f4-061b-4aa6-9a62-af8eb7159709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.921 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[524d64cb-7ef1-484c-a754-595a14f1e8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.945 253665 WARNING nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.951 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3c9c43-4550-479d-a3f1-1f10b74bf7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.962 253665 DEBUG nova.virt.libvirt.host [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.963 253665 DEBUG nova.virt.libvirt.host [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.968 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19980550-6589-400d-826d-12d1fd2f476f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7054ec2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:e8:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738602, 'reachable_time': 21831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389296, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.967 253665 DEBUG nova.virt.libvirt.host [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.968 253665 DEBUG nova.virt.libvirt.host [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.968 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.969 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.969 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.969 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.969 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.969 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.970 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.970 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.970 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.970 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.970 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.971 253665 DEBUG nova.virt.hardware [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:42:51 compute-0 nova_compute[253661]: 2025-11-22 09:42:51.974 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[52fe496c-0504-4ed9-983a-001d15a2b81b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7054ec2-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738616, 'tstamp': 738616}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389301, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7054ec2-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738619, 'tstamp': 738619}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389301, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7054ec2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.990 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7054ec2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.991 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.991 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7054ec2-00, col_values=(('external_ids', {'iface-id': 'd0d06639-6ce0-40be-a634-96c5ecf3d6fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.991 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.992 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 738a955b-d3fe-4521-9c7f-f6ae50a9112e in datapath 14f6914f-16a1-4223-85f8-aa4fada62acd unbound from our chassis
Nov 22 09:42:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:51.994 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f6914f-16a1-4223-85f8-aa4fada62acd
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.010 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[52be378b-6fe6-4ffe-b93b-70d3ed6e5aaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.053 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[881c94a6-aa15-4982-8607-0821f5aa73be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.062 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d7aad5b4-2eb0-4c50-981e-2eb8fcfafb4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.096 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9c96ee-c2f5-48fc-bbd5-86f88805da2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.099 253665 DEBUG nova.compute.manager [req-9b97df9a-3299-4a60-a592-b6c9d2f49432 req-bc574148-6f90-431c-9f2a-40383ac6a84f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.099 253665 DEBUG oslo_concurrency.lockutils [req-9b97df9a-3299-4a60-a592-b6c9d2f49432 req-bc574148-6f90-431c-9f2a-40383ac6a84f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.100 253665 DEBUG oslo_concurrency.lockutils [req-9b97df9a-3299-4a60-a592-b6c9d2f49432 req-bc574148-6f90-431c-9f2a-40383ac6a84f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.100 253665 DEBUG oslo_concurrency.lockutils [req-9b97df9a-3299-4a60-a592-b6c9d2f49432 req-bc574148-6f90-431c-9f2a-40383ac6a84f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.100 253665 DEBUG nova.compute.manager [req-9b97df9a-3299-4a60-a592-b6c9d2f49432 req-bc574148-6f90-431c-9f2a-40383ac6a84f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Processing event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56b561ba-dd1b-4885-bc8c-b560dfc65c9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f6914f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:46:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738694, 'reachable_time': 32374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389328, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[967fcff8-d2b4-481e-8fbb-487b3eb3ec3e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14f6914f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738706, 'tstamp': 738706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389330, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f6914f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3594492684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f6914f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f6914f-10, col_values=(('external_ids', {'iface-id': 'ef3b4cae-0bef-4b3d-812b-86120687d0c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:52.148 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:42:52
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'vms', '.rgw.root', '.mgr']
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:42:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:42:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195822993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.401 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.405 253665 DEBUG nova.compute.provider_tree [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.423 253665 DEBUG nova.scheduler.client.report [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.456 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.457 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.507 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.508 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.525 253665 INFO nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:42:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179886045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.542 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.555 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.577 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.581 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.632 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804572.5751805, 4b40bebc-2343-478c-aacb-b4ae1fc87907 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.633 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] VM Started (Lifecycle Event)
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.640 253665 DEBUG nova.compute.manager [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.640 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.640 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG nova.compute.manager [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Processing event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG nova.compute.manager [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.641 253665 DEBUG oslo_concurrency.lockutils [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.642 253665 DEBUG nova.compute.manager [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.642 253665 WARNING nova.compute.manager [req-55370db8-25a7-4047-b2fd-50d58dcd1888 req-bf84588a-225f-4041-b11d-9ddfc1c67eac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received unexpected event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 for instance with vm_state building and task_state spawning.
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.644 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.648 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.652 253665 INFO nova.virt.libvirt.driver [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance spawned successfully.
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.652 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.665 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.671 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.675 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.676 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.676 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.676 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.677 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.677 253665 DEBUG nova.virt.libvirt.driver [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.698 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.699 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.700 253665 INFO nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Creating image(s)
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.724 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:42:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.757 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.777 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.783 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.824 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.825 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804572.5754926, 4b40bebc-2343-478c-aacb-b4ae1fc87907 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] VM Paused (Lifecycle Event)
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.829 253665 INFO nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Took 10.62 seconds to spawn the instance on the hypervisor.
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.829 253665 DEBUG nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.862 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.863 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.863 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.864 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.886 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.893 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.934 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.943 253665 INFO nova.compute.manager [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Took 11.58 seconds to build instance.
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.945 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804572.6472523, 4b40bebc-2343-478c-aacb-b4ae1fc87907 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] VM Resumed (Lifecycle Event)
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.960 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.964 253665 DEBUG oslo_concurrency.lockutils [None req-e61eb785-752e-4431-8df3-1e7849304af3 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:52 compute-0 nova_compute[253661]: 2025-11-22 09:42:52.965 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054778596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.085 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.087 253665 DEBUG nova.virt.libvirt.vif [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=132,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-j0u3e3ck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:46Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=9712bbe7-5d4c-41ad-8725-d063d344ef31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.088 253665 DEBUG nova.network.os_vif_util [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.089 253665 DEBUG nova.network.os_vif_util [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.094 253665 DEBUG nova.objects.instance [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 9712bbe7-5d4c-41ad-8725-d063d344ef31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.109 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <uuid>9712bbe7-5d4c-41ad-8725-d063d344ef31</uuid>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <name>instance-00000084</name>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792</nova:name>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:42:51</nova:creationTime>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <nova:port uuid="29eab695-fcfb-43d5-b708-0f720ee0fc39">
Nov 22 09:42:53 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <system>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="serial">9712bbe7-5d4c-41ad-8725-d063d344ef31</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="uuid">9712bbe7-5d4c-41ad-8725-d063d344ef31</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </system>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <os>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </os>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <features>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </features>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9712bbe7-5d4c-41ad-8725-d063d344ef31_disk">
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config">
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:53 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:c1:cf:93"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <target dev="tap29eab695-fc"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/console.log" append="off"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <video>
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </video>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:42:53 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:42:53 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:42:53 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:42:53 compute-0 nova_compute[253661]: </domain>
Nov 22 09:42:53 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.109 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Preparing to wait for external event network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.109 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.110 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.110 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.110 253665 DEBUG nova.virt.libvirt.vif [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=132,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-j0u3e3ck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:46Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=9712bbe7-5d4c-41ad-8725-d063d344ef31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.111 253665 DEBUG nova.network.os_vif_util [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.111 253665 DEBUG nova.network.os_vif_util [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.112 253665 DEBUG os_vif [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.112 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.113 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.116 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29eab695-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.116 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29eab695-fc, col_values=(('external_ids', {'iface-id': '29eab695-fcfb-43d5-b708-0f720ee0fc39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:cf:93', 'vm-uuid': '9712bbe7-5d4c-41ad-8725-d063d344ef31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.120 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:53 compute-0 NetworkManager[48920]: <info>  [1763804573.1216] manager: (tap29eab695-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.133 253665 INFO os_vif [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc')
Nov 22 09:42:53 compute-0 ceph-mon[75021]: pgmap v2429: 305 pgs: 305 active+clean; 213 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 09:42:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1195822993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:42:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1179886045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2054778596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.189 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.189 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.189 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:c1:cf:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.190 253665 INFO nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Using config drive
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.210 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.218 253665 DEBUG nova.policy [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.221 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.299 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.422 253665 DEBUG nova.objects.instance [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.458 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.459 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Ensure instance console log exists: /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.459 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.459 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.460 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.806 253665 INFO nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Creating config drive at /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.815 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1qu_j45 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.947 253665 DEBUG nova.network.neutron [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updated VIF entry in instance network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.948 253665 DEBUG nova.network.neutron [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.963 253665 DEBUG oslo_concurrency.lockutils [req-dbe84ea4-e0e7-4537-8010-670fc285bf8e req-fa9a0149-86aa-48ce-b02c-7079ae733914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:53 compute-0 nova_compute[253661]: 2025-11-22 09:42:53.978 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1qu_j45" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.005 253665 DEBUG nova.storage.rbd_utils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.011 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 241 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 4.7 MiB/s wr, 89 op/s
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.242 253665 DEBUG nova.compute.manager [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.243 253665 DEBUG oslo_concurrency.lockutils [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.243 253665 DEBUG oslo_concurrency.lockutils [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.244 253665 DEBUG oslo_concurrency.lockutils [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.244 253665 DEBUG nova.compute.manager [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.245 253665 WARNING nova.compute.manager [req-fde51936-7005-4d2f-bb6f-ae28bcd83a74 req-89dc76b0-dbc4-469f-a5f6-0d7148504792 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received unexpected event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e for instance with vm_state active and task_state None.
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.550 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Successfully created port: 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.722 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.771 253665 DEBUG oslo_concurrency.processutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config 9712bbe7-5d4c-41ad-8725-d063d344ef31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.772 253665 INFO nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Deleting local config drive /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31/disk.config because it was imported into RBD.
Nov 22 09:42:54 compute-0 kernel: tap29eab695-fc: entered promiscuous mode
Nov 22 09:42:54 compute-0 ovn_controller[152872]: 2025-11-22T09:42:54Z|01423|binding|INFO|Claiming lport 29eab695-fcfb-43d5-b708-0f720ee0fc39 for this chassis.
Nov 22 09:42:54 compute-0 ovn_controller[152872]: 2025-11-22T09:42:54Z|01424|binding|INFO|29eab695-fcfb-43d5-b708-0f720ee0fc39: Claiming fa:16:3e:c1:cf:93 10.100.0.3
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:54 compute-0 NetworkManager[48920]: <info>  [1763804574.8379] manager: (tap29eab695-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.842 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:93 10.100.0.3'], port_security=['fa:16:3e:c1:cf:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9712bbe7-5d4c-41ad-8725-d063d344ef31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3612e32-3934-4c5d-9617-e49cc61dda86 ec2b4fe9-a0a4-4ce7-b67b-b101ede1b3af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8d1ab5-5912-4930-8d33-fd5a1b07be2a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=29eab695-fcfb-43d5-b708-0f720ee0fc39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.844 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 29eab695-fcfb-43d5-b708-0f720ee0fc39 in datapath d0ae8a74-13a0-46e6-ad55-3404cb0e971f bound to our chassis
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.846 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0ae8a74-13a0-46e6-ad55-3404cb0e971f
Nov 22 09:42:54 compute-0 NetworkManager[48920]: <info>  [1763804574.8485] device (tap29eab695-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:54 compute-0 ovn_controller[152872]: 2025-11-22T09:42:54Z|01425|binding|INFO|Setting lport 29eab695-fcfb-43d5-b708-0f720ee0fc39 up in Southbound
Nov 22 09:42:54 compute-0 ovn_controller[152872]: 2025-11-22T09:42:54Z|01426|binding|INFO|Setting lport 29eab695-fcfb-43d5-b708-0f720ee0fc39 ovn-installed in OVS
Nov 22 09:42:54 compute-0 nova_compute[253661]: 2025-11-22 09:42:54.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:54 compute-0 NetworkManager[48920]: <info>  [1763804574.8553] device (tap29eab695-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5d93b0-73f4-4c6d-9e8c-e939663e9fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.866 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0ae8a74-11 in ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.868 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0ae8a74-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c171c1-74cb-440a-9829-3905510eccd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db710165-dccb-41fe-913a-00d1a7cb77ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.888 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b14573-d1c5-4396-9b1a-3dfb4e1cdb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 systemd-machined[215941]: New machine qemu-163-instance-00000084.
Nov 22 09:42:54 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000084.
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea61d2a-b085-4e16-a5a7-005319b0d9e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.948 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[68ba06bb-0c02-40d1-bf6d-1873cae29fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:54.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c85a513-f92b-4223-9a9d-db4014c51716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:54 compute-0 systemd-udevd[389679]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:54 compute-0 NetworkManager[48920]: <info>  [1763804574.9564] manager: (tapd0ae8a74-10): new Veth device (/org/freedesktop/NetworkManager/Devices/581)
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.002 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6aec4e2-e381-49a5-9cfb-39da9c72347c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.006 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[673b03f2-044a-4e3f-afb7-8fe96577a55c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 NetworkManager[48920]: <info>  [1763804575.0320] device (tapd0ae8a74-10): carrier: link connected
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.040 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[47d44490-4160-47d3-8a8b-5ce51531d3c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.061 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08a78af9-b6fd-4478-b0d8-447005cee6cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ae8a74-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d3:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742577, 'reachable_time': 43515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389709, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.085 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[160c92f8-f1fe-4d20-b177-06f38beff97d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:d358'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742577, 'tstamp': 742577}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389710, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.114 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40924527-fa7b-46a4-8a22-40ff9488a490]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ae8a74-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d3:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742577, 'reachable_time': 43515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389711, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc681f8-109b-4a6d-87f9-f12ff88c56d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1942de3-536e-4aa9-becd-6b4580b6813e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.223 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ae8a74-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.224 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.224 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0ae8a74-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.225 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:55 compute-0 NetworkManager[48920]: <info>  [1763804575.2266] manager: (tapd0ae8a74-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Nov 22 09:42:55 compute-0 kernel: tapd0ae8a74-10: entered promiscuous mode
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.229 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0ae8a74-10, col_values=(('external_ids', {'iface-id': 'a6ff8b0d-d874-4d01-a908-4435ceea7174'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:55 compute-0 ceph-mon[75021]: pgmap v2430: 305 pgs: 305 active+clean; 241 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 4.7 MiB/s wr, 89 op/s
Nov 22 09:42:55 compute-0 ovn_controller[152872]: 2025-11-22T09:42:55Z|01427|binding|INFO|Releasing lport a6ff8b0d-d874-4d01-a908-4435ceea7174 from this chassis (sb_readonly=0)
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.245 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0ae8a74-13a0-46e6-ad55-3404cb0e971f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0ae8a74-13a0-46e6-ad55-3404cb0e971f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a3db29-2c0f-46f9-859f-f154998ca37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.248 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-d0ae8a74-13a0-46e6-ad55-3404cb0e971f
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/d0ae8a74-13a0-46e6-ad55-3404cb0e971f.pid.haproxy
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID d0ae8a74-13a0-46e6-ad55-3404cb0e971f
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:42:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:55.249 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'env', 'PROCESS_TAG=haproxy-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0ae8a74-13a0-46e6-ad55-3404cb0e971f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.469 253665 DEBUG nova.compute.manager [req-d8720ea4-c8d7-4408-b80f-781d3ca58785 req-faae522e-2d62-4993-8566-c032f3a11f2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.474 253665 DEBUG oslo_concurrency.lockutils [req-d8720ea4-c8d7-4408-b80f-781d3ca58785 req-faae522e-2d62-4993-8566-c032f3a11f2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.474 253665 DEBUG oslo_concurrency.lockutils [req-d8720ea4-c8d7-4408-b80f-781d3ca58785 req-faae522e-2d62-4993-8566-c032f3a11f2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.474 253665 DEBUG oslo_concurrency.lockutils [req-d8720ea4-c8d7-4408-b80f-781d3ca58785 req-faae522e-2d62-4993-8566-c032f3a11f2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.475 253665 DEBUG nova.compute.manager [req-d8720ea4-c8d7-4408-b80f-781d3ca58785 req-faae522e-2d62-4993-8566-c032f3a11f2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Processing event network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.549 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804575.5489767, 9712bbe7-5d4c-41ad-8725-d063d344ef31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.550 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] VM Started (Lifecycle Event)
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.553 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.556 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.560 253665 INFO nova.virt.libvirt.driver [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Instance spawned successfully.
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.560 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.576 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.580 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.581 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.582 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.582 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.583 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.583 253665 DEBUG nova.virt.libvirt.driver [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.617 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.619 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804575.5492435, 9712bbe7-5d4c-41ad-8725-d063d344ef31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.619 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] VM Paused (Lifecycle Event)
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.638 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.641 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804575.5559385, 9712bbe7-5d4c-41ad-8725-d063d344ef31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.642 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] VM Resumed (Lifecycle Event)
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.647 253665 INFO nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Took 8.79 seconds to spawn the instance on the hypervisor.
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.648 253665 DEBUG nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.667 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.670 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.692 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.705 253665 INFO nova.compute.manager [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Took 9.73 seconds to build instance.
Nov 22 09:42:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:42:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:42:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:42:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:42:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.722 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Successfully updated port: 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.725 253665 DEBUG oslo_concurrency.lockutils [None req-209518ec-6a87-454b-a8a0-4c6d491ef6fc 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:55 compute-0 podman[389782]: 2025-11-22 09:42:55.728870947 +0000 UTC m=+0.106827819 container create 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:42:55 compute-0 podman[389782]: 2025-11-22 09:42:55.64584628 +0000 UTC m=+0.023803182 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.741 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.742 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.742 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:42:55 compute-0 systemd[1]: Started libpod-conmon-0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa.scope.
Nov 22 09:42:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5ec8a4c513b2bd8a996899fae508998ec3f845793f58606bdbbc7e9e1913a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:55 compute-0 podman[389782]: 2025-11-22 09:42:55.875737641 +0000 UTC m=+0.253694523 container init 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:42:55 compute-0 podman[389782]: 2025-11-22 09:42:55.881089262 +0000 UTC m=+0.259046144 container start 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:42:55 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [NOTICE]   (389801) : New worker (389803) forked
Nov 22 09:42:55 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [NOTICE]   (389801) : Loading success.
Nov 22 09:42:55 compute-0 nova_compute[253661]: 2025-11-22 09:42:55.961 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 241 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.7 MiB/s wr, 65 op/s
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.346 253665 DEBUG nova.compute.manager [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.346 253665 DEBUG nova.compute.manager [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing instance network info cache due to event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.347 253665 DEBUG oslo_concurrency.lockutils [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.347 253665 DEBUG oslo_concurrency.lockutils [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.347 253665 DEBUG nova.network.neutron [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing network info cache for port 4a5edb92-edbe-4724-be56-e8dddc03d872 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.492 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:42:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.879 253665 DEBUG nova.network.neutron [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updating instance_info_cache with network_info: [{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.902 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.902 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Instance network_info: |[{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.904 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Start _get_guest_xml network_info=[{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.908 253665 WARNING nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.912 253665 DEBUG nova.virt.libvirt.host [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.912 253665 DEBUG nova.virt.libvirt.host [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.914 253665 DEBUG nova.virt.libvirt.host [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.915 253665 DEBUG nova.virt.libvirt.host [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.915 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.915 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.915 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.916 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.917 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.917 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.917 253665 DEBUG nova.virt.hardware [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:42:56 compute-0 nova_compute[253661]: 2025-11-22 09:42:56.920 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:57 compute-0 ceph-mon[75021]: pgmap v2431: 305 pgs: 305 active+clean; 241 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.7 MiB/s wr, 65 op/s
Nov 22 09:42:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606231917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.377 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.410 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.414 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.572 253665 DEBUG nova.compute.manager [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.573 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.573 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.573 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.573 253665 DEBUG nova.compute.manager [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] No waiting events found dispatching network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.574 253665 WARNING nova.compute.manager [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received unexpected event network-vif-plugged-29eab695-fcfb-43d5-b708-0f720ee0fc39 for instance with vm_state active and task_state None.
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.574 253665 DEBUG nova.compute.manager [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.574 253665 DEBUG nova.compute.manager [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing instance network info cache due to event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.574 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.574 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.575 253665 DEBUG nova.network.neutron [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:42:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:42:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482574095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.858 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.859 253665 DEBUG nova.virt.libvirt.vif [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-137504459',display_name='tempest-TestNetworkBasicOps-server-137504459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-137504459',id=133,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0sSnVHjb+SQP1ADA0KsoySjkYY+IUgWL14mgtXY5I/aEMvXXVftTgtxL3iE8eHbToFcNOq74CZwmpR/oAhD4Mbp1BIO3vvtKF+P4z2mX1GqFns7/nhAJwsX+/x8Yg04A==',key_name='tempest-TestNetworkBasicOps-2036456400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-r2vzehoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:52Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=61d505d6-e331-4728-9e1b-ffbb4cc5ea50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.859 253665 DEBUG nova.network.os_vif_util [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.860 253665 DEBUG nova.network.os_vif_util [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.861 253665 DEBUG nova.objects.instance [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.875 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <uuid>61d505d6-e331-4728-9e1b-ffbb4cc5ea50</uuid>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <name>instance-00000085</name>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-137504459</nova:name>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:42:56</nova:creationTime>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <nova:port uuid="0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e">
Nov 22 09:42:57 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <system>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="serial">61d505d6-e331-4728-9e1b-ffbb4cc5ea50</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="uuid">61d505d6-e331-4728-9e1b-ffbb4cc5ea50</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </system>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <os>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </os>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <features>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </features>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk">
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config">
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </source>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:42:57 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:43:1d:c2"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <target dev="tap0a0c4a1b-54"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/console.log" append="off"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <video>
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </video>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:42:57 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:42:57 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:42:57 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:42:57 compute-0 nova_compute[253661]: </domain>
Nov 22 09:42:57 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.875 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Preparing to wait for external event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.875 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.875 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.876 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.876 253665 DEBUG nova.virt.libvirt.vif [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:42:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-137504459',display_name='tempest-TestNetworkBasicOps-server-137504459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-137504459',id=133,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0sSnVHjb+SQP1ADA0KsoySjkYY+IUgWL14mgtXY5I/aEMvXXVftTgtxL3iE8eHbToFcNOq74CZwmpR/oAhD4Mbp1BIO3vvtKF+P4z2mX1GqFns7/nhAJwsX+/x8Yg04A==',key_name='tempest-TestNetworkBasicOps-2036456400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-r2vzehoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:42:52Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=61d505d6-e331-4728-9e1b-ffbb4cc5ea50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.876 253665 DEBUG nova.network.os_vif_util [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.877 253665 DEBUG nova.network.os_vif_util [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.878 253665 DEBUG os_vif [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.879 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.879 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.881 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a0c4a1b-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.881 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a0c4a1b-54, col_values=(('external_ids', {'iface-id': '0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:1d:c2', 'vm-uuid': '61d505d6-e331-4728-9e1b-ffbb4cc5ea50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:57 compute-0 NetworkManager[48920]: <info>  [1763804577.8836] manager: (tap0a0c4a1b-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.890 253665 INFO os_vif [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54')
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.960 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.960 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.960 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:43:1d:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.961 253665 INFO nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Using config drive
Nov 22 09:42:57 compute-0 nova_compute[253661]: 2025-11-22 09:42:57.984 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.038 253665 DEBUG nova.network.neutron [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updated VIF entry in instance network info cache for port 4a5edb92-edbe-4724-be56-e8dddc03d872. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.039 253665 DEBUG nova.network.neutron [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.069 253665 DEBUG oslo_concurrency.lockutils [req-94fa9f27-3ea9-4a4d-9b46-823cd34165ae req-2d38ea87-584b-4472-be46-cf2c0fdd0b93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 260 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 22 09:42:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/606231917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2482574095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.483 253665 INFO nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Creating config drive at /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.487 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoerlm3x9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.632 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoerlm3x9" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.672 253665 DEBUG nova.storage.rbd_utils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.680 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.870 253665 DEBUG oslo_concurrency.processutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config 61d505d6-e331-4728-9e1b-ffbb4cc5ea50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.871 253665 INFO nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Deleting local config drive /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50/disk.config because it was imported into RBD.
Nov 22 09:42:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:42:58 compute-0 kernel: tap0a0c4a1b-54: entered promiscuous mode
Nov 22 09:42:58 compute-0 NetworkManager[48920]: <info>  [1763804578.9390] manager: (tap0a0c4a1b-54): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:58 compute-0 ovn_controller[152872]: 2025-11-22T09:42:58Z|01428|binding|INFO|Claiming lport 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e for this chassis.
Nov 22 09:42:58 compute-0 ovn_controller[152872]: 2025-11-22T09:42:58Z|01429|binding|INFO|0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e: Claiming fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.957 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:1d:c2 10.100.0.12'], port_security=['fa:16:3e:43:1d:c2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '61d505d6-e331-4728-9e1b-ffbb4cc5ea50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea98697b-6736-4594-9eb4-6b1b64d10480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '962f3c58-c766-4f2e-8de6-38606feaf786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37a839c5-b437-4f74-8352-23c47a310399, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.958 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e in datapath ea98697b-6736-4594-9eb4-6b1b64d10480 bound to our chassis
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.960 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea98697b-6736-4594-9eb4-6b1b64d10480
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:58 compute-0 ovn_controller[152872]: 2025-11-22T09:42:58Z|01430|binding|INFO|Setting lport 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e ovn-installed in OVS
Nov 22 09:42:58 compute-0 ovn_controller[152872]: 2025-11-22T09:42:58Z|01431|binding|INFO|Setting lport 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e up in Southbound
Nov 22 09:42:58 compute-0 nova_compute[253661]: 2025-11-22 09:42:58.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.976 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14a2bd84-fa55-4cd9-9afc-f3dcaeaf1670]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.978 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea98697b-61 in ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.980 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea98697b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[601daef1-202a-486e-8346-ff53b2b38272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9bde04-0e87-4056-a535-2143642dcbed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:58.997 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa3431c-3476-4c47-b715-ae5d72ce6457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 systemd-machined[215941]: New machine qemu-164-instance-00000085.
Nov 22 09:42:59 compute-0 systemd-udevd[389978]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:42:59 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000085.
Nov 22 09:42:59 compute-0 NetworkManager[48920]: <info>  [1763804579.0349] device (tap0a0c4a1b-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6afff527-d622-4d16-abef-c2d9fa82adf5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 NetworkManager[48920]: <info>  [1763804579.0360] device (tap0a0c4a1b-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:42:59 compute-0 podman[389944]: 2025-11-22 09:42:59.049065961 +0000 UTC m=+0.091113934 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:42:59 compute-0 podman[389946]: 2025-11-22 09:42:59.068727021 +0000 UTC m=+0.103238551 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.072 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[04d662fd-b1f6-4030-88e1-da18e6045295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 NetworkManager[48920]: <info>  [1763804579.0796] manager: (tapea98697b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.078 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a444dda9-9dfa-46b5-9910-a966abd36ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.131 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[144f9b82-94ba-44d5-98c3-6b23b8cb8431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.139 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[559b5428-d26c-405d-8be3-ed3ef754abca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 NetworkManager[48920]: <info>  [1763804579.1650] device (tapea98697b-60): carrier: link connected
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.176 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2f83c8-3229-42ff-9734-bd5f35518cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9172e6-a344-485f-bafb-6994e8ea9e42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea98697b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:82:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742990, 'reachable_time': 38226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390017, 'error': None, 'target': 'ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5000986e-3440-4aed-90d4-4f0700a6cd77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:828a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742990, 'tstamp': 742990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390026, 'error': None, 'target': 'ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[073f8b10-062f-4799-a6c5-79771d9b070a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea98697b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:82:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742990, 'reachable_time': 38226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390037, 'error': None, 'target': 'ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ceph-mon[75021]: pgmap v2432: 305 pgs: 305 active+clean; 260 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.295 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd708881-35b8-461f-9743-95613f82f56a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.321 253665 DEBUG nova.compute.manager [req-3f004c7b-8572-4c1a-9ccc-e17464fa549c req-79909733-9d43-4e44-b867-ff32b8cd1a8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.321 253665 DEBUG oslo_concurrency.lockutils [req-3f004c7b-8572-4c1a-9ccc-e17464fa549c req-79909733-9d43-4e44-b867-ff32b8cd1a8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.322 253665 DEBUG oslo_concurrency.lockutils [req-3f004c7b-8572-4c1a-9ccc-e17464fa549c req-79909733-9d43-4e44-b867-ff32b8cd1a8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.322 253665 DEBUG oslo_concurrency.lockutils [req-3f004c7b-8572-4c1a-9ccc-e17464fa549c req-79909733-9d43-4e44-b867-ff32b8cd1a8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.322 253665 DEBUG nova.compute.manager [req-3f004c7b-8572-4c1a-9ccc-e17464fa549c req-79909733-9d43-4e44-b867-ff32b8cd1a8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Processing event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.380 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4ff6cb-20c5-42a9-adb8-3b20ce4a497f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.382 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea98697b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.382 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.382 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea98697b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:59 compute-0 NetworkManager[48920]: <info>  [1763804579.4039] manager: (tapea98697b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Nov 22 09:42:59 compute-0 kernel: tapea98697b-60: entered promiscuous mode
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.411 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea98697b-60, col_values=(('external_ids', {'iface-id': '2bdd8ca7-1bc2-4ebc-889b-cc94dc3719d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:42:59 compute-0 ovn_controller[152872]: 2025-11-22T09:42:59Z|01432|binding|INFO|Releasing lport 2bdd8ca7-1bc2-4ebc-889b-cc94dc3719d5 from this chassis (sb_readonly=0)
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.414 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.422 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804579.4222565, 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.422 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] VM Started (Lifecycle Event)
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.425 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.433 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea98697b-6736-4594-9eb4-6b1b64d10480.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea98697b-6736-4594-9eb4-6b1b64d10480.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.434 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a4172a-40c2-405f-b3ce-9a04baee6340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.435 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ea98697b-6736-4594-9eb4-6b1b64d10480
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ea98697b-6736-4594-9eb4-6b1b64d10480.pid.haproxy
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ea98697b-6736-4594-9eb4-6b1b64d10480
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:42:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:42:59.438 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480', 'env', 'PROCESS_TAG=haproxy-ea98697b-6736-4594-9eb4-6b1b64d10480', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea98697b-6736-4594-9eb4-6b1b64d10480.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.439 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.442 253665 INFO nova.virt.libvirt.driver [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Instance spawned successfully.
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.442 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.444 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.463 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.463 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804579.432791, 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.463 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] VM Paused (Lifecycle Event)
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.469 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.469 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.469 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.470 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.470 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.470 253665 DEBUG nova.virt.libvirt.driver [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.488 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.493 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804579.4328437, 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.494 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] VM Resumed (Lifecycle Event)
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.525 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.531 253665 INFO nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Took 6.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.532 253665 DEBUG nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.558 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.599 253665 DEBUG nova.network.neutron [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updated VIF entry in instance network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.600 253665 DEBUG nova.network.neutron [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updating instance_info_cache with network_info: [{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.614 253665 INFO nova.compute.manager [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Took 8.06 seconds to build instance.
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.622 253665 DEBUG oslo_concurrency.lockutils [req-4e8e7dd5-2268-40f5-9996-c030faa40c76 req-241778b9-4d7a-4753-8145-2c83e3a50100 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:42:59 compute-0 nova_compute[253661]: 2025-11-22 09:42:59.637 253665 DEBUG oslo_concurrency.lockutils [None req-e921f3bd-6efd-48e9-bb15-e0006274fd8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:42:59 compute-0 podman[390093]: 2025-11-22 09:42:59.849566482 +0000 UTC m=+0.048302201 container create 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:42:59 compute-0 systemd[1]: Started libpod-conmon-01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e.scope.
Nov 22 09:42:59 compute-0 podman[390093]: 2025-11-22 09:42:59.823201298 +0000 UTC m=+0.021937037 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:42:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04be2e58273d601a9518c176197d5ec84c157d1b7d1c4d4e9517b069d115691d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:42:59 compute-0 podman[390093]: 2025-11-22 09:42:59.961115454 +0000 UTC m=+0.159851203 container init 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:42:59 compute-0 podman[390093]: 2025-11-22 09:42:59.967370997 +0000 UTC m=+0.166106716 container start 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:42:59 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [NOTICE]   (390112) : New worker (390114) forked
Nov 22 09:42:59 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [NOTICE]   (390112) : Loading success.
Nov 22 09:43:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Nov 22 09:43:00 compute-0 nova_compute[253661]: 2025-11-22 09:43:00.440 253665 DEBUG nova.compute.manager [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:00 compute-0 nova_compute[253661]: 2025-11-22 09:43:00.440 253665 DEBUG nova.compute.manager [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing instance network info cache due to event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:00 compute-0 nova_compute[253661]: 2025-11-22 09:43:00.440 253665 DEBUG oslo_concurrency.lockutils [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:00 compute-0 nova_compute[253661]: 2025-11-22 09:43:00.440 253665 DEBUG oslo_concurrency.lockutils [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:00 compute-0 nova_compute[253661]: 2025-11-22 09:43:00.440 253665 DEBUG nova.network.neutron [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:01 compute-0 ceph-mon[75021]: pgmap v2433: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.423 253665 DEBUG nova.compute.manager [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.423 253665 DEBUG oslo_concurrency.lockutils [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.424 253665 DEBUG oslo_concurrency.lockutils [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.424 253665 DEBUG oslo_concurrency.lockutils [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.424 253665 DEBUG nova.compute.manager [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] No waiting events found dispatching network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.424 253665 WARNING nova.compute.manager [req-30851319-6421-4491-ab61-0f5ff7b3a02c req-70ba6078-aa3b-4235-8d6c-9a5bc6610380 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received unexpected event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e for instance with vm_state active and task_state None.
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.947 253665 DEBUG nova.network.neutron [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updated VIF entry in instance network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.947 253665 DEBUG nova.network.neutron [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:01 compute-0 nova_compute[253661]: 2025-11-22 09:43:01.975 253665 DEBUG oslo_concurrency.lockutils [req-0eb6c024-0483-4a2c-a566-fac713572ba6 req-ca7dfdf7-f31a-4117-8b99-c2b72ae813c7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 227 op/s
Nov 22 09:43:02 compute-0 nova_compute[253661]: 2025-11-22 09:43:02.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018022878410496842 of space, bias 1.0, pg target 0.5406863523149052 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:43:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:43:03 compute-0 ceph-mon[75021]: pgmap v2434: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 227 op/s
Nov 22 09:43:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 22 09:43:04 compute-0 nova_compute[253661]: 2025-11-22 09:43:04.327 253665 DEBUG nova.compute.manager [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:04 compute-0 nova_compute[253661]: 2025-11-22 09:43:04.328 253665 DEBUG nova.compute.manager [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing instance network info cache due to event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:04 compute-0 nova_compute[253661]: 2025-11-22 09:43:04.328 253665 DEBUG oslo_concurrency.lockutils [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:04 compute-0 nova_compute[253661]: 2025-11-22 09:43:04.328 253665 DEBUG oslo_concurrency.lockutils [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:04 compute-0 nova_compute[253661]: 2025-11-22 09:43:04.328 253665 DEBUG nova.network.neutron [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:04 compute-0 podman[390123]: 2025-11-22 09:43:04.417582535 +0000 UTC m=+0.102354580 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:43:05 compute-0 ceph-mon[75021]: pgmap v2435: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 22 09:43:05 compute-0 ovn_controller[152872]: 2025-11-22T09:43:05Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:bf:b2 10.100.0.6
Nov 22 09:43:05 compute-0 ovn_controller[152872]: 2025-11-22T09:43:05Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:bf:b2 10.100.0.6
Nov 22 09:43:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 682 KiB/s wr, 214 op/s
Nov 22 09:43:06 compute-0 nova_compute[253661]: 2025-11-22 09:43:06.136 253665 DEBUG nova.network.neutron [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updated VIF entry in instance network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:06 compute-0 nova_compute[253661]: 2025-11-22 09:43:06.137 253665 DEBUG nova.network.neutron [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updating instance_info_cache with network_info: [{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:06 compute-0 nova_compute[253661]: 2025-11-22 09:43:06.166 253665 DEBUG oslo_concurrency.lockutils [req-33765aa3-2c6e-4139-8bf3-d263aff07d39 req-8440d360-9a82-4b92-98d6-b6372efce777 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:06 compute-0 nova_compute[253661]: 2025-11-22 09:43:06.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:07 compute-0 ceph-mon[75021]: pgmap v2436: 305 pgs: 305 active+clean; 260 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 682 KiB/s wr, 214 op/s
Nov 22 09:43:07 compute-0 nova_compute[253661]: 2025-11-22 09:43:07.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 273 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.6 MiB/s wr, 255 op/s
Nov 22 09:43:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:09 compute-0 ovn_controller[152872]: 2025-11-22T09:43:09Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:cf:93 10.100.0.3
Nov 22 09:43:09 compute-0 ovn_controller[152872]: 2025-11-22T09:43:09Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:cf:93 10.100.0.3
Nov 22 09:43:09 compute-0 ceph-mon[75021]: pgmap v2437: 305 pgs: 305 active+clean; 273 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.6 MiB/s wr, 255 op/s
Nov 22 09:43:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 314 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 22 09:43:11 compute-0 ceph-mon[75021]: pgmap v2438: 305 pgs: 305 active+clean; 314 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 22 09:43:11 compute-0 nova_compute[253661]: 2025-11-22 09:43:11.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 330 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 210 op/s
Nov 22 09:43:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:43:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975548395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:43:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:43:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975548395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:43:12 compute-0 ovn_controller[152872]: 2025-11-22T09:43:12Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:43:12 compute-0 ovn_controller[152872]: 2025-11-22T09:43:12Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:43:12 compute-0 nova_compute[253661]: 2025-11-22 09:43:12.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:13 compute-0 ceph-mon[75021]: pgmap v2439: 305 pgs: 305 active+clean; 330 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 210 op/s
Nov 22 09:43:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3975548395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:43:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3975548395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:43:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 211 op/s
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.038 253665 DEBUG nova.compute.manager [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.038 253665 DEBUG nova.compute.manager [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing instance network info cache due to event network-changed-4a5edb92-edbe-4724-be56-e8dddc03d872. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.039 253665 DEBUG oslo_concurrency.lockutils [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.039 253665 DEBUG oslo_concurrency.lockutils [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.039 253665 DEBUG nova.network.neutron [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Refreshing network info cache for port 4a5edb92-edbe-4724-be56-e8dddc03d872 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.066 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.066 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.066 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.066 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.067 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.068 253665 INFO nova.compute.manager [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Terminating instance
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.068 253665 DEBUG nova.compute.manager [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:43:15 compute-0 kernel: tap4a5edb92-ed (unregistering): left promiscuous mode
Nov 22 09:43:15 compute-0 NetworkManager[48920]: <info>  [1763804595.1202] device (tap4a5edb92-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01433|binding|INFO|Releasing lport 4a5edb92-edbe-4724-be56-e8dddc03d872 from this chassis (sb_readonly=0)
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01434|binding|INFO|Setting lport 4a5edb92-edbe-4724-be56-e8dddc03d872 down in Southbound
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01435|binding|INFO|Removing iface tap4a5edb92-ed ovn-installed in OVS
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.147 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bf:b2 10.100.0.6'], port_security=['fa:16:3e:24:bf:b2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b40bebc-2343-478c-aacb-b4ae1fc87907', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68715752-6987-44a9-a236-673079985d56, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4a5edb92-edbe-4724-be56-e8dddc03d872) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.149 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4a5edb92-edbe-4724-be56-e8dddc03d872 in datapath b7054ec2-03d5-4428-a2b8-9c9905d4fcef unbound from our chassis
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.151 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7054ec2-03d5-4428-a2b8-9c9905d4fcef
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 kernel: tap738a955b-d3 (unregistering): left promiscuous mode
Nov 22 09:43:15 compute-0 NetworkManager[48920]: <info>  [1763804595.1633] device (tap738a955b-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a918aa0-4541-4ae9-8051-f30d0637f48d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01436|binding|INFO|Releasing lport 738a955b-d3fe-4521-9c7f-f6ae50a9112e from this chassis (sb_readonly=0)
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01437|binding|INFO|Setting lport 738a955b-d3fe-4521-9c7f-f6ae50a9112e down in Southbound
Nov 22 09:43:15 compute-0 ovn_controller[152872]: 2025-11-22T09:43:15Z|01438|binding|INFO|Removing iface tap738a955b-d3 ovn-installed in OVS
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.188 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.191 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:47:48 2001:db8:0:1:f816:3eff:fedf:4748 2001:db8::f816:3eff:fedf:4748'], port_security=['fa:16:3e:df:47:48 2001:db8:0:1:f816:3eff:fedf:4748 2001:db8::f816:3eff:fedf:4748'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedf:4748/64 2001:db8::f816:3eff:fedf:4748/64', 'neutron:device_id': '4b40bebc-2343-478c-aacb-b4ae1fc87907', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28e6ccec-e6eb-4baf-91d9-14c9f27dcba7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=738a955b-d3fe-4521-9c7f-f6ae50a9112e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.212 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc29a2a-611a-4d53-8b6f-55f1ef994186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.216 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd6fa29-a698-45d2-831b-245b2cfc669d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 22 09:43:15 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000083.scope: Consumed 13.835s CPU time.
Nov 22 09:43:15 compute-0 systemd-machined[215941]: Machine qemu-162-instance-00000083 terminated.
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.242 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa7b67f-a4ba-4725-b3de-d9cec590bda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.260 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af2309d3-3090-4830-a28c-9ecca4d29f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7054ec2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:e8:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738602, 'reachable_time': 21831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390166, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.281 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[05b2aed7-5f5b-4b86-b459-38dd347e38d2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7054ec2-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738616, 'tstamp': 738616}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390167, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7054ec2-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738619, 'tstamp': 738619}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390167, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.283 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7054ec2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7054ec2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.292 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.292 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7054ec2-00, col_values=(('external_ids', {'iface-id': 'd0d06639-6ce0-40be-a634-96c5ecf3d6fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.292 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.294 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 738a955b-d3fe-4521-9c7f-f6ae50a9112e in datapath 14f6914f-16a1-4223-85f8-aa4fada62acd unbound from our chassis
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.295 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f6914f-16a1-4223-85f8-aa4fada62acd
Nov 22 09:43:15 compute-0 NetworkManager[48920]: <info>  [1763804595.2978] manager: (tap738a955b-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.311 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d0b816-e533-4cbb-9c75-9a70d70c24ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.316 253665 INFO nova.virt.libvirt.driver [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Instance destroyed successfully.
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.317 253665 DEBUG nova.objects.instance [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 4b40bebc-2343-478c-aacb-b4ae1fc87907 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.328 253665 DEBUG nova.virt.libvirt.vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:52Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.328 253665 DEBUG nova.network.os_vif_util [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.328 253665 DEBUG nova.network.os_vif_util [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.329 253665 DEBUG os_vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.331 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a5edb92-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ceph-mon[75021]: pgmap v2440: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 211 op/s
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.341 253665 INFO os_vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bf:b2,bridge_name='br-int',has_traffic_filtering=True,id=4a5edb92-edbe-4724-be56-e8dddc03d872,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a5edb92-ed')
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.342 253665 DEBUG nova.virt.libvirt.vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1789709758',display_name='tempest-TestGettingAddress-server-1789709758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1789709758',id=131,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-xw3p0wga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:52Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=4b40bebc-2343-478c-aacb-b4ae1fc87907,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.343 253665 DEBUG nova.network.os_vif_util [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.343 253665 DEBUG nova.network.os_vif_util [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.344 253665 DEBUG os_vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.345 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738a955b-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.348 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.350 253665 INFO os_vif [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:47:48,bridge_name='br-int',has_traffic_filtering=True,id=738a955b-d3fe-4521-9c7f-f6ae50a9112e,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a955b-d3')
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.353 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[659a0481-8092-45fc-8f32-f5e8dad35179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.356 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e17d72ff-082e-48ca-acf0-e897661960e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.386 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3b8de2-7cc4-4387-b748-83b535a6dbe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.405 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73522bde-01a0-4715-9d08-2b98af658b65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f6914f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:46:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738694, 'reachable_time': 32374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390213, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc52469a-a177-4312-a19a-5507b392565b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14f6914f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738706, 'tstamp': 738706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390214, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.438 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f6914f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.440 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f6914f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.441 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.441 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f6914f-10, col_values=(('external_ids', {'iface-id': 'ef3b4cae-0bef-4b3d-812b-86120687d0c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:15.441 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.455 253665 DEBUG nova.compute.manager [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-unplugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.455 253665 DEBUG oslo_concurrency.lockutils [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.456 253665 DEBUG oslo_concurrency.lockutils [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.456 253665 DEBUG oslo_concurrency.lockutils [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.456 253665 DEBUG nova.compute.manager [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-unplugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.457 253665 DEBUG nova.compute.manager [req-11b16861-ff5f-4eab-a7b6-82dc2003017a req-1c76c419-1c47-4131-81c1-f73eb534a0b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-unplugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.782 253665 INFO nova.virt.libvirt.driver [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Deleting instance files /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907_del
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.783 253665 INFO nova.virt.libvirt.driver [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Deletion of /var/lib/nova/instances/4b40bebc-2343-478c-aacb-b4ae1fc87907_del complete
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.843 253665 INFO nova.compute.manager [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.844 253665 DEBUG oslo.service.loopingcall [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.845 253665 DEBUG nova.compute.manager [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:43:15 compute-0 nova_compute[253661]: 2025-11-22 09:43:15.845 253665 DEBUG nova.network.neutron [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:43:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 875 KiB/s rd, 6.3 MiB/s wr, 167 op/s
Nov 22 09:43:16 compute-0 nova_compute[253661]: 2025-11-22 09:43:16.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:16 compute-0 nova_compute[253661]: 2025-11-22 09:43:16.530 253665 DEBUG nova.network.neutron [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updated VIF entry in instance network info cache for port 4a5edb92-edbe-4724-be56-e8dddc03d872. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:16 compute-0 nova_compute[253661]: 2025-11-22 09:43:16.531 253665 DEBUG nova.network.neutron [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [{"id": "4a5edb92-edbe-4724-be56-e8dddc03d872", "address": "fa:16:3e:24:bf:b2", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a5edb92-ed", "ovs_interfaceid": "4a5edb92-edbe-4724-be56-e8dddc03d872", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "address": "fa:16:3e:df:47:48", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedf:4748", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a955b-d3", "ovs_interfaceid": "738a955b-d3fe-4521-9c7f-f6ae50a9112e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:16 compute-0 nova_compute[253661]: 2025-11-22 09:43:16.557 253665 DEBUG oslo_concurrency.lockutils [req-a18c6033-27cf-4246-bd7b-b6e20920d695 req-a5c4a804-67dc-45c6-ae6a-bed253dddcba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4b40bebc-2343-478c-aacb-b4ae1fc87907" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.081 253665 DEBUG nova.network.neutron [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.095 253665 INFO nova.compute.manager [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Took 1.25 seconds to deallocate network for instance.
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.134 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.135 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.163 253665 DEBUG nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-unplugged-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.164 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.164 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.165 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.165 253665 DEBUG nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-unplugged-4a5edb92-edbe-4724-be56-e8dddc03d872 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.165 253665 WARNING nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received unexpected event network-vif-unplugged-4a5edb92-edbe-4724-be56-e8dddc03d872 for instance with vm_state deleted and task_state None.
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.166 253665 DEBUG nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.166 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.166 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.167 253665 DEBUG oslo_concurrency.lockutils [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.167 253665 DEBUG nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.167 253665 WARNING nova.compute.manager [req-a31708a8-1d92-4ef8-8e71-0f0393377d8c req-55c4f9ec-6ccc-4683-927c-b522b0c67741 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received unexpected event network-vif-plugged-4a5edb92-edbe-4724-be56-e8dddc03d872 for instance with vm_state deleted and task_state None.
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.245 253665 DEBUG oslo_concurrency.processutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:17 compute-0 ceph-mon[75021]: pgmap v2441: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 875 KiB/s rd, 6.3 MiB/s wr, 167 op/s
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.617 253665 DEBUG nova.compute.manager [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.618 253665 DEBUG oslo_concurrency.lockutils [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.619 253665 DEBUG oslo_concurrency.lockutils [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.620 253665 DEBUG oslo_concurrency.lockutils [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.620 253665 DEBUG nova.compute.manager [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] No waiting events found dispatching network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.620 253665 WARNING nova.compute.manager [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received unexpected event network-vif-plugged-738a955b-d3fe-4521-9c7f-f6ae50a9112e for instance with vm_state deleted and task_state None.
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.621 253665 DEBUG nova.compute.manager [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-deleted-4a5edb92-edbe-4724-be56-e8dddc03d872 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.621 253665 DEBUG nova.compute.manager [req-319499ed-c23a-4bfd-8c8e-c4a7b89c094b req-aa0017aa-d1f5-447a-95cd-34614b925867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Received event network-vif-deleted-738a955b-d3fe-4521-9c7f-f6ae50a9112e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1459477527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.715 253665 DEBUG oslo_concurrency.processutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.722 253665 DEBUG nova.compute.provider_tree [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.734 253665 DEBUG nova.scheduler.client.report [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.754 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.779 253665 INFO nova.scheduler.client.report [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 4b40bebc-2343-478c-aacb-b4ae1fc87907
Nov 22 09:43:17 compute-0 nova_compute[253661]: 2025-11-22 09:43:17.839 253665 DEBUG oslo_concurrency.lockutils [None req-8fb4b469-10f1-4ac8-9939-c6c8a7da1680 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "4b40bebc-2343-478c-aacb-b4ae1fc87907" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 973 KiB/s rd, 6.4 MiB/s wr, 192 op/s
Nov 22 09:43:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1459477527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.198 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.199 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.200 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.200 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.201 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.202 253665 INFO nova.compute.manager [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Terminating instance
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.203 253665 DEBUG nova.compute.manager [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:43:19 compute-0 kernel: tap868d876c-3d (unregistering): left promiscuous mode
Nov 22 09:43:19 compute-0 NetworkManager[48920]: <info>  [1763804599.2660] device (tap868d876c-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01439|binding|INFO|Releasing lport 868d876c-3d4f-4618-aedd-e1ce97d50ae9 from this chassis (sb_readonly=0)
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01440|binding|INFO|Setting lport 868d876c-3d4f-4618-aedd-e1ce97d50ae9 down in Southbound
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01441|binding|INFO|Removing iface tap868d876c-3d ovn-installed in OVS
Nov 22 09:43:19 compute-0 kernel: tap005f7c99-8e (unregistering): left promiscuous mode
Nov 22 09:43:19 compute-0 NetworkManager[48920]: <info>  [1763804599.2868] device (tap005f7c99-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.289 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:9d:36 10.100.0.14'], port_security=['fa:16:3e:ec:9d:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68715752-6987-44a9-a236-673079985d56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=868d876c-3d4f-4618-aedd-e1ce97d50ae9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.290 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 868d876c-3d4f-4618-aedd-e1ce97d50ae9 in datapath b7054ec2-03d5-4428-a2b8-9c9905d4fcef unbound from our chassis
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.293 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7054ec2-03d5-4428-a2b8-9c9905d4fcef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.293 253665 DEBUG nova.compute.manager [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.293 253665 DEBUG nova.compute.manager [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing instance network info cache due to event network-changed-868d876c-3d4f-4618-aedd-e1ce97d50ae9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.294 253665 DEBUG oslo_concurrency.lockutils [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.294 253665 DEBUG oslo_concurrency.lockutils [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.294 253665 DEBUG nova.network.neutron [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Refreshing network info cache for port 868d876c-3d4f-4618-aedd-e1ce97d50ae9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7dab1028-b038-444d-8764-5bd18a47607e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.295 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef namespace which is not needed anymore
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01442|binding|INFO|Releasing lport 005f7c99-8e6b-4818-9749-1360f814f253 from this chassis (sb_readonly=0)
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01443|binding|INFO|Setting lport 005f7c99-8e6b-4818-9749-1360f814f253 down in Southbound
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|01444|binding|INFO|Removing iface tap005f7c99-8e ovn-installed in OVS
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.320 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:1b:ad 2001:db8:0:1:f816:3eff:fe16:1bad 2001:db8::f816:3eff:fe16:1bad'], port_security=['fa:16:3e:16:1b:ad 2001:db8:0:1:f816:3eff:fe16:1bad 2001:db8::f816:3eff:fe16:1bad'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe16:1bad/64 2001:db8::f816:3eff:fe16:1bad/64', 'neutron:device_id': 'd749b709-79b8-40b6-8c2e-4d301bdc8e67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f6914f-16a1-4223-85f8-aa4fada62acd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6f2cc35-7f29-4a49-a136-053a456001be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28e6ccec-e6eb-4baf-91d9-14c9f27dcba7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=005f7c99-8e6b-4818-9749-1360f814f253) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.324 253665 INFO nova.compute.manager [None req-7cbae706-36c1-4784-8c3e-e8c5d839b890 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Get console output
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.334 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:43:19 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 22 09:43:19 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000082.scope: Consumed 16.344s CPU time.
Nov 22 09:43:19 compute-0 ceph-mon[75021]: pgmap v2442: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 973 KiB/s rd, 6.4 MiB/s wr, 192 op/s
Nov 22 09:43:19 compute-0 systemd-machined[215941]: Machine qemu-161-instance-00000082 terminated.
Nov 22 09:43:19 compute-0 NetworkManager[48920]: <info>  [1763804599.4355] manager: (tap005f7c99-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.455 253665 INFO nova.virt.libvirt.driver [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Instance destroyed successfully.
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.456 253665 DEBUG nova.objects.instance [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid d749b709-79b8-40b6-8c2e-4d301bdc8e67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [NOTICE]   (387690) : haproxy version is 2.8.14-c23fe91
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [NOTICE]   (387690) : path to executable is /usr/sbin/haproxy
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [WARNING]  (387690) : Exiting Master process...
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [WARNING]  (387690) : Exiting Master process...
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [ALERT]    (387690) : Current worker (387693) exited with code 143 (Terminated)
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef[387681]: [WARNING]  (387690) : All workers exited. Exiting... (0)
Nov 22 09:43:19 compute-0 systemd[1]: libpod-fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa.scope: Deactivated successfully.
Nov 22 09:43:19 compute-0 conmon[387681]: conmon fa57c80724e0ee1b105c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa.scope/container/memory.events
Nov 22 09:43:19 compute-0 podman[390264]: 2025-11-22 09:43:19.469135226 +0000 UTC m=+0.060712513 container died fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.471 253665 DEBUG nova.virt.libvirt.vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:16Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.472 253665 DEBUG nova.network.os_vif_util [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.473 253665 DEBUG nova.network.os_vif_util [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.473 253665 DEBUG os_vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.476 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap868d876c-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.485 253665 INFO os_vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:9d:36,bridge_name='br-int',has_traffic_filtering=True,id=868d876c-3d4f-4618-aedd-e1ce97d50ae9,network=Network(b7054ec2-03d5-4428-a2b8-9c9905d4fcef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap868d876c-3d')
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.486 253665 DEBUG nova.virt.libvirt.vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:41:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1425734754',display_name='tempest-TestGettingAddress-server-1425734754',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1425734754',id=130,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxIoQTeBbtypiJhlKPMHQXjGU7aCMg6wZkvjPYwPkKjk3LO5DhtauW1diizbUVSW+k/Hudkn9kQvgXnZUVb6thSj6gbhxVXaYzzZMjFsOHI4OCmbD5KtdP+u8b3PraWNQ==',key_name='tempest-TestGettingAddress-1888093361',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-dage9ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:16Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d749b709-79b8-40b6-8c2e-4d301bdc8e67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.486 253665 DEBUG nova.network.os_vif_util [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.487 253665 DEBUG nova.network.os_vif_util [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.488 253665 DEBUG os_vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.489 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap005f7c99-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.496 253665 INFO os_vif [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=005f7c99-8e6b-4818-9749-1360f814f253,network=Network(14f6914f-16a1-4223-85f8-aa4fada62acd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap005f7c99-8e')
Nov 22 09:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa-userdata-shm.mount: Deactivated successfully.
Nov 22 09:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0821a1de6c097e13970fe5dd6f5c72c376d47e57bdbbe281b1d977650b349e04-merged.mount: Deactivated successfully.
Nov 22 09:43:19 compute-0 podman[390264]: 2025-11-22 09:43:19.517859785 +0000 UTC m=+0.109437072 container cleanup fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:43:19 compute-0 systemd[1]: libpod-conmon-fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa.scope: Deactivated successfully.
Nov 22 09:43:19 compute-0 podman[390331]: 2025-11-22 09:43:19.589173866 +0000 UTC m=+0.044021825 container remove fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c099f496-b327-49f0-8580-c09d458660b0]: (4, ('Sat Nov 22 09:43:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef (fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa)\nfa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa\nSat Nov 22 09:43:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef (fa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa)\nfa57c80724e0ee1b105cbb5db7985177cb011b030dda96420ecf98ee03687baa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d78fa872-f22e-49dd-ac70-8aa9a70c0260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7054ec2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 kernel: tapb7054ec2-00: left promiscuous mode
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.614 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.615 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.623 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fed03256-2273-4376-a4b5-4e2f0886c2e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.638 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c0dcc0-ebd4-453e-84d9-ce683b887055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.640 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe94ffac-d531-4eb5-96e5-c91986f3ec52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.649 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.662 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e98bc1b3-a5ff-465b-83f8-f0bfea11ff84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738595, 'reachable_time': 27269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390348, 'error': None, 'target': 'ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 systemd[1]: run-netns-ovnmeta\x2db7054ec2\x2d03d5\x2d4428\x2da2b8\x2d9c9905d4fcef.mount: Deactivated successfully.
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.666 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7054ec2-03d5-4428-a2b8-9c9905d4fcef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.666 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e04446-d30a-4fcf-a0f6-15246f8c7340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.668 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 005f7c99-8e6b-4818-9749-1360f814f253 in datapath 14f6914f-16a1-4223-85f8-aa4fada62acd unbound from our chassis
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.670 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f6914f-16a1-4223-85f8-aa4fada62acd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd783c-91a5-49b6-a6bc-84280d258795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.672 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd namespace which is not needed anymore
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.725 253665 DEBUG nova.compute.manager [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-unplugged-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.726 253665 DEBUG oslo_concurrency.lockutils [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.726 253665 DEBUG oslo_concurrency.lockutils [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.726 253665 DEBUG oslo_concurrency.lockutils [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.726 253665 DEBUG nova.compute.manager [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-unplugged-005f7c99-8e6b-4818-9749-1360f814f253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.726 253665 DEBUG nova.compute.manager [req-09af5bd0-570a-4cd4-9c94-53781012f9b3 req-2f878d1e-ff90-46e8-851b-39370363a463 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-unplugged-005f7c99-8e6b-4818-9749-1360f814f253 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.746 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.747 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.753 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.753 253665 INFO nova.compute.claims [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [NOTICE]   (387761) : haproxy version is 2.8.14-c23fe91
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [NOTICE]   (387761) : path to executable is /usr/sbin/haproxy
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [WARNING]  (387761) : Exiting Master process...
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [WARNING]  (387761) : Exiting Master process...
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [ALERT]    (387761) : Current worker (387763) exited with code 143 (Terminated)
Nov 22 09:43:19 compute-0 neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd[387757]: [WARNING]  (387761) : All workers exited. Exiting... (0)
Nov 22 09:43:19 compute-0 systemd[1]: libpod-463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287.scope: Deactivated successfully.
Nov 22 09:43:19 compute-0 podman[390365]: 2025-11-22 09:43:19.807827743 +0000 UTC m=+0.049005127 container died 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287-userdata-shm.mount: Deactivated successfully.
Nov 22 09:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89f242686f0e2d9942688a514074e019e91c326057cca7e1af2a238c306df4c-merged.mount: Deactivated successfully.
Nov 22 09:43:19 compute-0 podman[390365]: 2025-11-22 09:43:19.861850962 +0000 UTC m=+0.103028356 container cleanup 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:43:19 compute-0 systemd[1]: libpod-conmon-463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287.scope: Deactivated successfully.
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.918 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:19 compute-0 podman[390396]: 2025-11-22 09:43:19.92566898 +0000 UTC m=+0.044445356 container remove 463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.932 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2b5005-779f-4fae-9478-65177e1ba5a0]: (4, ('Sat Nov 22 09:43:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd (463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287)\n463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287\nSat Nov 22 09:43:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd (463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287)\n463f60ac20c25210317a50a0f109ea914b87a7e915bec00e19d11bfd29a0a287\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.935 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1c611c55-080f-4943-b5de-eb1cb7e39abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.936 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f6914f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:19 compute-0 kernel: tap14f6914f-10: left promiscuous mode
Nov 22 09:43:19 compute-0 ovn_controller[152872]: 2025-11-22T09:43:19Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.957 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33d2c4a4-49ff-4116-909d-55978fbe0716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf85884-b324-4aeb-a1a1-ed7514840167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.978 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dda73859-7a80-41d0-93fe-3242de455ea8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.984 253665 INFO nova.virt.libvirt.driver [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Deleting instance files /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67_del
Nov 22 09:43:19 compute-0 nova_compute[253661]: 2025-11-22 09:43:19.985 253665 INFO nova.virt.libvirt.driver [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Deletion of /var/lib/nova/instances/d749b709-79b8-40b6-8c2e-4d301bdc8e67_del complete
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.995 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8bb0d-8472-48ee-b525-059aacf1e5a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738686, 'reachable_time': 44147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390412, 'error': None, 'target': 'ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.997 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14f6914f-16a1-4223-85f8-aa4fada62acd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:43:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:19.997 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a0eb4f1f-4684-45ff-96d0-ee410597c459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.038 253665 INFO nova.compute.manager [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.039 253665 DEBUG oslo.service.loopingcall [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.039 253665 DEBUG nova.compute.manager [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.040 253665 DEBUG nova.network.neutron [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:43:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 730 KiB/s rd, 5.4 MiB/s wr, 179 op/s
Nov 22 09:43:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634261876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.419 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.425 253665 DEBUG nova.compute.provider_tree [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.443 253665 DEBUG nova.scheduler.client.report [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.464 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.464 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:43:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d14f6914f\x2d16a1\x2d4223\x2d85f8\x2daa4fada62acd.mount: Deactivated successfully.
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.518 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.519 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.536 253665 INFO nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.555 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.644 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.646 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.647 253665 INFO nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Creating image(s)
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.674 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.697 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.721 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.725 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.762 253665 DEBUG nova.policy [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.795 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.797 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.797 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.798 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.824 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:20 compute-0 nova_compute[253661]: 2025-11-22 09:43:20.827 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.149 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.201 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.303 253665 DEBUG nova.objects.instance [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid 2f0b41ce-f4ed-4358-ac93-eb662789bea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.322 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.323 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Ensure instance console log exists: /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.323 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.324 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.324 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:21 compute-0 ovn_controller[152872]: 2025-11-22T09:43:21Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:43:21 compute-0 ceph-mon[75021]: pgmap v2443: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 730 KiB/s rd, 5.4 MiB/s wr, 179 op/s
Nov 22 09:43:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2634261876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.400 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-unplugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.401 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.401 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.401 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.401 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-unplugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.401 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-unplugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG oslo_concurrency.lockutils [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 WARNING nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received unexpected event network-vif-plugged-868d876c-3d4f-4618-aedd-e1ce97d50ae9 for instance with vm_state active and task_state deleting.
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.402 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-deleted-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.403 253665 INFO nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Neutron deleted interface 005f7c99-8e6b-4818-9749-1360f814f253; detaching it from the instance and deleting it from the info cache
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.403 253665 DEBUG nova.network.neutron [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.428 253665 DEBUG nova.compute.manager [req-047c8d3e-4bb2-4f8c-854a-f847d7197d39 req-9538e076-9ff9-4af6-87c8-cfe6c7fad0a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Detach interface failed, port_id=005f7c99-8e6b-4818-9749-1360f814f253, reason: Instance d749b709-79b8-40b6-8c2e-4d301bdc8e67 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.455 253665 DEBUG nova.network.neutron [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updated VIF entry in instance network info cache for port 868d876c-3d4f-4618-aedd-e1ce97d50ae9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.456 253665 DEBUG nova.network.neutron [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [{"id": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "address": "fa:16:3e:ec:9d:36", "network": {"id": "b7054ec2-03d5-4428-a2b8-9c9905d4fcef", "bridge": "br-int", "label": "tempest-network-smoke--751154275", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap868d876c-3d", "ovs_interfaceid": "868d876c-3d4f-4618-aedd-e1ce97d50ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "005f7c99-8e6b-4818-9749-1360f814f253", "address": "fa:16:3e:16:1b:ad", "network": {"id": "14f6914f-16a1-4223-85f8-aa4fada62acd", "bridge": "br-int", "label": "tempest-network-smoke--2003436418", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe16:1bad", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap005f7c99-8e", "ovs_interfaceid": "005f7c99-8e6b-4818-9749-1360f814f253", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.487 253665 DEBUG oslo_concurrency.lockutils [req-2da9a53d-955a-4398-90da-a186bcc3690c req-90654dec-94c0-4ff6-9e31-bc194be49b98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d749b709-79b8-40b6-8c2e-4d301bdc8e67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.493 253665 DEBUG nova.network.neutron [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.506 253665 INFO nova.compute.manager [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Took 1.47 seconds to deallocate network for instance.
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.544 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.545 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.646 253665 DEBUG oslo_concurrency.processutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.803 253665 DEBUG nova.compute.manager [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.804 253665 DEBUG oslo_concurrency.lockutils [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.804 253665 DEBUG oslo_concurrency.lockutils [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.804 253665 DEBUG oslo_concurrency.lockutils [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.804 253665 DEBUG nova.compute.manager [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] No waiting events found dispatching network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.804 253665 WARNING nova.compute.manager [req-63a1beb8-38f3-4b93-a812-b5bcdf89377a req-44d5d0bd-b10c-49f4-9e69-f23259e7ce4c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received unexpected event network-vif-plugged-005f7c99-8e6b-4818-9749-1360f814f253 for instance with vm_state deleted and task_state None.
Nov 22 09:43:21 compute-0 nova_compute[253661]: 2025-11-22 09:43:21.806 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Successfully created port: a52ac974-091c-475f-a3fd-ef86fab8b0d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:43:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887928054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 260 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 3.4 MiB/s wr, 150 op/s
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.115 253665 DEBUG oslo_concurrency.processutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.122 253665 DEBUG nova.compute.provider_tree [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.135 253665 DEBUG nova.scheduler.client.report [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.153 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.172 253665 INFO nova.scheduler.client.report [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance d749b709-79b8-40b6-8c2e-4d301bdc8e67
Nov 22 09:43:22 compute-0 nova_compute[253661]: 2025-11-22 09:43:22.226 253665 DEBUG oslo_concurrency.lockutils [None req-a91a743e-0553-41c9-81dc-84e9ebffab51 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d749b709-79b8-40b6-8c2e-4d301bdc8e67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2887928054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:23 compute-0 nova_compute[253661]: 2025-11-22 09:43:23.524 253665 DEBUG nova.compute.manager [req-e4c37dce-5a2f-4dcc-8010-0da1f35bab6f req-bda967a7-d492-4213-83ca-9bdb56ea8d02 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Received event network-vif-deleted-868d876c-3d4f-4618-aedd-e1ce97d50ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:23 compute-0 ceph-mon[75021]: pgmap v2444: 305 pgs: 305 active+clean; 260 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 3.4 MiB/s wr, 150 op/s
Nov 22 09:43:23 compute-0 nova_compute[253661]: 2025-11-22 09:43:23.803 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Successfully updated port: a52ac974-091c-475f-a3fd-ef86fab8b0d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:43:23 compute-0 nova_compute[253661]: 2025-11-22 09:43:23.838 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:23 compute-0 nova_compute[253661]: 2025-11-22 09:43:23.838 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:23 compute-0 nova_compute[253661]: 2025-11-22 09:43:23.838 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:43:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.3 MiB/s wr, 144 op/s
Nov 22 09:43:24 compute-0 nova_compute[253661]: 2025-11-22 09:43:24.492 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:24 compute-0 nova_compute[253661]: 2025-11-22 09:43:24.825 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:43:25 compute-0 ceph-mon[75021]: pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 3.3 MiB/s wr, 144 op/s
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.627 253665 DEBUG nova.compute.manager [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.628 253665 DEBUG nova.compute.manager [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing instance network info cache due to event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.628 253665 DEBUG oslo_concurrency.lockutils [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:25 compute-0 ovn_controller[152872]: 2025-11-22T09:43:25Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:1d:c2 10.100.0.12
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.893 253665 DEBUG nova.compute.manager [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.894 253665 DEBUG nova.compute.manager [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing instance network info cache due to event network-changed-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.894 253665 DEBUG oslo_concurrency.lockutils [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.894 253665 DEBUG oslo_concurrency.lockutils [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:25 compute-0 nova_compute[253661]: 2025-11-22 09:43:25.895 253665 DEBUG nova.network.neutron [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Refreshing network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.030 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.031 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.031 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.031 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.031 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.032 253665 INFO nova.compute.manager [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Terminating instance
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.033 253665 DEBUG nova.compute.manager [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:43:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 2.0 MiB/s wr, 109 op/s
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.131 253665 DEBUG nova.network.neutron [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updating instance_info_cache with network_info: [{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:26 compute-0 kernel: tap0a0c4a1b-54 (unregistering): left promiscuous mode
Nov 22 09:43:26 compute-0 NetworkManager[48920]: <info>  [1763804606.1375] device (tap0a0c4a1b-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:26 compute-0 ovn_controller[152872]: 2025-11-22T09:43:26Z|01445|binding|INFO|Releasing lport 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e from this chassis (sb_readonly=0)
Nov 22 09:43:26 compute-0 ovn_controller[152872]: 2025-11-22T09:43:26Z|01446|binding|INFO|Setting lport 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e down in Southbound
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 ovn_controller[152872]: 2025-11-22T09:43:26Z|01447|binding|INFO|Removing iface tap0a0c4a1b-54 ovn-installed in OVS
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 22 09:43:26 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000085.scope: Consumed 14.384s CPU time.
Nov 22 09:43:26 compute-0 systemd-machined[215941]: Machine qemu-164-instance-00000085 terminated.
Nov 22 09:43:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:26.260 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:1d:c2 10.100.0.12'], port_security=['fa:16:3e:43:1d:c2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '61d505d6-e331-4728-9e1b-ffbb4cc5ea50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea98697b-6736-4594-9eb4-6b1b64d10480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '962f3c58-c766-4f2e-8de6-38606feaf786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37a839c5-b437-4f74-8352-23c47a310399, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.261 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Instance network_info: |[{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:43:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:26.262 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e in datapath ea98697b-6736-4594-9eb4-6b1b64d10480 unbound from our chassis
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.262 253665 DEBUG oslo_concurrency.lockutils [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.262 253665 DEBUG nova.network.neutron [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:26.263 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea98697b-6736-4594-9eb4-6b1b64d10480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.264 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Start _get_guest_xml network_info=[{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:43:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:26.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[514c4404-6255-45dd-92af-7cd599cc1456]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:26.265 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480 namespace which is not needed anymore
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.269 253665 WARNING nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.274 253665 INFO nova.virt.libvirt.driver [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Instance destroyed successfully.
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.274 253665 DEBUG nova.objects.instance [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.280 253665 DEBUG nova.virt.libvirt.host [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.282 253665 DEBUG nova.virt.libvirt.host [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.285 253665 DEBUG nova.virt.libvirt.vif [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:42:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-137504459',display_name='tempest-TestNetworkBasicOps-server-137504459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-137504459',id=133,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0sSnVHjb+SQP1ADA0KsoySjkYY+IUgWL14mgtXY5I/aEMvXXVftTgtxL3iE8eHbToFcNOq74CZwmpR/oAhD4Mbp1BIO3vvtKF+P4z2mX1GqFns7/nhAJwsX+/x8Yg04A==',key_name='tempest-TestNetworkBasicOps-2036456400',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-r2vzehoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:59Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=61d505d6-e331-4728-9e1b-ffbb4cc5ea50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.286 253665 DEBUG nova.network.os_vif_util [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.286 253665 DEBUG nova.network.os_vif_util [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.287 253665 DEBUG os_vif [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.288 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.288 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a0c4a1b-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.290 253665 DEBUG nova.virt.libvirt.host [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.290 253665 DEBUG nova.virt.libvirt.host [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.290 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.291 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.291 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.291 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.291 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.292 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.293 253665 DEBUG nova.virt.hardware [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.295 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.335 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.340 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.343 253665 INFO os_vif [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:1d:c2,bridge_name='br-int',has_traffic_filtering=True,id=0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e,network=Network(ea98697b-6736-4594-9eb4-6b1b64d10480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a0c4a1b-54')
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [NOTICE]   (390112) : haproxy version is 2.8.14-c23fe91
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [NOTICE]   (390112) : path to executable is /usr/sbin/haproxy
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [WARNING]  (390112) : Exiting Master process...
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [WARNING]  (390112) : Exiting Master process...
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [ALERT]    (390112) : Current worker (390114) exited with code 143 (Terminated)
Nov 22 09:43:26 compute-0 neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480[390108]: [WARNING]  (390112) : All workers exited. Exiting... (0)
Nov 22 09:43:26 compute-0 systemd[1]: libpod-01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e.scope: Deactivated successfully.
Nov 22 09:43:26 compute-0 podman[390660]: 2025-11-22 09:43:26.536673524 +0000 UTC m=+0.174443649 container died 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:43:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:43:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887120841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.740 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.768 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:26 compute-0 nova_compute[253661]: 2025-11-22 09:43:26.773 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-04be2e58273d601a9518c176197d5ec84c157d1b7d1c4d4e9517b069d115691d-merged.mount: Deactivated successfully.
Nov 22 09:43:26 compute-0 podman[390660]: 2025-11-22 09:43:26.828215413 +0000 UTC m=+0.465985538 container cleanup 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:43:27 compute-0 podman[390746]: 2025-11-22 09:43:27.054498781 +0000 UTC m=+0.188361443 container remove 01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.062 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7185b5-ec28-466d-88d8-25eb0ab4dc65]: (4, ('Sat Nov 22 09:43:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480 (01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e)\n01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e\nSat Nov 22 09:43:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480 (01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e)\n01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.066 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee9e0a5-d548-4ab4-a14a-7aeabdeac625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.067 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea98697b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:27 compute-0 systemd[1]: libpod-conmon-01cd72a0ffcef475290a10110641ea050e5d70535c7df71568b5cc6341c3d57e.scope: Deactivated successfully.
Nov 22 09:43:27 compute-0 kernel: tapea98697b-60: left promiscuous mode
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.069 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.094 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[477ba0a7-4806-402f-99c8-2dbb0adc51ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9318d44-1f51-4f9c-b40a-9969cab54aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.111 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2bcc93-10e9-465a-809a-1d591c3b3c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.135 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3ac323-4ed7-4862-a05e-d4c3f1bd8023]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742980, 'reachable_time': 23630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390806, 'error': None, 'target': 'ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.139 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea98697b-6736-4594-9eb4-6b1b64d10480 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.139 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[16b831bc-9fbf-47c0-a927-afc46ab628a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dea98697b\x2d6736\x2d4594\x2d9eb4\x2d6b1b64d10480.mount: Deactivated successfully.
Nov 22 09:43:27 compute-0 sudo[390781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:27 compute-0 sudo[390781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:27 compute-0 sudo[390781]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:27 compute-0 sudo[390809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:43:27 compute-0 sudo[390809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:27 compute-0 sudo[390809]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531747823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.296 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.300 253665 DEBUG nova.virt.libvirt.vif [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=134,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-5ktam00t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:43:20Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=2f0b41ce-f4ed-4358-ac93-eb662789bea1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.300 253665 DEBUG nova.network.os_vif_util [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.301 253665 DEBUG nova.network.os_vif_util [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.303 253665 DEBUG nova.objects.instance [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f0b41ce-f4ed-4358-ac93-eb662789bea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:27 compute-0 sudo[390834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:27 compute-0 sudo[390834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:27 compute-0 sudo[390834]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.321 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <uuid>2f0b41ce-f4ed-4358-ac93-eb662789bea1</uuid>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <name>instance-00000086</name>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885</nova:name>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:43:26</nova:creationTime>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <nova:port uuid="a52ac974-091c-475f-a3fd-ef86fab8b0d7">
Nov 22 09:43:27 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <system>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="serial">2f0b41ce-f4ed-4358-ac93-eb662789bea1</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="uuid">2f0b41ce-f4ed-4358-ac93-eb662789bea1</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </system>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <os>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </os>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <features>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </features>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk">
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config">
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:43:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:8c:8c:a9"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <target dev="tapa52ac974-09"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/console.log" append="off"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <video>
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </video>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:43:27 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:43:27 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:43:27 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:43:27 compute-0 nova_compute[253661]: </domain>
Nov 22 09:43:27 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.323 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Preparing to wait for external event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.323 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.324 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.324 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.325 253665 DEBUG nova.virt.libvirt.vif [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=134,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-5ktam00t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:43:20Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=2f0b41ce-f4ed-4358-ac93-eb662789bea1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.326 253665 DEBUG nova.network.os_vif_util [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.326 253665 DEBUG nova.network.os_vif_util [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.327 253665 DEBUG os_vif [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.328 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.329 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa52ac974-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa52ac974-09, col_values=(('external_ids', {'iface-id': 'a52ac974-091c-475f-a3fd-ef86fab8b0d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:8c:a9', 'vm-uuid': '2f0b41ce-f4ed-4358-ac93-eb662789bea1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.335 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 NetworkManager[48920]: <info>  [1763804607.3366] manager: (tapa52ac974-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.338 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.344 253665 INFO os_vif [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09')
Nov 22 09:43:27 compute-0 sudo[390862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:43:27 compute-0 sudo[390862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.408 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.408 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.408 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:8c:8c:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.409 253665 INFO nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Using config drive
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.429 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.477 253665 INFO nova.virt.libvirt.driver [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Deleting instance files /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50_del
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.479 253665 INFO nova.virt.libvirt.driver [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Deletion of /var/lib/nova/instances/61d505d6-e331-4728-9e1b-ffbb4cc5ea50_del complete
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.598 253665 INFO nova.compute.manager [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Took 1.56 seconds to destroy the instance on the hypervisor.
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.599 253665 DEBUG oslo.service.loopingcall [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.599 253665 DEBUG nova.compute.manager [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:43:27 compute-0 nova_compute[253661]: 2025-11-22 09:43:27.599 253665 DEBUG nova.network.neutron [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:43:27 compute-0 ceph-mon[75021]: pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 2.0 MiB/s wr, 109 op/s
Nov 22 09:43:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/887120841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:43:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1531747823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:43:27 compute-0 sudo[390862]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1400cdaf-a0ae-4d27-91a0-bc93c5a7e49a does not exist
Nov 22 09:43:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fd807c93-2840-44a8-af0d-1dd191a2744a does not exist
Nov 22 09:43:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 35529d21-83c4-4882-8128-46a170ff25b9 does not exist
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:43:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:43:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.987 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:28 compute-0 sudo[390939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:28 compute-0 sudo[390939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:28 compute-0 sudo[390939]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:28 compute-0 sudo[390964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:43:28 compute-0 sudo[390964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:28 compute-0 sudo[390964]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 246 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 2.0 MiB/s wr, 110 op/s
Nov 22 09:43:28 compute-0 sudo[390989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:28 compute-0 sudo[390989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:28 compute-0 sudo[390989]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:28 compute-0 sudo[391014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:43:28 compute-0 sudo[391014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.369 253665 DEBUG nova.network.neutron [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updated VIF entry in instance network info cache for port 0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.369 253665 DEBUG nova.network.neutron [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updating instance_info_cache with network_info: [{"id": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "address": "fa:16:3e:43:1d:c2", "network": {"id": "ea98697b-6736-4594-9eb4-6b1b64d10480", "bridge": "br-int", "label": "tempest-network-smoke--1993312692", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a0c4a1b-54", "ovs_interfaceid": "0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.430 253665 DEBUG oslo_concurrency.lockutils [req-181ddb91-de9c-459d-9788-4a12d083d9d3 req-2d79d2f7-09da-465c-909e-00bf61c4c7c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61d505d6-e331-4728-9e1b-ffbb4cc5ea50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.557216547 +0000 UTC m=+0.044661584 container create a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:43:28 compute-0 systemd[1]: Started libpod-conmon-a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d.scope.
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.536927096 +0000 UTC m=+0.024372153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.66953731 +0000 UTC m=+0.156982347 container init a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.677028046 +0000 UTC m=+0.164473083 container start a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.680172203 +0000 UTC m=+0.167617240 container attach a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:43:28 compute-0 lucid_kalam[391097]: 167 167
Nov 22 09:43:28 compute-0 systemd[1]: libpod-a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d.scope: Deactivated successfully.
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.683188428 +0000 UTC m=+0.170633465 container died a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:43:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.705 253665 INFO nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Creating config drive at /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config
Nov 22 09:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd508aad1d220cb72cabe875fac6be784aa663588f8816d5dce6c80212eeb5d-merged.mount: Deactivated successfully.
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.714 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbp0acl0n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:28 compute-0 podman[391081]: 2025-11-22 09:43:28.725779699 +0000 UTC m=+0.213224736 container remove a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:43:28 compute-0 systemd[1]: libpod-conmon-a52c8c977bd4826c0c8eab5d2349f017b66de607f202d09a3d445e0406396e3d.scope: Deactivated successfully.
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.863 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbp0acl0n" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.887 253665 DEBUG nova.storage.rbd_utils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:28 compute-0 nova_compute[253661]: 2025-11-22 09:43:28.890 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:28 compute-0 podman[391123]: 2025-11-22 09:43:28.898406412 +0000 UTC m=+0.044997232 container create edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:43:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:28 compute-0 systemd[1]: Started libpod-conmon-edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2.scope.
Nov 22 09:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:28 compute-0 podman[391123]: 2025-11-22 09:43:28.881215697 +0000 UTC m=+0.027806537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:28 compute-0 podman[391123]: 2025-11-22 09:43:28.991856539 +0000 UTC m=+0.138447379 container init edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:43:29 compute-0 podman[391123]: 2025-11-22 09:43:29.000879042 +0000 UTC m=+0.147469862 container start edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:43:29 compute-0 podman[391123]: 2025-11-22 09:43:29.004401299 +0000 UTC m=+0.150992119 container attach edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.076 253665 DEBUG oslo_concurrency.processutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config 2f0b41ce-f4ed-4358-ac93-eb662789bea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.077 253665 INFO nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Deleting local config drive /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1/disk.config because it was imported into RBD.
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.101 253665 DEBUG nova.network.neutron [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.119 253665 INFO nova.compute.manager [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Took 1.52 seconds to deallocate network for instance.
Nov 22 09:43:29 compute-0 kernel: tapa52ac974-09: entered promiscuous mode
Nov 22 09:43:29 compute-0 NetworkManager[48920]: <info>  [1763804609.1374] manager: (tapa52ac974-09): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Nov 22 09:43:29 compute-0 systemd-udevd[390624]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.137 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:29 compute-0 ovn_controller[152872]: 2025-11-22T09:43:29Z|01448|binding|INFO|Claiming lport a52ac974-091c-475f-a3fd-ef86fab8b0d7 for this chassis.
Nov 22 09:43:29 compute-0 ovn_controller[152872]: 2025-11-22T09:43:29Z|01449|binding|INFO|a52ac974-091c-475f-a3fd-ef86fab8b0d7: Claiming fa:16:3e:8c:8c:a9 10.100.0.4
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.147 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8c:a9 10.100.0.4'], port_security=['fa:16:3e:8c:8c:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f0b41ce-f4ed-4358-ac93-eb662789bea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec2b4fe9-a0a4-4ce7-b67b-b101ede1b3af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8d1ab5-5912-4930-8d33-fd5a1b07be2a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a52ac974-091c-475f-a3fd-ef86fab8b0d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:29 compute-0 NetworkManager[48920]: <info>  [1763804609.1513] device (tapa52ac974-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:43:29 compute-0 NetworkManager[48920]: <info>  [1763804609.1529] device (tapa52ac974-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.150 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a52ac974-091c-475f-a3fd-ef86fab8b0d7 in datapath d0ae8a74-13a0-46e6-ad55-3404cb0e971f bound to our chassis
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.155 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0ae8a74-13a0-46e6-ad55-3404cb0e971f
Nov 22 09:43:29 compute-0 ovn_controller[152872]: 2025-11-22T09:43:29Z|01450|binding|INFO|Setting lport a52ac974-091c-475f-a3fd-ef86fab8b0d7 ovn-installed in OVS
Nov 22 09:43:29 compute-0 ovn_controller[152872]: 2025-11-22T09:43:29Z|01451|binding|INFO|Setting lport a52ac974-091c-475f-a3fd-ef86fab8b0d7 up in Southbound
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.172 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.173 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.190 253665 DEBUG nova.network.neutron [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updated VIF entry in instance network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.191 253665 DEBUG nova.network.neutron [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updating instance_info_cache with network_info: [{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c86ea51-1553-4511-ade3-70dd9d0f576c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.204 253665 DEBUG oslo_concurrency.lockutils [req-4c45a29f-ec3b-4e61-bcb8-63295e0c906f req-cd7231f4-aa79-42e5-9e76-1c8e297deb32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:29 compute-0 systemd-machined[215941]: New machine qemu-165-instance-00000086.
Nov 22 09:43:29 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000086.
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.246 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[68663e78-1ed8-493a-ac8f-a89005db21ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.249 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[edb080be-4642-496f-a083-c8c1534e09ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 podman[391193]: 2025-11-22 09:43:29.258942034 +0000 UTC m=+0.079217616 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.265 253665 DEBUG nova.compute.manager [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-vif-unplugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.265 253665 DEBUG oslo_concurrency.lockutils [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.265 253665 DEBUG oslo_concurrency.lockutils [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.265 253665 DEBUG oslo_concurrency.lockutils [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.265 253665 DEBUG nova.compute.manager [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] No waiting events found dispatching network-vif-unplugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.266 253665 WARNING nova.compute.manager [req-7f5f3cce-c86a-4966-ac52-181a402ab81b req-827a06f3-4cae-494b-8ed1-9d053fe1483a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received unexpected event network-vif-unplugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e for instance with vm_state deleted and task_state None.
Nov 22 09:43:29 compute-0 podman[391194]: 2025-11-22 09:43:29.266280095 +0000 UTC m=+0.086410884 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.278 253665 DEBUG nova.compute.manager [req-6a6fd0cc-0aa9-4949-a268-75f355368d7c req-d30f164d-27ac-4e67-8e71-104f4cb59388 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-vif-deleted-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.280 253665 DEBUG oslo_concurrency.processutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.281 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1133d3-b21b-49ca-83c4-6c688457b73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0196b4bf-997f-4289-a144-ff0702d69255]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ae8a74-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d3:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742577, 'reachable_time': 43515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391247, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.318 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91c00208-196a-4d4f-912f-0b8646679104]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0ae8a74-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742592, 'tstamp': 742592}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391250, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0ae8a74-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742595, 'tstamp': 742595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391250, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.320 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ae8a74-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.323 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0ae8a74-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.323 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.323 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0ae8a74-10, col_values=(('external_ids', {'iface-id': 'a6ff8b0d-d874-4d01-a908-4435ceea7174'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:29.324 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.606 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804609.6059196, 2f0b41ce-f4ed-4358-ac93-eb662789bea1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.607 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] VM Started (Lifecycle Event)
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.630 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.634 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804609.6065116, 2f0b41ce-f4ed-4358-ac93-eb662789bea1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.634 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] VM Paused (Lifecycle Event)
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.654 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.677 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:43:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636308237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.745 253665 DEBUG oslo_concurrency.processutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.750 253665 DEBUG nova.compute.provider_tree [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.762 253665 DEBUG nova.scheduler.client.report [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.779 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.803 253665 INFO nova.scheduler.client.report [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 61d505d6-e331-4728-9e1b-ffbb4cc5ea50
Nov 22 09:43:29 compute-0 nova_compute[253661]: 2025-11-22 09:43:29.859 253665 DEBUG oslo_concurrency.lockutils [None req-ba41b6f1-0f91-4c8c-8dde-bd1b679ec013 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:29 compute-0 ceph-mon[75021]: pgmap v2447: 305 pgs: 305 active+clean; 246 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 2.0 MiB/s wr, 110 op/s
Nov 22 09:43:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3636308237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:30 compute-0 cool_wright[391159]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:43:30 compute-0 cool_wright[391159]: --> relative data size: 1.0
Nov 22 09:43:30 compute-0 cool_wright[391159]: --> All data devices are unavailable
Nov 22 09:43:30 compute-0 systemd[1]: libpod-edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2.scope: Deactivated successfully.
Nov 22 09:43:30 compute-0 systemd[1]: libpod-edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2.scope: Consumed 1.017s CPU time.
Nov 22 09:43:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 196 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 22 09:43:30 compute-0 podman[391123]: 2025-11-22 09:43:30.110191244 +0000 UTC m=+1.256782074 container died edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0377f043a88f14792ff6c9271651c6a22e27b4c2d8694910122861e56c2aa328-merged.mount: Deactivated successfully.
Nov 22 09:43:30 compute-0 podman[391123]: 2025-11-22 09:43:30.166441753 +0000 UTC m=+1.313032573 container remove edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 09:43:30 compute-0 systemd[1]: libpod-conmon-edc684bd2be8f0bdf7d221ccc43a18357a8c12b516e98a43f659fefdfd0e7ba2.scope: Deactivated successfully.
Nov 22 09:43:30 compute-0 sudo[391014]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:30 compute-0 sudo[391348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:30 compute-0 sudo[391348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:30 compute-0 sudo[391348]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:30 compute-0 sudo[391373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:43:30 compute-0 sudo[391373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:30 compute-0 sudo[391373]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:30 compute-0 nova_compute[253661]: 2025-11-22 09:43:30.315 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804595.3140588, 4b40bebc-2343-478c-aacb-b4ae1fc87907 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:30 compute-0 nova_compute[253661]: 2025-11-22 09:43:30.316 253665 INFO nova.compute.manager [-] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] VM Stopped (Lifecycle Event)
Nov 22 09:43:30 compute-0 nova_compute[253661]: 2025-11-22 09:43:30.337 253665 DEBUG nova.compute.manager [None req-07aaee35-637b-4529-b337-7ffd19e35352 - - - - - -] [instance: 4b40bebc-2343-478c-aacb-b4ae1fc87907] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:30 compute-0 sudo[391398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:30 compute-0 sudo[391398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:30 compute-0 sudo[391398]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:30 compute-0 sudo[391423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:43:30 compute-0 sudo[391423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.738730065 +0000 UTC m=+0.043214338 container create 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:43:30 compute-0 systemd[1]: Started libpod-conmon-095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040.scope.
Nov 22 09:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.719014798 +0000 UTC m=+0.023499091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.822157705 +0000 UTC m=+0.126641998 container init 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.830491911 +0000 UTC m=+0.134976184 container start 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.833740031 +0000 UTC m=+0.138224324 container attach 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:43:30 compute-0 upbeat_tesla[391502]: 167 167
Nov 22 09:43:30 compute-0 systemd[1]: libpod-095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040.scope: Deactivated successfully.
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.837507924 +0000 UTC m=+0.141992197 container died 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 09:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0484e0b95e8351cde2a2f11fca0a6e654141eadf8e174892d7dbac5a88b33247-merged.mount: Deactivated successfully.
Nov 22 09:43:30 compute-0 podman[391486]: 2025-11-22 09:43:30.880201878 +0000 UTC m=+0.184686151 container remove 095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:43:30 compute-0 systemd[1]: libpod-conmon-095f2ed7a5a19dfbf35a83399b2008ee8ef89ca7878b470000403bc7ab1e6040.scope: Deactivated successfully.
Nov 22 09:43:31 compute-0 podman[391527]: 2025-11-22 09:43:31.065776681 +0000 UTC m=+0.043203978 container create 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:43:31 compute-0 systemd[1]: Started libpod-conmon-7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d.scope.
Nov 22 09:43:31 compute-0 podman[391527]: 2025-11-22 09:43:31.046939595 +0000 UTC m=+0.024366912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43662c636a2064b71759d6c8a216fce8cc8efa4ea634dcfb496cd9b77e7f0af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43662c636a2064b71759d6c8a216fce8cc8efa4ea634dcfb496cd9b77e7f0af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43662c636a2064b71759d6c8a216fce8cc8efa4ea634dcfb496cd9b77e7f0af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43662c636a2064b71759d6c8a216fce8cc8efa4ea634dcfb496cd9b77e7f0af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:31 compute-0 podman[391527]: 2025-11-22 09:43:31.164498658 +0000 UTC m=+0.141925975 container init 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:43:31 compute-0 podman[391527]: 2025-11-22 09:43:31.171970643 +0000 UTC m=+0.149397930 container start 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:43:31 compute-0 podman[391527]: 2025-11-22 09:43:31.17589966 +0000 UTC m=+0.153326947 container attach 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.373 253665 DEBUG nova.compute.manager [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.375 253665 DEBUG oslo_concurrency.lockutils [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.375 253665 DEBUG oslo_concurrency.lockutils [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.375 253665 DEBUG oslo_concurrency.lockutils [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61d505d6-e331-4728-9e1b-ffbb4cc5ea50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.376 253665 DEBUG nova.compute.manager [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] No waiting events found dispatching network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.376 253665 WARNING nova.compute.manager [req-50fce05a-9413-4c00-8dd6-109356982fd1 req-8c7c37a9-55ff-4dcb-98a5-858df096096e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Received unexpected event network-vif-plugged-0a0c4a1b-54ce-4bc5-ad3e-fbf88ffebc0e for instance with vm_state deleted and task_state None.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.379 253665 DEBUG nova.compute.manager [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.379 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.379 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.380 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.380 253665 DEBUG nova.compute.manager [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Processing event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.380 253665 DEBUG nova.compute.manager [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.380 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.381 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.381 253665 DEBUG oslo_concurrency.lockutils [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.381 253665 DEBUG nova.compute.manager [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] No waiting events found dispatching network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.381 253665 WARNING nova.compute.manager [req-754ab646-12df-47bb-b4bf-9fe6f5806c8c req-cc54feed-69f4-4560-b626-c2d6ca0932e9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received unexpected event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 for instance with vm_state building and task_state spawning.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.382 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.388 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804611.3879154, 2f0b41ce-f4ed-4358-ac93-eb662789bea1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.389 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] VM Resumed (Lifecycle Event)
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.392 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.398 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Instance spawned successfully.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.399 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.412 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.420 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.426 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.426 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.427 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.427 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.428 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.428 253665 DEBUG nova.virt.libvirt.driver [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.454 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.490 253665 INFO nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Took 10.85 seconds to spawn the instance on the hypervisor.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.491 253665 DEBUG nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.552 253665 INFO nova.compute.manager [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Took 11.83 seconds to build instance.
Nov 22 09:43:31 compute-0 nova_compute[253661]: 2025-11-22 09:43:31.570 253665 DEBUG oslo_concurrency.lockutils [None req-a7500914-de9c-4f6d-97b3-5d29ddde5c12 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:31 compute-0 ceph-mon[75021]: pgmap v2448: 305 pgs: 305 active+clean; 196 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]: {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     "0": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "devices": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "/dev/loop3"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             ],
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_name": "ceph_lv0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_size": "21470642176",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "name": "ceph_lv0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "tags": {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_name": "ceph",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.crush_device_class": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.encrypted": "0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_id": "0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.vdo": "0"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             },
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "vg_name": "ceph_vg0"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         }
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     ],
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     "1": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "devices": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "/dev/loop4"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             ],
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_name": "ceph_lv1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_size": "21470642176",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "name": "ceph_lv1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "tags": {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_name": "ceph",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.crush_device_class": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.encrypted": "0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_id": "1",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.vdo": "0"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             },
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "vg_name": "ceph_vg1"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         }
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     ],
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     "2": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "devices": [
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "/dev/loop5"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             ],
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_name": "ceph_lv2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_size": "21470642176",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "name": "ceph_lv2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "tags": {
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.cluster_name": "ceph",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.crush_device_class": "",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.encrypted": "0",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osd_id": "2",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:                 "ceph.vdo": "0"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             },
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "type": "block",
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:             "vg_name": "ceph_vg2"
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:         }
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]:     ]
Nov 22 09:43:32 compute-0 gracious_bardeen[391544]: }
Nov 22 09:43:32 compute-0 systemd[1]: libpod-7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d.scope: Deactivated successfully.
Nov 22 09:43:32 compute-0 podman[391527]: 2025-11-22 09:43:32.051298826 +0000 UTC m=+1.028726113 container died 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 22 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43662c636a2064b71759d6c8a216fce8cc8efa4ea634dcfb496cd9b77e7f0af-merged.mount: Deactivated successfully.
Nov 22 09:43:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 22 09:43:32 compute-0 podman[391527]: 2025-11-22 09:43:32.120458614 +0000 UTC m=+1.097885901 container remove 7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:43:32 compute-0 systemd[1]: libpod-conmon-7ee71d4c17a95aac05c7e3fc4ba1960bb875a30511e2bc55742d7c8dd9d7a08d.scope: Deactivated successfully.
Nov 22 09:43:32 compute-0 sudo[391423]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:32 compute-0 sudo[391567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:32 compute-0 sudo[391567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:32 compute-0 sudo[391567]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:32 compute-0 sudo[391592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:43:32 compute-0 sudo[391592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:32 compute-0 sudo[391592]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:32 compute-0 nova_compute[253661]: 2025-11-22 09:43:32.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:32 compute-0 sudo[391617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:32 compute-0 sudo[391617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:32 compute-0 sudo[391617]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:32 compute-0 sudo[391642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:43:32 compute-0 sudo[391642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.84034687 +0000 UTC m=+0.044126010 container create fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:43:32 compute-0 systemd[1]: Started libpod-conmon-fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177.scope.
Nov 22 09:43:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.822825618 +0000 UTC m=+0.026604788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.924577529 +0000 UTC m=+0.128356689 container init fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.93432562 +0000 UTC m=+0.138104760 container start fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.939061338 +0000 UTC m=+0.142840498 container attach fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:43:32 compute-0 cranky_hopper[391721]: 167 167
Nov 22 09:43:32 compute-0 systemd[1]: libpod-fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177.scope: Deactivated successfully.
Nov 22 09:43:32 compute-0 conmon[391721]: conmon fe676c9964548000c9d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177.scope/container/memory.events
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.941798015 +0000 UTC m=+0.145577155 container died fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e097f0f432879e67c9600f7046d1197e992c1c5a8e162e7c89cf706b9697771-merged.mount: Deactivated successfully.
Nov 22 09:43:32 compute-0 podman[391705]: 2025-11-22 09:43:32.979950007 +0000 UTC m=+0.183729147 container remove fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:43:32 compute-0 systemd[1]: libpod-conmon-fe676c9964548000c9d6ae6260d20b7675c6359d6d58c26b32b52570220b0177.scope: Deactivated successfully.
Nov 22 09:43:33 compute-0 podman[391745]: 2025-11-22 09:43:33.187699997 +0000 UTC m=+0.054032855 container create 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:43:33 compute-0 systemd[1]: Started libpod-conmon-27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb.scope.
Nov 22 09:43:33 compute-0 podman[391745]: 2025-11-22 09:43:33.158854335 +0000 UTC m=+0.025187223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:43:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff7e73d42ca0b8aeee1f53692fe7f04a01a9d26ef67b169e259fb7d670b923b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff7e73d42ca0b8aeee1f53692fe7f04a01a9d26ef67b169e259fb7d670b923b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff7e73d42ca0b8aeee1f53692fe7f04a01a9d26ef67b169e259fb7d670b923b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff7e73d42ca0b8aeee1f53692fe7f04a01a9d26ef67b169e259fb7d670b923b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:43:33 compute-0 podman[391745]: 2025-11-22 09:43:33.288746902 +0000 UTC m=+0.155079780 container init 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 09:43:33 compute-0 podman[391745]: 2025-11-22 09:43:33.295809276 +0000 UTC m=+0.162142134 container start 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:43:33 compute-0 podman[391745]: 2025-11-22 09:43:33.29918174 +0000 UTC m=+0.165514618 container attach 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:43:33 compute-0 nova_compute[253661]: 2025-11-22 09:43:33.873 253665 DEBUG nova.compute.manager [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:33 compute-0 nova_compute[253661]: 2025-11-22 09:43:33.876 253665 DEBUG nova.compute.manager [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing instance network info cache due to event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:33 compute-0 nova_compute[253661]: 2025-11-22 09:43:33.876 253665 DEBUG oslo_concurrency.lockutils [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:33 compute-0 nova_compute[253661]: 2025-11-22 09:43:33.876 253665 DEBUG oslo_concurrency.lockutils [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:33 compute-0 nova_compute[253661]: 2025-11-22 09:43:33.877 253665 DEBUG nova.network.neutron [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:33 compute-0 ceph-mon[75021]: pgmap v2449: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 22 09:43:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 121 op/s
Nov 22 09:43:34 compute-0 ovn_controller[152872]: 2025-11-22T09:43:34Z|01452|binding|INFO|Releasing lport a6ff8b0d-d874-4d01-a908-4435ceea7174 from this chassis (sb_readonly=0)
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]: {
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_id": 1,
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "type": "bluestore"
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     },
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_id": 0,
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "type": "bluestore"
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     },
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_id": 2,
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:         "type": "bluestore"
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]:     }
Nov 22 09:43:34 compute-0 silly_grothendieck[391762]: }
Nov 22 09:43:34 compute-0 systemd[1]: libpod-27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb.scope: Deactivated successfully.
Nov 22 09:43:34 compute-0 podman[391745]: 2025-11-22 09:43:34.294820535 +0000 UTC m=+1.161153413 container died 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ff7e73d42ca0b8aeee1f53692fe7f04a01a9d26ef67b169e259fb7d670b923b-merged.mount: Deactivated successfully.
Nov 22 09:43:34 compute-0 nova_compute[253661]: 2025-11-22 09:43:34.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:34 compute-0 podman[391745]: 2025-11-22 09:43:34.355209536 +0000 UTC m=+1.221542394 container remove 27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:43:34 compute-0 systemd[1]: libpod-conmon-27ce23eb0d670a3268401b943e71eac622b56809f603beca556e9a8ff53642bb.scope: Deactivated successfully.
Nov 22 09:43:34 compute-0 sudo[391642]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:43:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:43:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dcaac2ab-42f6-499b-901d-78c48a7a4fe7 does not exist
Nov 22 09:43:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0c0c19c1-f6d7-4173-8105-e6074690ef30 does not exist
Nov 22 09:43:34 compute-0 nova_compute[253661]: 2025-11-22 09:43:34.452 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804599.4505687, d749b709-79b8-40b6-8c2e-4d301bdc8e67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:34 compute-0 nova_compute[253661]: 2025-11-22 09:43:34.452 253665 INFO nova.compute.manager [-] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] VM Stopped (Lifecycle Event)
Nov 22 09:43:34 compute-0 nova_compute[253661]: 2025-11-22 09:43:34.477 253665 DEBUG nova.compute.manager [None req-a5a52ac6-b603-4b30-b383-1129dfd840e4 - - - - - -] [instance: d749b709-79b8-40b6-8c2e-4d301bdc8e67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:34 compute-0 sudo[391809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:43:34 compute-0 sudo[391809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:34 compute-0 sudo[391809]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:34 compute-0 sudo[391840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:43:34 compute-0 sudo[391840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:43:34 compute-0 sudo[391840]: pam_unix(sudo:session): session closed for user root
Nov 22 09:43:34 compute-0 podman[391833]: 2025-11-22 09:43:34.626354182 +0000 UTC m=+0.117541773 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:43:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:35.290 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:35.291 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:43:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:35.292 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:35 compute-0 nova_compute[253661]: 2025-11-22 09:43:35.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:35 compute-0 ceph-mon[75021]: pgmap v2450: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 121 op/s
Nov 22 09:43:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:43:35 compute-0 nova_compute[253661]: 2025-11-22 09:43:35.990 253665 DEBUG nova.compute.manager [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:35 compute-0 nova_compute[253661]: 2025-11-22 09:43:35.990 253665 DEBUG nova.compute.manager [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing instance network info cache due to event network-changed-a52ac974-091c-475f-a3fd-ef86fab8b0d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:35 compute-0 nova_compute[253661]: 2025-11-22 09:43:35.990 253665 DEBUG oslo_concurrency.lockutils [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 20 KiB/s wr, 82 op/s
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.261 253665 DEBUG nova.network.neutron [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updated VIF entry in instance network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.261 253665 DEBUG nova.network.neutron [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updating instance_info_cache with network_info: [{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.284 253665 DEBUG oslo_concurrency.lockutils [req-01ec10dc-32e6-48f2-96ad-937d273cf029 req-3fb61fe6-38df-4550-8c16-226856c23a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.285 253665 DEBUG oslo_concurrency.lockutils [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.285 253665 DEBUG nova.network.neutron [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Refreshing network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:36 compute-0 nova_compute[253661]: 2025-11-22 09:43:36.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:37 compute-0 nova_compute[253661]: 2025-11-22 09:43:37.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:37 compute-0 ceph-mon[75021]: pgmap v2451: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 20 KiB/s wr, 82 op/s
Nov 22 09:43:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 101 op/s
Nov 22 09:43:38 compute-0 nova_compute[253661]: 2025-11-22 09:43:38.348 253665 DEBUG nova.network.neutron [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updated VIF entry in instance network info cache for port a52ac974-091c-475f-a3fd-ef86fab8b0d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:38 compute-0 nova_compute[253661]: 2025-11-22 09:43:38.349 253665 DEBUG nova.network.neutron [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updating instance_info_cache with network_info: [{"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:38 compute-0 nova_compute[253661]: 2025-11-22 09:43:38.366 253665 DEBUG oslo_concurrency.lockutils [req-42c1cf28-9de3-48cc-a72b-bcfa531a1977 req-b1890d2e-a620-4a94-8964-e0b17a740dd4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2f0b41ce-f4ed-4358-ac93-eb662789bea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:38 compute-0 nova_compute[253661]: 2025-11-22 09:43:38.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:39 compute-0 ceph-mon[75021]: pgmap v2452: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 101 op/s
Nov 22 09:43:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Nov 22 09:43:41 compute-0 nova_compute[253661]: 2025-11-22 09:43:41.270 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804606.268899, 61d505d6-e331-4728-9e1b-ffbb4cc5ea50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:43:41 compute-0 nova_compute[253661]: 2025-11-22 09:43:41.271 253665 INFO nova.compute.manager [-] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] VM Stopped (Lifecycle Event)
Nov 22 09:43:41 compute-0 nova_compute[253661]: 2025-11-22 09:43:41.301 253665 DEBUG nova.compute.manager [None req-d0294187-b635-4500-8da2-74fc15bdf69a - - - - - -] [instance: 61d505d6-e331-4728-9e1b-ffbb4cc5ea50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:43:41 compute-0 nova_compute[253661]: 2025-11-22 09:43:41.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:41 compute-0 ceph-mon[75021]: pgmap v2453: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Nov 22 09:43:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 89 op/s
Nov 22 09:43:42 compute-0 nova_compute[253661]: 2025-11-22 09:43:42.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:42 compute-0 nova_compute[253661]: 2025-11-22 09:43:42.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:43 compute-0 ceph-mon[75021]: pgmap v2454: 305 pgs: 305 active+clean; 167 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 89 op/s
Nov 22 09:43:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 171 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 490 KiB/s wr, 83 op/s
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.513 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.515 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:43:44 compute-0 nova_compute[253661]: 2025-11-22 09:43:44.515 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9712bbe7-5d4c-41ad-8725-d063d344ef31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:45 compute-0 nova_compute[253661]: 2025-11-22 09:43:45.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:45 compute-0 ceph-mon[75021]: pgmap v2455: 305 pgs: 305 active+clean; 171 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 490 KiB/s wr, 83 op/s
Nov 22 09:43:45 compute-0 ovn_controller[152872]: 2025-11-22T09:43:45Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:8c:a9 10.100.0.4
Nov 22 09:43:45 compute-0 ovn_controller[152872]: 2025-11-22T09:43:45Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:8c:a9 10.100.0.4
Nov 22 09:43:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 171 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 635 KiB/s rd, 489 KiB/s wr, 30 op/s
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.299 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.313 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.313 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.314 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.314 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:46 compute-0 nova_compute[253661]: 2025-11-22 09:43:46.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:47 compute-0 nova_compute[253661]: 2025-11-22 09:43:47.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:47 compute-0 ceph-mon[75021]: pgmap v2456: 305 pgs: 305 active+clean; 171 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 635 KiB/s rd, 489 KiB/s wr, 30 op/s
Nov 22 09:43:47 compute-0 nova_compute[253661]: 2025-11-22 09:43:47.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 182 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Nov 22 09:43:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:43:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 40K writes, 162K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 13K syncs, 2.92 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8891 writes, 37K keys, 8891 commit groups, 1.0 writes per commit group, ingest: 43.50 MB, 0.07 MB/s
                                           Interval WAL: 8891 writes, 3265 syncs, 2.72 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:43:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:49 compute-0 nova_compute[253661]: 2025-11-22 09:43:49.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:49 compute-0 nova_compute[253661]: 2025-11-22 09:43:49.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:49 compute-0 nova_compute[253661]: 2025-11-22 09:43:49.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:43:49 compute-0 nova_compute[253661]: 2025-11-22 09:43:49.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:49 compute-0 ceph-mon[75021]: pgmap v2457: 305 pgs: 305 active+clean; 182 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Nov 22 09:43:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.383 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.383 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.384 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.384 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.384 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.386 253665 INFO nova.compute.manager [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Terminating instance
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.387 253665 DEBUG nova.compute.manager [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:43:50 compute-0 kernel: tapa52ac974-09 (unregistering): left promiscuous mode
Nov 22 09:43:50 compute-0 NetworkManager[48920]: <info>  [1763804630.4546] device (tapa52ac974-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:50 compute-0 ovn_controller[152872]: 2025-11-22T09:43:50Z|01453|binding|INFO|Releasing lport a52ac974-091c-475f-a3fd-ef86fab8b0d7 from this chassis (sb_readonly=0)
Nov 22 09:43:50 compute-0 ovn_controller[152872]: 2025-11-22T09:43:50Z|01454|binding|INFO|Setting lport a52ac974-091c-475f-a3fd-ef86fab8b0d7 down in Southbound
Nov 22 09:43:50 compute-0 ovn_controller[152872]: 2025-11-22T09:43:50Z|01455|binding|INFO|Removing iface tapa52ac974-09 ovn-installed in OVS
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.474 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8c:a9 10.100.0.4', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f0b41ce-f4ed-4358-ac93-eb662789bea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8d1ab5-5912-4930-8d33-fd5a1b07be2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a52ac974-091c-475f-a3fd-ef86fab8b0d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.475 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a52ac974-091c-475f-a3fd-ef86fab8b0d7 in datapath d0ae8a74-13a0-46e6-ad55-3404cb0e971f unbound from our chassis
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.477 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0ae8a74-13a0-46e6-ad55-3404cb0e971f
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a46d73d-682a-44bd-9543-4db3f47d4fdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.536 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd3e4d2-7fff-4934-a7d1-d9b42913d446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.540 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c94373e8-b3ae-4fec-8d2a-578e72b90442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000086.scope: Consumed 13.882s CPU time.
Nov 22 09:43:50 compute-0 systemd-machined[215941]: Machine qemu-165-instance-00000086 terminated.
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.579 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a779b895-d6b4-4d10-b469-be6904050a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.603 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6745e4-0e40-43a2-9df3-dc35a7748f86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ae8a74-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d3:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742577, 'reachable_time': 43515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391895, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.624 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cce4fdc6-0fc9-4346-9bdb-e0a914543e48]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0ae8a74-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742592, 'tstamp': 742592}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391901, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0ae8a74-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742595, 'tstamp': 742595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391901, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.625 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ae8a74-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.626 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Instance destroyed successfully.
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.627 253665 DEBUG nova.objects.instance [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 2f0b41ce-f4ed-4358-ac93-eb662789bea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0ae8a74-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0ae8a74-10, col_values=(('external_ids', {'iface-id': 'a6ff8b0d-d874-4d01-a908-4435ceea7174'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:50 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:50.631 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.646 253665 DEBUG nova.virt.libvirt.vif [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-1414593885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=134,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:43:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-5ktam00t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:43:31Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=2f0b41ce-f4ed-4358-ac93-eb662789bea1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.647 253665 DEBUG nova.network.os_vif_util [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "address": "fa:16:3e:8c:8c:a9", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa52ac974-09", "ovs_interfaceid": "a52ac974-091c-475f-a3fd-ef86fab8b0d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.648 253665 DEBUG nova.network.os_vif_util [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.648 253665 DEBUG os_vif [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.650 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa52ac974-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:43:50 compute-0 nova_compute[253661]: 2025-11-22 09:43:50.657 253665 INFO os_vif [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8c:a9,bridge_name='br-int',has_traffic_filtering=True,id=a52ac974-091c-475f-a3fd-ef86fab8b0d7,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa52ac974-09')
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.075 253665 DEBUG nova.compute.manager [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-unplugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.076 253665 DEBUG oslo_concurrency.lockutils [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.076 253665 DEBUG oslo_concurrency.lockutils [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.076 253665 DEBUG oslo_concurrency.lockutils [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.076 253665 DEBUG nova.compute.manager [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] No waiting events found dispatching network-vif-unplugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.077 253665 DEBUG nova.compute.manager [req-2cf28140-62dd-42bb-ba4b-e6b3b2f93b7f req-768845d5-256f-4bb7-abff-379937c01dfd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-unplugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.096 253665 INFO nova.virt.libvirt.driver [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Deleting instance files /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1_del
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.097 253665 INFO nova.virt.libvirt.driver [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Deletion of /var/lib/nova/instances/2f0b41ce-f4ed-4358-ac93-eb662789bea1_del complete
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.275 253665 INFO nova.compute.manager [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Took 0.89 seconds to destroy the instance on the hypervisor.
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.275 253665 DEBUG oslo.service.loopingcall [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.276 253665 DEBUG nova.compute.manager [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.276 253665 DEBUG nova.network.neutron [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.278 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.278 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.279 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.279 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.279 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:51 compute-0 ceph-mon[75021]: pgmap v2458: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667349524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.731 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.821 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:43:51 compute-0 nova_compute[253661]: 2025-11-22 09:43:51.822 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.003 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.004 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3462MB free_disk=59.89727020263672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.005 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.005 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 181 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9712bbe7-5d4c-41ad-8725-d063d344ef31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2f0b41ce-f4ed-4358-ac93-eb662789bea1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:43:52
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'vms', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.339 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.422 253665 DEBUG nova.network.neutron [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.441 253665 INFO nova.compute.manager [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Took 1.16 seconds to deallocate network for instance.
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.512 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/667349524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:43:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:43:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506107929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.818 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.826 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.839 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.866 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.867 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.867 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:52 compute-0 nova_compute[253661]: 2025-11-22 09:43:52.957 253665 DEBUG oslo_concurrency.processutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.033 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.034 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.054 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.129 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.161 253665 DEBUG nova.compute.manager [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.162 253665 DEBUG oslo_concurrency.lockutils [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.162 253665 DEBUG oslo_concurrency.lockutils [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.162 253665 DEBUG oslo_concurrency.lockutils [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.163 253665 DEBUG nova.compute.manager [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] No waiting events found dispatching network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.163 253665 WARNING nova.compute.manager [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received unexpected event network-vif-plugged-a52ac974-091c-475f-a3fd-ef86fab8b0d7 for instance with vm_state deleted and task_state None.
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.163 253665 DEBUG nova.compute.manager [req-70acefe9-f16f-433f-80cc-61f8aa1f8937 req-5ca00ac4-4fd7-4346-91ce-5e7c48e0c27e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Received event network-vif-deleted-a52ac974-091c-475f-a3fd-ef86fab8b0d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285577269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.419 253665 DEBUG oslo_concurrency.processutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.424 253665 DEBUG nova.compute.provider_tree [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.460 253665 DEBUG nova.scheduler.client.report [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.484 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.487 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.495 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.496 253665 INFO nova.compute.claims [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.520 253665 INFO nova.scheduler.client.report [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 2f0b41ce-f4ed-4358-ac93-eb662789bea1
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.592 253665 DEBUG oslo_concurrency.lockutils [None req-0fcd132c-85b7-47c8-ab55-6659148d8d1c 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "2f0b41ce-f4ed-4358-ac93-eb662789bea1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:53 compute-0 nova_compute[253661]: 2025-11-22 09:43:53.617 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:53 compute-0 ceph-mon[75021]: pgmap v2459: 305 pgs: 305 active+clean; 181 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 09:43:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/506107929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1285577269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:43:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091016248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.188 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.195 253665 DEBUG nova.compute.provider_tree [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.209 253665 DEBUG nova.scheduler.client.report [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.230 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.230 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.272 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.273 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.286 253665 INFO nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.300 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.369 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.370 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.371 253665 INFO nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Creating image(s)
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.395 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.417 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.442 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.447 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.526 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.527 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.527 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.528 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.551 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.555 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dab57683-82b6-44b3-b663-556a4f0e3dab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4091016248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.877 253665 DEBUG nova.policy [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:43:54 compute-0 nova_compute[253661]: 2025-11-22 09:43:54.993 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dab57683-82b6-44b3-b663-556a4f0e3dab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.051 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:43:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:43:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.2 total, 600.0 interval
                                           Cumulative writes: 40K writes, 160K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.87 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7646 writes, 32K keys, 7646 commit groups, 1.0 writes per commit group, ingest: 36.80 MB, 0.06 MB/s
                                           Interval WAL: 7646 writes, 2929 syncs, 2.61 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.165 253665 DEBUG nova.objects.instance [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid dab57683-82b6-44b3-b663-556a4f0e3dab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.180 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.180 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Ensure instance console log exists: /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.181 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.181 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:43:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:43:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:43:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:43:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.856 253665 DEBUG nova.compute.manager [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.857 253665 DEBUG nova.compute.manager [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing instance network info cache due to event network-changed-29eab695-fcfb-43d5-b708-0f720ee0fc39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.857 253665 DEBUG oslo_concurrency.lockutils [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.857 253665 DEBUG oslo_concurrency.lockutils [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.857 253665 DEBUG nova.network.neutron [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Refreshing network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.858 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:55 compute-0 ceph-mon[75021]: pgmap v2460: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.937 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.937 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.937 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.937 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.938 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.939 253665 INFO nova.compute.manager [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Terminating instance
Nov 22 09:43:55 compute-0 nova_compute[253661]: 2025-11-22 09:43:55.939 253665 DEBUG nova.compute.manager [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:43:55 compute-0 kernel: tap29eab695-fc (unregistering): left promiscuous mode
Nov 22 09:43:56 compute-0 NetworkManager[48920]: <info>  [1763804636.0050] device (tap29eab695-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:43:56 compute-0 ovn_controller[152872]: 2025-11-22T09:43:56Z|01456|binding|INFO|Releasing lport 29eab695-fcfb-43d5-b708-0f720ee0fc39 from this chassis (sb_readonly=0)
Nov 22 09:43:56 compute-0 ovn_controller[152872]: 2025-11-22T09:43:56Z|01457|binding|INFO|Setting lport 29eab695-fcfb-43d5-b708-0f720ee0fc39 down in Southbound
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.010 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 ovn_controller[152872]: 2025-11-22T09:43:56Z|01458|binding|INFO|Removing iface tap29eab695-fc ovn-installed in OVS
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.020 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:93 10.100.0.3'], port_security=['fa:16:3e:c1:cf:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9712bbe7-5d4c-41ad-8725-d063d344ef31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3612e32-3934-4c5d-9617-e49cc61dda86 ec2b4fe9-a0a4-4ce7-b67b-b101ede1b3af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8d1ab5-5912-4930-8d33-fd5a1b07be2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=29eab695-fcfb-43d5-b708-0f720ee0fc39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.021 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 29eab695-fcfb-43d5-b708-0f720ee0fc39 in datapath d0ae8a74-13a0-46e6-ad55-3404cb0e971f unbound from our chassis
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.022 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0ae8a74-13a0-46e6-ad55-3404cb0e971f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.025 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1e487a-dca6-4ff4-9e16-0c56a7586c1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.025 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f namespace which is not needed anymore
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 22 09:43:56 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Consumed 15.698s CPU time.
Nov 22 09:43:56 compute-0 systemd-machined[215941]: Machine qemu-163-instance-00000084 terminated.
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 1.7 MiB/s wr, 82 op/s
Nov 22 09:43:56 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [NOTICE]   (389801) : haproxy version is 2.8.14-c23fe91
Nov 22 09:43:56 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [NOTICE]   (389801) : path to executable is /usr/sbin/haproxy
Nov 22 09:43:56 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [WARNING]  (389801) : Exiting Master process...
Nov 22 09:43:56 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [ALERT]    (389801) : Current worker (389803) exited with code 143 (Terminated)
Nov 22 09:43:56 compute-0 neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f[389797]: [WARNING]  (389801) : All workers exited. Exiting... (0)
Nov 22 09:43:56 compute-0 systemd[1]: libpod-0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa.scope: Deactivated successfully.
Nov 22 09:43:56 compute-0 podman[392210]: 2025-11-22 09:43:56.155765107 +0000 UTC m=+0.047988426 container died 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.156 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.170 253665 INFO nova.virt.libvirt.driver [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Instance destroyed successfully.
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.172 253665 DEBUG nova.objects.instance [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 9712bbe7-5d4c-41ad-8725-d063d344ef31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa-userdata-shm.mount: Deactivated successfully.
Nov 22 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8db5ec8a4c513b2bd8a996899fae508998ec3f845793f58606bdbbc7e9e1913a-merged.mount: Deactivated successfully.
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.197 253665 DEBUG nova.virt.libvirt.vif [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:42:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-316962792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=132,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDAinPFxvJPb+F5aFOzMr/F2AeCA+zy9wSOLs/tEMQaNXxaZ2HNnh0s+13uWL7hsaz7nbRxU8LxOIxkt/PQ9hxO5mowXPIJIN0kjLr/YjJtROiFYKcno5ZGoM+YASq6MA==',key_name='tempest-TestSecurityGroupsBasicOps-33766932',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:42:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-j0u3e3ck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:42:55Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=9712bbe7-5d4c-41ad-8725-d063d344ef31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.198 253665 DEBUG nova.network.os_vif_util [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.199 253665 DEBUG nova.network.os_vif_util [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.199 253665 DEBUG os_vif [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.201 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29eab695-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.204 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.206 253665 INFO os_vif [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:cf:93,bridge_name='br-int',has_traffic_filtering=True,id=29eab695-fcfb-43d5-b708-0f720ee0fc39,network=Network(d0ae8a74-13a0-46e6-ad55-3404cb0e971f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29eab695-fc')
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:56 compute-0 podman[392210]: 2025-11-22 09:43:56.222784182 +0000 UTC m=+0.115007471 container cleanup 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:43:56 compute-0 systemd[1]: libpod-conmon-0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa.scope: Deactivated successfully.
Nov 22 09:43:56 compute-0 podman[392263]: 2025-11-22 09:43:56.308843318 +0000 UTC m=+0.062241999 container remove 0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.314 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0efec6f6-44a5-43b6-b313-a4124d3aa998]: (4, ('Sat Nov 22 09:43:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f (0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa)\n0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa\nSat Nov 22 09:43:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f (0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa)\n0e1452803d78806c9242d7e422a490f5b2e7be468ae5cddc02ce03577fc0dffa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd48f99-cd86-47e0-bf83-73c24e4674d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.317 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ae8a74-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 kernel: tapd0ae8a74-10: left promiscuous mode
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.336 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8ef145-a36f-4136-abdb-1d45186a9b24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[403ef175-f023-4499-929b-9a693620cfb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64afdaea-442b-45c1-975d-4b404314bfc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.364 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3818079-0dd4-4a21-a5c8-8d43bedaa277]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742567, 'reachable_time': 30152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392281, 'error': None, 'target': 'ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0ae8a74\x2d13a0\x2d46e6\x2dad55\x2d3404cb0e971f.mount: Deactivated successfully.
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.367 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0ae8a74-13a0-46e6-ad55-3404cb0e971f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:43:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:43:56.367 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0046f0d1-2e99-4307-bce9-f646e7baa909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.589 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Successfully created port: 7f5f15bb-83ef-4c81-8585-c447323ac70f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:43:56 compute-0 nova_compute[253661]: 2025-11-22 09:43:56.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:43:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:43:57 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:57 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.862 253665 DEBUG nova.network.neutron [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updated VIF entry in instance network info cache for port 29eab695-fcfb-43d5-b708-0f720ee0fc39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:43:57 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.862 253665 DEBUG nova.network.neutron [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [{"id": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "address": "fa:16:3e:c1:cf:93", "network": {"id": "d0ae8a74-13a0-46e6-ad55-3404cb0e971f", "bridge": "br-int", "label": "tempest-network-smoke--18253251", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29eab695-fc", "ovs_interfaceid": "29eab695-fcfb-43d5-b708-0f720ee0fc39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:57 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.878 253665 DEBUG oslo_concurrency.lockutils [req-ea909abd-d387-45d1-8552-3c8b715c18ac req-f4810c8b-3045-49f0-957e-8762fee901e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9712bbe7-5d4c-41ad-8725-d063d344ef31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:57 compute-0 ceph-mon[75021]: pgmap v2461: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 1.7 MiB/s wr, 82 op/s
Nov 22 09:43:57 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.997 253665 INFO nova.virt.libvirt.driver [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Deleting instance files /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31_del
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:57.999 253665 INFO nova.virt.libvirt.driver [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Deletion of /var/lib/nova/instances/9712bbe7-5d4c-41ad-8725-d063d344ef31_del complete
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.082 253665 INFO nova.compute.manager [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Took 2.14 seconds to destroy the instance on the hypervisor.
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.084 253665 DEBUG oslo.service.loopingcall [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.085 253665 DEBUG nova.compute.manager [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.085 253665 DEBUG nova.network.neutron [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.090 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Successfully updated port: 7f5f15bb-83ef-4c81-8585-c447323ac70f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.103 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.103 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.104 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:43:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 139 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.296 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.470 253665 DEBUG nova.compute.manager [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.470 253665 DEBUG nova.compute.manager [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:43:58 compute-0 nova_compute[253661]: 2025-11-22 09:43:58.470 253665 DEBUG oslo_concurrency.lockutils [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:43:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.005 253665 DEBUG nova.network.neutron [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.030 253665 INFO nova.compute.manager [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Took 0.94 seconds to deallocate network for instance.
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.074 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.075 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.109 253665 DEBUG nova.compute.manager [req-b1bcd69e-8445-4f56-936b-548c068d51d4 req-94bb7ed5-3115-44d0-9a6d-41245c18ab8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Received event network-vif-deleted-29eab695-fcfb-43d5-b708-0f720ee0fc39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.167 253665 DEBUG oslo_concurrency.processutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.303 253665 DEBUG nova.network.neutron [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.316 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.316 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance network_info: |[{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.317 253665 DEBUG oslo_concurrency.lockutils [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.317 253665 DEBUG nova.network.neutron [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.320 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start _get_guest_xml network_info=[{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.332 253665 WARNING nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.338 253665 DEBUG nova.virt.libvirt.host [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.339 253665 DEBUG nova.virt.libvirt.host [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.343 253665 DEBUG nova.virt.libvirt.host [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.343 253665 DEBUG nova.virt.libvirt.host [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.344 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.344 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.345 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.345 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.345 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.345 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.345 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.346 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.346 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.346 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.346 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.346 253665 DEBUG nova.virt.hardware [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.350 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:43:59 compute-0 podman[392286]: 2025-11-22 09:43:59.367799511 +0000 UTC m=+0.060344151 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:43:59 compute-0 podman[392284]: 2025-11-22 09:43:59.377231054 +0000 UTC m=+0.072614094 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 09:43:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:43:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277654219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.634 253665 DEBUG oslo_concurrency.processutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.641 253665 DEBUG nova.compute.provider_tree [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.653 253665 DEBUG nova.scheduler.client.report [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.675 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.728 253665 INFO nova.scheduler.client.report [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 9712bbe7-5d4c-41ad-8725-d063d344ef31
Nov 22 09:43:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:43:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919763124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.791 253665 DEBUG oslo_concurrency.lockutils [None req-08e6eae9-4fb0-47bd-a987-9c364b751052 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "9712bbe7-5d4c-41ad-8725-d063d344ef31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.795 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.815 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:43:59 compute-0 nova_compute[253661]: 2025-11-22 09:43:59.819 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:00 compute-0 ceph-mon[75021]: pgmap v2462: 305 pgs: 305 active+clean; 139 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Nov 22 09:44:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3277654219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2919763124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 112 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.8 MiB/s wr, 101 op/s
Nov 22 09:44:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762813243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.306 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.308 253665 DEBUG nova.virt.libvirt.vif [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1917813057',display_name='tempest-TestNetworkBasicOps-server-1917813057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1917813057',id=135,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7RKy18xj59Mrr1Qz0apb8VM0RVo9aCtcuaRLK/Njyb/8H+0bdEC3XqXWMpAl+tfEMf3lBrH+nx/Y/xtjJAjEL/9WZ1nk79dDJUwDIjACi8kN3FW6TUbGYNm9djoYtMsA==',key_name='tempest-TestNetworkBasicOps-1265217701',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1ixz1m3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:43:54Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=dab57683-82b6-44b3-b663-556a4f0e3dab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.308 253665 DEBUG nova.network.os_vif_util [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.309 253665 DEBUG nova.network.os_vif_util [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.310 253665 DEBUG nova.objects.instance [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid dab57683-82b6-44b3-b663-556a4f0e3dab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.334 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <uuid>dab57683-82b6-44b3-b663-556a4f0e3dab</uuid>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <name>instance-00000087</name>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-1917813057</nova:name>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:43:59</nova:creationTime>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <nova:port uuid="7f5f15bb-83ef-4c81-8585-c447323ac70f">
Nov 22 09:44:00 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <system>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="serial">dab57683-82b6-44b3-b663-556a4f0e3dab</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="uuid">dab57683-82b6-44b3-b663-556a4f0e3dab</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </system>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <os>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </os>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <features>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </features>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/dab57683-82b6-44b3-b663-556a4f0e3dab_disk">
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config">
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:00 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:e3:14:00"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <target dev="tap7f5f15bb-83"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/console.log" append="off"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <video>
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </video>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:44:00 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:44:00 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:44:00 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:44:00 compute-0 nova_compute[253661]: </domain>
Nov 22 09:44:00 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.335 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Preparing to wait for external event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.336 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.336 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.336 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.337 253665 DEBUG nova.virt.libvirt.vif [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1917813057',display_name='tempest-TestNetworkBasicOps-server-1917813057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1917813057',id=135,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7RKy18xj59Mrr1Qz0apb8VM0RVo9aCtcuaRLK/Njyb/8H+0bdEC3XqXWMpAl+tfEMf3lBrH+nx/Y/xtjJAjEL/9WZ1nk79dDJUwDIjACi8kN3FW6TUbGYNm9djoYtMsA==',key_name='tempest-TestNetworkBasicOps-1265217701',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1ixz1m3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:43:54Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=dab57683-82b6-44b3-b663-556a4f0e3dab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.338 253665 DEBUG nova.network.os_vif_util [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.338 253665 DEBUG nova.network.os_vif_util [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.339 253665 DEBUG os_vif [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.340 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.340 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.344 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f5f15bb-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.344 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f5f15bb-83, col_values=(('external_ids', {'iface-id': '7f5f15bb-83ef-4c81-8585-c447323ac70f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:14:00', 'vm-uuid': 'dab57683-82b6-44b3-b663-556a4f0e3dab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.390 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:00 compute-0 NetworkManager[48920]: <info>  [1763804640.3915] manager: (tap7f5f15bb-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.398 253665 INFO os_vif [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83')
Nov 22 09:44:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:44:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 129K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 10K syncs, 2.95 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6107 writes, 25K keys, 6107 commit groups, 1.0 writes per commit group, ingest: 28.44 MB, 0.05 MB/s
                                           Interval WAL: 6107 writes, 2367 syncs, 2.58 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.448 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.448 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.469 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.541 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.542 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.549 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.550 253665 INFO nova.compute.claims [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.574 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.575 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.575 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:e3:14:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.576 253665 INFO nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Using config drive
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.594 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:00 compute-0 nova_compute[253661]: 2025-11-22 09:44:00.658 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.065 253665 DEBUG nova.network.neutron [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.067 253665 DEBUG nova.network.neutron [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392023323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.081 253665 DEBUG oslo_concurrency.lockutils [req-de858952-0066-44b3-baa9-8bf95017caee req-db57cc55-ecb3-43c9-b521-cf6899b8f9fe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3762813243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.088 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.095 253665 DEBUG nova.compute.provider_tree [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.108 253665 DEBUG nova.scheduler.client.report [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.182 253665 INFO nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Creating config drive at /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.187 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7y3rzi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.329 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7y3rzi7" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.355 253665 DEBUG nova.storage.rbd_utils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.359 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.422 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.423 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.491 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.492 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.510 253665 INFO nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.529 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.615 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.617 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.618 253665 INFO nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Creating image(s)
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.639 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.660 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.685 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.689 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.773 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.774 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.775 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.775 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.796 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.799 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:01 compute-0 nova_compute[253661]: 2025-11-22 09:44:01.911 253665 DEBUG nova.policy [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 22 09:44:02 compute-0 ceph-mon[75021]: pgmap v2463: 305 pgs: 305 active+clean; 112 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.8 MiB/s wr, 101 op/s
Nov 22 09:44:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3392023323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:44:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.031 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Successfully created port: 8e5490c3-8e77-4f49-a612-31f17e0a3586 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.064 253665 DEBUG oslo_concurrency.processutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config dab57683-82b6-44b3-b663-556a4f0e3dab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.065 253665 INFO nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deleting local config drive /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab/disk.config because it was imported into RBD.
Nov 22 09:44:03 compute-0 kernel: tap7f5f15bb-83: entered promiscuous mode
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.1258] manager: (tap7f5f15bb-83): new Tun device (/org/freedesktop/NetworkManager/Devices/592)
Nov 22 09:44:03 compute-0 ovn_controller[152872]: 2025-11-22T09:44:03Z|01459|binding|INFO|Claiming lport 7f5f15bb-83ef-4c81-8585-c447323ac70f for this chassis.
Nov 22 09:44:03 compute-0 ovn_controller[152872]: 2025-11-22T09:44:03Z|01460|binding|INFO|7f5f15bb-83ef-4c81-8585-c447323ac70f: Claiming fa:16:3e:e3:14:00 10.100.0.12
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.137 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:14:00 10.100.0.12'], port_security=['fa:16:3e:e3:14:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dab57683-82b6-44b3-b663-556a4f0e3dab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7139b3cb-5e3b-45f1-be1c-957199bdba02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7f5f15bb-83ef-4c81-8585-c447323ac70f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.138 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7f5f15bb-83ef-4c81-8585-c447323ac70f in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 bound to our chassis
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.140 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:03 compute-0 ovn_controller[152872]: 2025-11-22T09:44:03Z|01461|binding|INFO|Setting lport 7f5f15bb-83ef-4c81-8585-c447323ac70f ovn-installed in OVS
Nov 22 09:44:03 compute-0 ovn_controller[152872]: 2025-11-22T09:44:03Z|01462|binding|INFO|Setting lport 7f5f15bb-83ef-4c81-8585-c447323ac70f up in Southbound
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.152 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85d9443f-e5c4-426e-ae02-28081fb1d789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.153 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapccaaf7d7-d1 in ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.155 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapccaaf7d7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2ed9af-409a-4807-9898-fbda280b778b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb8e653-9429-4553-b435-90f903684dd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 systemd-udevd[392593]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:03 compute-0 systemd-machined[215941]: New machine qemu-166-instance-00000087.
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.172 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[103ff832-da12-4ff9-abaf-f179fb7b890e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.1757] device (tap7f5f15bb-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.1764] device (tap7f5f15bb-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:03 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000087.
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2ae080-5f96-46d2-a151-02c36c23fe34]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.228 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3ba0cf-0fb6-406b-a6d7-e3a25703c43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.2355] manager: (tapccaaf7d7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/593)
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4a294fd4-86d8-4ebc-8926-6f25ee03c204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 systemd-udevd[392596]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.279 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[989c8845-8313-4fb2-b2e9-f469c3a73154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.283 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b0494a-83eb-4700-a283-ffa017415ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.3132] device (tapccaaf7d7-d0): carrier: link connected
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.319 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d666b68-d7f4-4d32-b32a-08e221d81555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[adb611ce-88ce-452c-acb7-8b5e4acda916]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccaaf7d7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:e3:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749404, 'reachable_time': 35674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392625, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[531c30ad-a476-4fae-9ff4-37e6b83077ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:e3b7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749404, 'tstamp': 749404}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392626, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.383 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d44877-7c91-4469-8263-ce003f3407ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccaaf7d7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:e3:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749404, 'reachable_time': 35674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392627, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73737492-c317-4ebd-8cd6-81fdaca92ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.445 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50222c4a-ada6-486a-9a3f-e58ed6b2b96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.484 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.484 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.485 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccaaf7d7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:03 compute-0 kernel: tapccaaf7d7-d0: entered promiscuous mode
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapccaaf7d7-d0, col_values=(('external_ids', {'iface-id': '9302a453-ce7d-475e-8f32-fad5f9f06dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:03 compute-0 ovn_controller[152872]: 2025-11-22T09:44:03Z|01463|binding|INFO|Releasing lport 9302a453-ce7d-475e-8f32-fad5f9f06dff from this chassis (sb_readonly=0)
Nov 22 09:44:03 compute-0 NetworkManager[48920]: <info>  [1763804643.4910] manager: (tapccaaf7d7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.508 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ccaaf7d7-d083-4f4d-9c25-562b3924cdc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ccaaf7d7-d083-4f4d-9c25-562b3924cdc3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f58bf74a-5c78-4a6f-a1fc-b6e4dacd4436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.510 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/ccaaf7d7-d083-4f4d-9c25-562b3924cdc3.pid.haproxy
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID ccaaf7d7-d083-4f4d-9c25-562b3924cdc3
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:44:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:03.510 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'env', 'PROCESS_TAG=haproxy-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ccaaf7d7-d083-4f4d-9c25-562b3924cdc3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.553 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:44:03 compute-0 ceph-mon[75021]: pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.713 253665 DEBUG nova.objects.instance [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.732 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.733 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Ensure instance console log exists: /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.734 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.734 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.734 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.785 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Successfully created port: 1010674e-1b87-43cb-97bd-6bca4325a7f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.793 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804643.7914412, dab57683-82b6-44b3-b663-556a4f0e3dab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.794 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] VM Started (Lifecycle Event)
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.814 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.821 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804643.791974, dab57683-82b6-44b3-b663-556a4f0e3dab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.822 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] VM Paused (Lifecycle Event)
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.839 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.844 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.870 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:03 compute-0 podman[392772]: 2025-11-22 09:44:03.920414989 +0000 UTC m=+0.058778943 container create 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:44:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.947 253665 DEBUG nova.compute.manager [req-bde106eb-2452-4aa9-9f05-14aeb8523e51 req-ca659966-11b2-45db-8648-8564ca643a07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.948 253665 DEBUG oslo_concurrency.lockutils [req-bde106eb-2452-4aa9-9f05-14aeb8523e51 req-ca659966-11b2-45db-8648-8564ca643a07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.948 253665 DEBUG oslo_concurrency.lockutils [req-bde106eb-2452-4aa9-9f05-14aeb8523e51 req-ca659966-11b2-45db-8648-8564ca643a07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.949 253665 DEBUG oslo_concurrency.lockutils [req-bde106eb-2452-4aa9-9f05-14aeb8523e51 req-ca659966-11b2-45db-8648-8564ca643a07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.949 253665 DEBUG nova.compute.manager [req-bde106eb-2452-4aa9-9f05-14aeb8523e51 req-ca659966-11b2-45db-8648-8564ca643a07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Processing event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.949 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.955 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804643.955103, dab57683-82b6-44b3-b663-556a4f0e3dab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.955 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] VM Resumed (Lifecycle Event)
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.957 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.961 253665 INFO nova.virt.libvirt.driver [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance spawned successfully.
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.961 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:44:03 compute-0 systemd[1]: Started libpod-conmon-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3.scope.
Nov 22 09:44:03 compute-0 podman[392772]: 2025-11-22 09:44:03.884340808 +0000 UTC m=+0.022704782 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.990 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:03 compute-0 nova_compute[253661]: 2025-11-22 09:44:03.994 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.002 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.003 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.003 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.003 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743f2a86a0af71728ae8cd7ad6b829b53bc590a580aa2c574008ce5dfdbe39ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.004 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.004 253665 DEBUG nova.virt.libvirt.driver [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.023 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:04 compute-0 podman[392772]: 2025-11-22 09:44:04.033127962 +0000 UTC m=+0.171491936 container init 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 09:44:04 compute-0 podman[392772]: 2025-11-22 09:44:04.03911258 +0000 UTC m=+0.177476534 container start 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.054 253665 INFO nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 9.68 seconds to spawn the instance on the hypervisor.
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.054 253665 DEBUG nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:04 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : New worker (392793) forked
Nov 22 09:44:04 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : Loading success.
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.122 253665 INFO nova.compute.manager [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 11.02 seconds to build instance.
Nov 22 09:44:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 121 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.135 253665 DEBUG oslo_concurrency.lockutils [None req-a4a46e58-c987-4459-8bae-e7f25d4eac39 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:04 compute-0 ovn_controller[152872]: 2025-11-22T09:44:04Z|01464|binding|INFO|Releasing lport 9302a453-ce7d-475e-8f32-fad5f9f06dff from this chassis (sb_readonly=0)
Nov 22 09:44:04 compute-0 nova_compute[253661]: 2025-11-22 09:44:04.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.071 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Successfully updated port: 8e5490c3-8e77-4f49-a612-31f17e0a3586 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:05 compute-0 ovn_controller[152872]: 2025-11-22T09:44:05Z|01465|binding|INFO|Releasing lport 9302a453-ce7d-475e-8f32-fad5f9f06dff from this chassis (sb_readonly=0)
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:05 compute-0 podman[392803]: 2025-11-22 09:44:05.398081376 +0000 UTC m=+0.078034227 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.588 253665 DEBUG nova.compute.manager [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.588 253665 DEBUG nova.compute.manager [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing instance network info cache due to event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.589 253665 DEBUG oslo_concurrency.lockutils [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.589 253665 DEBUG oslo_concurrency.lockutils [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.589 253665 DEBUG nova.network.neutron [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.625 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804630.6241179, 2f0b41ce-f4ed-4358-ac93-eb662789bea1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.626 253665 INFO nova.compute.manager [-] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] VM Stopped (Lifecycle Event)
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.643 253665 DEBUG nova.compute.manager [None req-0ef3fc4c-cbd5-4225-86be-553d6a577153 - - - - - -] [instance: 2f0b41ce-f4ed-4358-ac93-eb662789bea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:05 compute-0 ceph-mon[75021]: pgmap v2465: 305 pgs: 305 active+clean; 121 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 22 09:44:05 compute-0 nova_compute[253661]: 2025-11-22 09:44:05.835 253665 DEBUG nova.network.neutron [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.089 253665 DEBUG nova.compute.manager [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.090 253665 DEBUG oslo_concurrency.lockutils [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.091 253665 DEBUG oslo_concurrency.lockutils [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.091 253665 DEBUG oslo_concurrency.lockutils [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.091 253665 DEBUG nova.compute.manager [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.092 253665 WARNING nova.compute.manager [req-75229f04-b265-4f70-ac6a-86433994ac91 req-336a3b17-bb40-4050-9518-0f3676ea5e77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.
Nov 22 09:44:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 121 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.379 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Successfully updated port: 1010674e-1b87-43cb-97bd-6bca4325a7f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.396 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.477 253665 DEBUG nova.network.neutron [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.492 253665 DEBUG oslo_concurrency.lockutils [req-9d9e893e-d4a8-4c88-9401-b677a6be9317 req-8205e52e-0370-41a0-8b94-224d388a298c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.494 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.495 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:06 compute-0 nova_compute[253661]: 2025-11-22 09:44:06.838 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:07 compute-0 nova_compute[253661]: 2025-11-22 09:44:07.680 253665 DEBUG nova.compute.manager [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-changed-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:07 compute-0 nova_compute[253661]: 2025-11-22 09:44:07.681 253665 DEBUG nova.compute.manager [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing instance network info cache due to event network-changed-1010674e-1b87-43cb-97bd-6bca4325a7f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:07 compute-0 nova_compute[253661]: 2025-11-22 09:44:07.682 253665 DEBUG oslo_concurrency.lockutils [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:07 compute-0 ceph-mon[75021]: pgmap v2466: 305 pgs: 305 active+clean; 121 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 22 09:44:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:08 compute-0 NetworkManager[48920]: <info>  [1763804648.3721] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Nov 22 09:44:08 compute-0 NetworkManager[48920]: <info>  [1763804648.3729] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:08 compute-0 ovn_controller[152872]: 2025-11-22T09:44:08Z|01466|binding|INFO|Releasing lport 9302a453-ce7d-475e-8f32-fad5f9f06dff from this chassis (sb_readonly=0)
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.702 253665 DEBUG nova.network.neutron [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.716 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.716 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance network_info: |[{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.717 253665 DEBUG oslo_concurrency.lockutils [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.717 253665 DEBUG nova.network.neutron [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing network info cache for port 1010674e-1b87-43cb-97bd-6bca4325a7f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.720 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start _get_guest_xml network_info=[{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.725 253665 WARNING nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.734 253665 DEBUG nova.virt.libvirt.host [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.736 253665 DEBUG nova.virt.libvirt.host [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.740 253665 DEBUG nova.virt.libvirt.host [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.741 253665 DEBUG nova.virt.libvirt.host [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.741 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.741 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.742 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.742 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.742 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.743 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.743 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.743 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.744 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.744 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.744 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.744 253665 DEBUG nova.virt.hardware [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:44:08 compute-0 nova_compute[253661]: 2025-11-22 09:44:08.747 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833171795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.182 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.206 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.210 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.341 253665 DEBUG nova.compute.manager [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.342 253665 DEBUG nova.compute.manager [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.342 253665 DEBUG oslo_concurrency.lockutils [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.343 253665 DEBUG oslo_concurrency.lockutils [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.343 253665 DEBUG nova.network.neutron [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780904481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.652 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.653 253665 DEBUG nova.virt.libvirt.vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:01Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.654 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.655 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.656 253665 DEBUG nova.virt.libvirt.vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:01Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.656 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.657 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.658 253665 DEBUG nova.objects.instance [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.675 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <uuid>48af02cd-94c5-473f-a6f9-4d2caad8483f</uuid>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <name>instance-00000088</name>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1350591661</nova:name>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:44:08</nova:creationTime>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:port uuid="8e5490c3-8e77-4f49-a612-31f17e0a3586">
Nov 22 09:44:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <nova:port uuid="1010674e-1b87-43cb-97bd-6bca4325a7f9">
Nov 22 09:44:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee5:3fd" ipVersion="6"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="serial">48af02cd-94c5-473f-a6f9-4d2caad8483f</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="uuid">48af02cd-94c5-473f-a6f9-4d2caad8483f</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/48af02cd-94c5-473f-a6f9-4d2caad8483f_disk">
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config">
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ec:f8:55"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <target dev="tap8e5490c3-8e"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:e5:03:fd"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <target dev="tap1010674e-1b"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/console.log" append="off"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:44:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:44:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:44:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:44:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:44:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.691 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Preparing to wait for external event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.691 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Preparing to wait for external event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.692 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.693 253665 DEBUG nova.virt.libvirt.vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:01Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.693 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.694 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.694 253665 DEBUG os_vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.695 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.695 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e5490c3-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.703 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e5490c3-8e, col_values=(('external_ids', {'iface-id': '8e5490c3-8e77-4f49-a612-31f17e0a3586', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:f8:55', 'vm-uuid': '48af02cd-94c5-473f-a6f9-4d2caad8483f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 NetworkManager[48920]: <info>  [1763804649.7106] manager: (tap8e5490c3-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.721 253665 INFO os_vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e')
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.722 253665 DEBUG nova.virt.libvirt.vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:01Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.722 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.723 253665 DEBUG nova.network.os_vif_util [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.723 253665 DEBUG os_vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.724 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.724 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.727 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1010674e-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.727 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1010674e-1b, col_values=(('external_ids', {'iface-id': '1010674e-1b87-43cb-97bd-6bca4325a7f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:03:fd', 'vm-uuid': '48af02cd-94c5-473f-a6f9-4d2caad8483f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.728 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 NetworkManager[48920]: <info>  [1763804649.7288] manager: (tap1010674e-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.738 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.739 253665 INFO os_vif [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b')
Nov 22 09:44:09 compute-0 ceph-mon[75021]: pgmap v2467: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Nov 22 09:44:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/833171795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2780904481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.883 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.883 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.883 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:ec:f8:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.883 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:e5:03:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.884 253665 INFO nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Using config drive
Nov 22 09:44:09 compute-0 nova_compute[253661]: 2025-11-22 09:44:09.904 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.212 253665 DEBUG nova.network.neutron [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updated VIF entry in instance network info cache for port 1010674e-1b87-43cb-97bd-6bca4325a7f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.212 253665 DEBUG nova.network.neutron [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.229 253665 DEBUG oslo_concurrency.lockutils [req-9067739c-b10e-4202-a1e5-df78c4532f8c req-daf0215b-e820-47a0-baef-880aad4e9afa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.499 253665 INFO nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Creating config drive at /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.504 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3jjred06 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.652 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3jjred06" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.676 253665 DEBUG nova.storage.rbd_utils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.680 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.762 253665 DEBUG nova.network.neutron [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.764 253665 DEBUG nova.network.neutron [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.807 253665 DEBUG oslo_concurrency.lockutils [req-f01f1a33-b5ef-4085-908b-1178a61e9922 req-3335ecd2-2bbc-4f42-93fe-0e24e4572921 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:10 compute-0 nova_compute[253661]: 2025-11-22 09:44:10.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.168 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804636.1664662, 9712bbe7-5d4c-41ad-8725-d063d344ef31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.168 253665 INFO nova.compute.manager [-] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] VM Stopped (Lifecycle Event)
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.186 253665 DEBUG nova.compute.manager [None req-c7ce21ef-4195-415e-8074-2724e5abf2b1 - - - - - -] [instance: 9712bbe7-5d4c-41ad-8725-d063d344ef31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.843 253665 DEBUG oslo_concurrency.processutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config 48af02cd-94c5-473f-a6f9-4d2caad8483f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.844 253665 INFO nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deleting local config drive /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f/disk.config because it was imported into RBD.
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.8960] manager: (tap8e5490c3-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/599)
Nov 22 09:44:11 compute-0 kernel: tap8e5490c3-8e: entered promiscuous mode
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01467|binding|INFO|Claiming lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 for this chassis.
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01468|binding|INFO|8e5490c3-8e77-4f49-a612-31f17e0a3586: Claiming fa:16:3e:ec:f8:55 10.100.0.4
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.914 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f8:55 10.100.0.4'], port_security=['fa:16:3e:ec:f8:55 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8e5490c3-8e77-4f49-a612-31f17e0a3586) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.915 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8e5490c3-8e77-4f49-a612-31f17e0a3586 in datapath 621dd092-e20a-432f-8488-41d7fcd69532 bound to our chassis
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.920 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621dd092-e20a-432f-8488-41d7fcd69532
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.9300] manager: (tap1010674e-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Nov 22 09:44:11 compute-0 kernel: tap1010674e-1b: entered promiscuous mode
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01469|binding|INFO|Setting lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 ovn-installed in OVS
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01470|binding|INFO|Setting lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 up in Southbound
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01471|if_status|INFO|Dropped 1 log messages in last 80 seconds (most recently, 80 seconds ago) due to excessive rate
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01472|if_status|INFO|Not updating pb chassis for 1010674e-1b87-43cb-97bd-6bca4325a7f9 now as sb is readonly
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01473|binding|INFO|Claiming lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 for this chassis.
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01474|binding|INFO|1010674e-1b87-43cb-97bd-6bca4325a7f9: Claiming fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.936 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b371f326-f03e-4c2a-81dc-8f811482b7d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.938 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap621dd092-e1 in ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.942 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], port_security=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee5:3fd/64', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1010674e-1b87-43cb-97bd-6bca4325a7f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.942 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap621dd092-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.942 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38856c41-33fc-48f2-aea1-5ab1ff722076]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.945 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af74cad6-441a-488f-927f-9e1327c57a1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.955 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c6d83e-389f-410f-8017-dfa9e42341ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01475|binding|INFO|Setting lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 ovn-installed in OVS
Nov 22 09:44:11 compute-0 ovn_controller[152872]: 2025-11-22T09:44:11Z|01476|binding|INFO|Setting lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 up in Southbound
Nov 22 09:44:11 compute-0 nova_compute[253661]: 2025-11-22 09:44:11.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:11 compute-0 systemd-udevd[392975]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:11 compute-0 systemd-udevd[392976]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.9766] device (tap1010674e-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.9777] device (tap1010674e-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.9794] device (tap8e5490c3-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:11 compute-0 NetworkManager[48920]: <info>  [1763804651.9806] device (tap8e5490c3-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:11.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0375bd39-6ac1-4d4e-be95-ff7858c3418a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:11 compute-0 systemd-machined[215941]: New machine qemu-167-instance-00000088.
Nov 22 09:44:11 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000088.
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.027 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[80ad0d20-2b3d-453b-ae05-1e61fa218eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c65dace-2bbd-494f-b5e4-adc9bb8e6af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 NetworkManager[48920]: <info>  [1763804652.0329] manager: (tap621dd092-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.069 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14f6da73-d2fc-4ce3-ab87-e2995b3ec625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.071 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bbead8c6-6661-48c5-9038-fa76ac81402d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 NetworkManager[48920]: <info>  [1763804652.0962] device (tap621dd092-e0): carrier: link connected
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.101 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[185fad9a-6260-4d5b-acf8-76aba76239d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.119 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49602930-5593-418d-a29f-98d5d2e1f3b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 39139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393008, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.138 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5d3cad-8a14-40ee-ae30-75622c8c7fdd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:9d3e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750283, 'tstamp': 750283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393009, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0eef699-66ea-4a37-9d12-e8e68b702aa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 39139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393010, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4910eb-adb9-4644-ad51-d90fa2736406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ceph-mon[75021]: pgmap v2468: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.241 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa69012-e73f-452e-ab10-da1a3809a998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.242 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.242 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.243 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621dd092-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:12 compute-0 NetworkManager[48920]: <info>  [1763804652.2449] manager: (tap621dd092-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Nov 22 09:44:12 compute-0 kernel: tap621dd092-e0: entered promiscuous mode
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.254 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621dd092-e0, col_values=(('external_ids', {'iface-id': 'ce538828-218d-4def-9bed-efeb786012c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:12 compute-0 ovn_controller[152872]: 2025-11-22T09:44:12Z|01477|binding|INFO|Releasing lport ce538828-218d-4def-9bed-efeb786012c8 from this chassis (sb_readonly=0)
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.261 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/621dd092-e20a-432f-8488-41d7fcd69532.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/621dd092-e20a-432f-8488-41d7fcd69532.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.262 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab21f2a5-61d6-43da-b791-37a5f2dbd1dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.262 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-621dd092-e20a-432f-8488-41d7fcd69532
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/621dd092-e20a-432f-8488-41d7fcd69532.pid.haproxy
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 621dd092-e20a-432f-8488-41d7fcd69532
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.264 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'env', 'PROCESS_TAG=haproxy-621dd092-e20a-432f-8488-41d7fcd69532', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/621dd092-e20a-432f-8488-41d7fcd69532.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:44:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335326629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:44:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:44:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335326629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.443 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804652.4425306, 48af02cd-94c5-473f-a6f9-4d2caad8483f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.443 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] VM Started (Lifecycle Event)
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.495 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.502 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804652.4430459, 48af02cd-94c5-473f-a6f9-4d2caad8483f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.502 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] VM Paused (Lifecycle Event)
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.528 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.531 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:12 compute-0 nova_compute[253661]: 2025-11-22 09:44:12.556 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:12 compute-0 podman[393085]: 2025-11-22 09:44:12.688626711 +0000 UTC m=+0.066463082 container create fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:12 compute-0 podman[393085]: 2025-11-22 09:44:12.642667936 +0000 UTC m=+0.020504327 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:44:12 compute-0 systemd[1]: Started libpod-conmon-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope.
Nov 22 09:44:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec57aa6059e798a6228ab70569192414cfc2ac4701797ee13c0ca77e6733e50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:12 compute-0 podman[393085]: 2025-11-22 09:44:12.796389122 +0000 UTC m=+0.174225493 container init fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:44:12 compute-0 podman[393085]: 2025-11-22 09:44:12.801911978 +0000 UTC m=+0.179748349 container start fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:44:12 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : New worker (393106) forked
Nov 22 09:44:12 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : Loading success.
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.872 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1010674e-1b87-43cb-97bd-6bca4325a7f9 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.875 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cbb4a4-90fa-4416-8e0b-18f8ed63bce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.888 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a504de2-21 in ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.890 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a504de2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[998a5744-c6e0-42fc-b34a-18f20d68e23d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.891 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c565d8d5-b9c1-471f-aaee-a4ebbe30a34a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.901 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e763140c-3ab0-45f0-916f-7045703c5a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.917 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea202ed9-3032-4b39-be94-fb963c8fefe6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.949 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ad184bdf-82ef-440b-ad66-bd308fee3a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a36c9c7-d250-4d2a-ae30-616dba611270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:12 compute-0 NetworkManager[48920]: <info>  [1763804652.9569] manager: (tap7a504de2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Nov 22 09:44:12 compute-0 systemd-udevd[392992]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:12.997 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[93a9f3ca-ce13-4488-8fc6-b41656e5448d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.000 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bd34d8c9-dd1a-470d-a3c1-ee0a04af1c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 NetworkManager[48920]: <info>  [1763804653.0437] device (tap7a504de2-20): carrier: link connected
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.050 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[69cd9b53-566c-423c-8381-f5fab906dc2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.067 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d34943-8549-4e67-9de4-7b2d1409c4a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 33991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393125, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[45bfb641-5089-4e87-a24e-6fd8d6b629d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:4b19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750378, 'tstamp': 750378}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393126, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30b878c8-e382-4741-a2ba-f8eb782cd388]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 33991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393127, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa874f88-7174-4972-aed9-3f51b8e00da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40d609f4-53dc-443b-9ecc-f6514650ea01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.211 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.212 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.212 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a504de2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:13 compute-0 nova_compute[253661]: 2025-11-22 09:44:13.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:13 compute-0 kernel: tap7a504de2-20: entered promiscuous mode
Nov 22 09:44:13 compute-0 NetworkManager[48920]: <info>  [1763804653.2216] manager: (tap7a504de2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Nov 22 09:44:13 compute-0 nova_compute[253661]: 2025-11-22 09:44:13.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.223 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a504de2-20, col_values=(('external_ids', {'iface-id': 'b35ca171-2b2e-44d8-96a4-4559f6282fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:13 compute-0 nova_compute[253661]: 2025-11-22 09:44:13.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:13 compute-0 ovn_controller[152872]: 2025-11-22T09:44:13Z|01478|binding|INFO|Releasing lport b35ca171-2b2e-44d8-96a4-4559f6282fda from this chassis (sb_readonly=0)
Nov 22 09:44:13 compute-0 nova_compute[253661]: 2025-11-22 09:44:13.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.244 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a504de2-27b2-4d01-a183-d9b0331ca31e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a504de2-27b2-4d01-a183-d9b0331ca31e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.245 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d68188f-7808-4d4f-9bfd-24a5dc39a068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.246 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/7a504de2-27b2-4d01-a183-d9b0331ca31e.pid.haproxy
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:44:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:13.247 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'env', 'PROCESS_TAG=haproxy-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a504de2-27b2-4d01-a183-d9b0331ca31e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:44:13 compute-0 ceph-mon[75021]: pgmap v2469: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 22 09:44:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/335326629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:44:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/335326629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:44:13 compute-0 podman[393155]: 2025-11-22 09:44:13.677391526 +0000 UTC m=+0.110885259 container create f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:44:13 compute-0 podman[393155]: 2025-11-22 09:44:13.58921734 +0000 UTC m=+0.022711083 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:44:13 compute-0 systemd[1]: Started libpod-conmon-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7.scope.
Nov 22 09:44:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29096a98af37d370bc2562ae093d356474f4646e4103cb41d2d7d5290f21d167/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:13 compute-0 podman[393155]: 2025-11-22 09:44:13.783179539 +0000 UTC m=+0.216673282 container init f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 09:44:13 compute-0 podman[393155]: 2025-11-22 09:44:13.790699354 +0000 UTC m=+0.224193077 container start f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:44:13 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : New worker (393176) forked
Nov 22 09:44:13 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : Loading success.
Nov 22 09:44:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 22 09:44:14 compute-0 nova_compute[253661]: 2025-11-22 09:44:14.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:15 compute-0 nova_compute[253661]: 2025-11-22 09:44:15.012 253665 DEBUG nova.compute.manager [req-09ca05dc-f987-427f-80ba-85c5ac978af5 req-21078aec-5a97-4eb8-81a1-01e482c9a974 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:15 compute-0 nova_compute[253661]: 2025-11-22 09:44:15.013 253665 DEBUG oslo_concurrency.lockutils [req-09ca05dc-f987-427f-80ba-85c5ac978af5 req-21078aec-5a97-4eb8-81a1-01e482c9a974 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:15 compute-0 nova_compute[253661]: 2025-11-22 09:44:15.013 253665 DEBUG oslo_concurrency.lockutils [req-09ca05dc-f987-427f-80ba-85c5ac978af5 req-21078aec-5a97-4eb8-81a1-01e482c9a974 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:15 compute-0 nova_compute[253661]: 2025-11-22 09:44:15.013 253665 DEBUG oslo_concurrency.lockutils [req-09ca05dc-f987-427f-80ba-85c5ac978af5 req-21078aec-5a97-4eb8-81a1-01e482c9a974 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:15 compute-0 nova_compute[253661]: 2025-11-22 09:44:15.014 253665 DEBUG nova.compute.manager [req-09ca05dc-f987-427f-80ba-85c5ac978af5 req-21078aec-5a97-4eb8-81a1-01e482c9a974 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Processing event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:15 compute-0 ceph-mon[75021]: pgmap v2470: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 22 09:44:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 937 KiB/s wr, 90 op/s
Nov 22 09:44:16 compute-0 nova_compute[253661]: 2025-11-22 09:44:16.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.134 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.135 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.135 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.136 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.136 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No event matching network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 in dict_keys([('network-vif-plugged', '1010674e-1b87-43cb-97bd-6bca4325a7f9')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.137 253665 WARNING nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 for instance with vm_state building and task_state spawning.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.137 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.138 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.138 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.139 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.139 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Processing event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.140 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.140 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.140 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.140 253665 DEBUG oslo_concurrency.lockutils [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.141 253665 DEBUG nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.141 253665 WARNING nova.compute.manager [req-ac9918bf-eb4c-46b3-b216-ffbbccc1d853 req-54c89f6b-fb07-417b-803c-4e8839e31087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 for instance with vm_state building and task_state spawning.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.142 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.149 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804657.1488903, 48af02cd-94c5-473f-a6f9-4d2caad8483f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.150 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] VM Resumed (Lifecycle Event)
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.153 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.157 253665 INFO nova.virt.libvirt.driver [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance spawned successfully.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.158 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.170 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.174 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.199 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.203 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.203 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.204 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.205 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.205 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.206 253665 DEBUG nova.virt.libvirt.driver [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.269 253665 INFO nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 15.65 seconds to spawn the instance on the hypervisor.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.270 253665 DEBUG nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.345 253665 INFO nova.compute.manager [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 16.82 seconds to build instance.
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.380 253665 DEBUG oslo_concurrency.lockutils [None req-6c21e0d9-8200-4ca1-b0f1-e33a8042af91 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.519 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.520 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.539 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:44:17 compute-0 ceph-mon[75021]: pgmap v2471: 305 pgs: 305 active+clean; 134 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 937 KiB/s wr, 90 op/s
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.621 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.622 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.631 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.632 253665 INFO nova.compute.claims [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:44:17 compute-0 nova_compute[253661]: 2025-11-22 09:44:17.785 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:18 compute-0 ovn_controller[152872]: 2025-11-22T09:44:18Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:14:00 10.100.0.12
Nov 22 09:44:18 compute-0 ovn_controller[152872]: 2025-11-22T09:44:18Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:14:00 10.100.0.12
Nov 22 09:44:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 140 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 101 op/s
Nov 22 09:44:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977925828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.318 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.326 253665 DEBUG nova.compute.provider_tree [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.349 253665 DEBUG nova.scheduler.client.report [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.378 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.379 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.441 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.442 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.463 253665 INFO nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.479 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.563 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.565 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.566 253665 INFO nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Creating image(s)
Nov 22 09:44:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1977925828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.595 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.630 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.661 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.665 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.711 253665 DEBUG nova.policy [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.750 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.751 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.752 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.752 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.773 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:18 compute-0 nova_compute[253661]: 2025-11-22 09:44:18.777 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.222 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.284 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.424 253665 DEBUG nova.objects.instance [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.440 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.441 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Ensure instance console log exists: /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.442 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.442 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.442 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:19 compute-0 ceph-mon[75021]: pgmap v2472: 305 pgs: 305 active+clean; 140 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 101 op/s
Nov 22 09:44:19 compute-0 nova_compute[253661]: 2025-11-22 09:44:19.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 158 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 22 09:44:20 compute-0 nova_compute[253661]: 2025-11-22 09:44:20.689 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Successfully created port: ae02b780-c76c-4fec-9f50-a8fb17aec607 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:21 compute-0 nova_compute[253661]: 2025-11-22 09:44:21.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:22 compute-0 ceph-mon[75021]: pgmap v2473: 305 pgs: 305 active+clean; 158 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 174 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 149 op/s
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.727 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Successfully updated port: ae02b780-c76c-4fec-9f50-a8fb17aec607 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.745 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.745 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.746 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.835 253665 DEBUG nova.compute.manager [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.836 253665 DEBUG nova.compute.manager [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing instance network info cache due to event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:22 compute-0 nova_compute[253661]: 2025-11-22 09:44:22.836 253665 DEBUG oslo_concurrency.lockutils [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:23 compute-0 nova_compute[253661]: 2025-11-22 09:44:23.029 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:23 compute-0 nova_compute[253661]: 2025-11-22 09:44:23.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:23 compute-0 nova_compute[253661]: 2025-11-22 09:44:23.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:44:23 compute-0 nova_compute[253661]: 2025-11-22 09:44:23.266 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:44:23 compute-0 ceph-mon[75021]: pgmap v2474: 305 pgs: 305 active+clean; 174 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 149 op/s
Nov 22 09:44:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.023 253665 DEBUG nova.network.neutron [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.041 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.042 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance network_info: |[{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.042 253665 DEBUG oslo_concurrency.lockutils [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.042 253665 DEBUG nova.network.neutron [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.045 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start _get_guest_xml network_info=[{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.050 253665 WARNING nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.056 253665 DEBUG nova.virt.libvirt.host [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.056 253665 DEBUG nova.virt.libvirt.host [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.061 253665 DEBUG nova.virt.libvirt.host [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.061 253665 DEBUG nova.virt.libvirt.host [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.062 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.062 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.062 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.062 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.063 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.063 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.063 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.063 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.064 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.064 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.064 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.064 253665 DEBUG nova.virt.hardware [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.067 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 22 09:44:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160722414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.529 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.557 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.564 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:24 compute-0 nova_compute[253661]: 2025-11-22 09:44:24.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331068036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.005 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.008 253665 DEBUG nova.virt.libvirt.vif [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-394306021',display_name='tempest-TestNetworkBasicOps-server-394306021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-394306021',id=137,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKV6hdIegm8gqIp/u4iZ0PF1QzxEfr8miYRpA6YX5J6vj9O+Yx765rE9yt47fSsnx60um/BGJdUGLYgUR7QR+U6SMxydnwIMFxPr8weXhUUIM0aYuUrJ1oofcn2oF77DhQ==',key_name='tempest-TestNetworkBasicOps-1881101001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-i0tmx233',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:18Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.008 253665 DEBUG nova.network.os_vif_util [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.009 253665 DEBUG nova.network.os_vif_util [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.010 253665 DEBUG nova.objects.instance [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.029 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <uuid>8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb</uuid>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <name>instance-00000089</name>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-394306021</nova:name>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:44:24</nova:creationTime>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <nova:port uuid="ae02b780-c76c-4fec-9f50-a8fb17aec607">
Nov 22 09:44:25 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <system>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="serial">8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="uuid">8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </system>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <os>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </os>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <features>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </features>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk">
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config">
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:25 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:82:76:7a"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <target dev="tapae02b780-c7"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/console.log" append="off"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <video>
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </video>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:44:25 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:44:25 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:44:25 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:44:25 compute-0 nova_compute[253661]: </domain>
Nov 22 09:44:25 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.031 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Preparing to wait for external event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.031 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.031 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.031 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.032 253665 DEBUG nova.virt.libvirt.vif [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-394306021',display_name='tempest-TestNetworkBasicOps-server-394306021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-394306021',id=137,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKV6hdIegm8gqIp/u4iZ0PF1QzxEfr8miYRpA6YX5J6vj9O+Yx765rE9yt47fSsnx60um/BGJdUGLYgUR7QR+U6SMxydnwIMFxPr8weXhUUIM0aYuUrJ1oofcn2oF77DhQ==',key_name='tempest-TestNetworkBasicOps-1881101001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-i0tmx233',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:18Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.032 253665 DEBUG nova.network.os_vif_util [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.033 253665 DEBUG nova.network.os_vif_util [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.033 253665 DEBUG os_vif [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.034 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.034 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.035 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.037 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.037 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae02b780-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.038 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae02b780-c7, col_values=(('external_ids', {'iface-id': 'ae02b780-c76c-4fec-9f50-a8fb17aec607', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:76:7a', 'vm-uuid': '8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:25 compute-0 NetworkManager[48920]: <info>  [1763804665.0403] manager: (tapae02b780-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.044 253665 DEBUG nova.compute.manager [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.045 253665 DEBUG nova.compute.manager [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing instance network info cache due to event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.045 253665 DEBUG oslo_concurrency.lockutils [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.045 253665 DEBUG oslo_concurrency.lockutils [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.045 253665 DEBUG nova.network.neutron [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.047 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.048 253665 INFO os_vif [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7')
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.117 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.118 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.118 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:82:76:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.118 253665 INFO nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Using config drive
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.137 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:25 compute-0 ceph-mon[75021]: pgmap v2475: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 22 09:44:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4160722414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1331068036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.577 253665 INFO nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Creating config drive at /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.583 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo6vq0w69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.670 253665 DEBUG nova.network.neutron [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updated VIF entry in instance network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.671 253665 DEBUG nova.network.neutron [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.688 253665 DEBUG oslo_concurrency.lockutils [req-d5df5308-5b68-4320-b1bb-93e548ef4fb9 req-f027fc4a-475f-4cc2-a0b4-38f0a3d60de3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.718 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.719 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.728 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo6vq0w69" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.759 253665 DEBUG nova.storage.rbd_utils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.765 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.811 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.908 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.917 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:44:25 compute-0 nova_compute[253661]: 2025-11-22 09:44:25.918 253665 INFO nova.compute.claims [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.084 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.179 253665 DEBUG oslo_concurrency.processutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.180 253665 INFO nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deleting local config drive /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb/disk.config because it was imported into RBD.
Nov 22 09:44:26 compute-0 NetworkManager[48920]: <info>  [1763804666.2362] manager: (tapae02b780-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Nov 22 09:44:26 compute-0 kernel: tapae02b780-c7: entered promiscuous mode
Nov 22 09:44:26 compute-0 ovn_controller[152872]: 2025-11-22T09:44:26Z|01479|binding|INFO|Claiming lport ae02b780-c76c-4fec-9f50-a8fb17aec607 for this chassis.
Nov 22 09:44:26 compute-0 ovn_controller[152872]: 2025-11-22T09:44:26Z|01480|binding|INFO|ae02b780-c76c-4fec-9f50-a8fb17aec607: Claiming fa:16:3e:82:76:7a 10.100.0.3
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.256 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:76:7a 10.100.0.3'], port_security=['fa:16:3e:82:76:7a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b59ce93-bc6e-4f8c-b65e-e937db06426e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ae02b780-c76c-4fec-9f50-a8fb17aec607) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.258 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ae02b780-c76c-4fec-9f50-a8fb17aec607 in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 bound to our chassis
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.261 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3
Nov 22 09:44:26 compute-0 ovn_controller[152872]: 2025-11-22T09:44:26Z|01481|binding|INFO|Setting lport ae02b780-c76c-4fec-9f50-a8fb17aec607 ovn-installed in OVS
Nov 22 09:44:26 compute-0 ovn_controller[152872]: 2025-11-22T09:44:26Z|01482|binding|INFO|Setting lport ae02b780-c76c-4fec-9f50-a8fb17aec607 up in Southbound
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.277 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:26 compute-0 systemd-udevd[393531]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:26 compute-0 systemd-machined[215941]: New machine qemu-168-instance-00000089.
Nov 22 09:44:26 compute-0 NetworkManager[48920]: <info>  [1763804666.2925] device (tapae02b780-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:26 compute-0 NetworkManager[48920]: <info>  [1763804666.2938] device (tapae02b780-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:26 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000089.
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1e3865-e82d-4d30-98f5-daed9cbaa1ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.341 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abb72ad6-fc73-492b-9684-c4bd93eeb93a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.346 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[11b602e4-f8e2-45f2-8855-4abb3cb35888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.387 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8c0fb3-3fd7-43c0-bf63-0116bf1aa029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfc2376-8589-4f56-8be8-5a73a8c61055]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccaaf7d7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:e3:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749404, 'reachable_time': 35674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393544, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.425 253665 DEBUG nova.network.neutron [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updated VIF entry in instance network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.428 253665 DEBUG nova.network.neutron [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.445 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba08334a-d9b4-4426-ba93-ac73ba1d84bf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749418, 'tstamp': 749418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393545, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749421, 'tstamp': 749421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393545, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.448 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccaaf7d7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.453 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.454 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapccaaf7d7-d0, col_values=(('external_ids', {'iface-id': '9302a453-ce7d-475e-8f32-fad5f9f06dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:26 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:26.454 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.458 253665 DEBUG oslo_concurrency.lockutils [req-18d55fe7-39ed-439b-8404-896354ab9bac req-c467d911-0c9f-4eee-b8f4-e7964bcf05dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.545 253665 DEBUG nova.compute.manager [req-ef9d0ad4-e421-4704-bc5d-083faa517338 req-76730fd2-2013-47dc-8054-4eb1c3e1fa14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.545 253665 DEBUG oslo_concurrency.lockutils [req-ef9d0ad4-e421-4704-bc5d-083faa517338 req-76730fd2-2013-47dc-8054-4eb1c3e1fa14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.545 253665 DEBUG oslo_concurrency.lockutils [req-ef9d0ad4-e421-4704-bc5d-083faa517338 req-76730fd2-2013-47dc-8054-4eb1c3e1fa14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.546 253665 DEBUG oslo_concurrency.lockutils [req-ef9d0ad4-e421-4704-bc5d-083faa517338 req-76730fd2-2013-47dc-8054-4eb1c3e1fa14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.546 253665 DEBUG nova.compute.manager [req-ef9d0ad4-e421-4704-bc5d-083faa517338 req-76730fd2-2013-47dc-8054-4eb1c3e1fa14 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Processing event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664130192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.600 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.617 253665 DEBUG nova.compute.provider_tree [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.637 253665 DEBUG nova.scheduler.client.report [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.671 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.674 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.735 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.736 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.753 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.784 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.808 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.809 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804666.8077376, 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.809 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] VM Started (Lifecycle Event)
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.813 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.820 253665 INFO nova.virt.libvirt.driver [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance spawned successfully.
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.821 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.840 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.846 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.849 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.850 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.850 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.851 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.853 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.853 253665 DEBUG nova.virt.libvirt.driver [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.876 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.876 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804666.809653, 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.876 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] VM Paused (Lifecycle Event)
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.899 253665 DEBUG nova.policy [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.903 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.904 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.904 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Creating image(s)
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.927 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.955 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.986 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:26 compute-0 nova_compute[253661]: 2025-11-22 09:44:26.991 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.032 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.037 253665 INFO nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 8.47 seconds to spawn the instance on the hypervisor.
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.038 253665 DEBUG nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.042 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804666.811924, 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.042 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] VM Resumed (Lifecycle Event)
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.068 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.071 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.074 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.074 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.075 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.075 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.099 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.105 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.149 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.158 253665 INFO nova.compute.manager [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 9.56 seconds to build instance.
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.217 253665 DEBUG oslo_concurrency.lockutils [None req-81aba870-0832-493b-94b2-2796741dedf3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:27 compute-0 ceph-mon[75021]: pgmap v2476: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 22 09:44:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2664130192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.585 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.671 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.900 253665 DEBUG nova.objects.instance [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.925 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.926 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Ensure instance console log exists: /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.926 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.926 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:27 compute-0 nova_compute[253661]: 2025-11-22 09:44:27.927 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:27.987 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.689 253665 DEBUG nova.compute.manager [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.690 253665 DEBUG oslo_concurrency.lockutils [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.690 253665 DEBUG oslo_concurrency.lockutils [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.690 253665 DEBUG oslo_concurrency.lockutils [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.690 253665 DEBUG nova.compute.manager [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] No waiting events found dispatching network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:28 compute-0 nova_compute[253661]: 2025-11-22 09:44:28.690 253665 WARNING nova.compute.manager [req-80b6df8e-db2b-4d93-8717-360f84a832df req-29d283c5-8a02-484f-8d84-5002daec3a42 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received unexpected event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 for instance with vm_state active and task_state None.
Nov 22 09:44:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:29 compute-0 ceph-mon[75021]: pgmap v2477: 305 pgs: 305 active+clean; 213 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 22 09:44:29 compute-0 nova_compute[253661]: 2025-11-22 09:44:29.844 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Successfully created port: 491b9f04-4133-4553-a044-0dffe6278421 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:30 compute-0 nova_compute[253661]: 2025-11-22 09:44:30.041 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 256 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 225 op/s
Nov 22 09:44:30 compute-0 podman[393757]: 2025-11-22 09:44:30.365669839 +0000 UTC m=+0.054546107 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:44:30 compute-0 podman[393758]: 2025-11-22 09:44:30.371875123 +0000 UTC m=+0.061510570 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.198 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Successfully updated port: 491b9f04-4133-4553-a044-0dffe6278421 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.217 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.217 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.217 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.289 253665 DEBUG nova.compute.manager [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-changed-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.289 253665 DEBUG nova.compute.manager [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing instance network info cache due to event network-changed-491b9f04-4133-4553-a044-0dffe6278421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.290 253665 DEBUG oslo_concurrency.lockutils [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:31 compute-0 ovn_controller[152872]: 2025-11-22T09:44:31Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:f8:55 10.100.0.4
Nov 22 09:44:31 compute-0 ovn_controller[152872]: 2025-11-22T09:44:31Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:f8:55 10.100.0.4
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.385 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:31 compute-0 ceph-mon[75021]: pgmap v2478: 305 pgs: 305 active+clean; 256 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 225 op/s
Nov 22 09:44:31 compute-0 nova_compute[253661]: 2025-11-22 09:44:31.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:32 compute-0 rsyslogd[1005]: imjournal: 6187 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 09:44:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 269 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.528 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.551 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.551 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance network_info: |[{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.552 253665 DEBUG oslo_concurrency.lockutils [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.552 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.554 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start _get_guest_xml network_info=[{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.560 253665 WARNING nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.569 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.570 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.580 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.638 253665 DEBUG nova.compute.manager [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.638 253665 DEBUG nova.compute.manager [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing instance network info cache due to event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:32 compute-0 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868522305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.046 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.111 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.115 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844498296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.580 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.583 253665 DEBUG nova.virt.libvirt.vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:26Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.583 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.584 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.585 253665 DEBUG nova.objects.instance [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.597 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <uuid>71ef7514-c6bd-40ee-852a-4b850ca0a05c</uuid>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <name>instance-0000008a</name>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653</nova:name>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:44:32</nova:creationTime>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <nova:port uuid="491b9f04-4133-4553-a044-0dffe6278421">
Nov 22 09:44:33 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <system>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="serial">71ef7514-c6bd-40ee-852a-4b850ca0a05c</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="uuid">71ef7514-c6bd-40ee-852a-4b850ca0a05c</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </system>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <os>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </os>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <features>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </features>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk">
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config">
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:33 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:a8:7d:61"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <target dev="tap491b9f04-41"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/console.log" append="off"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <video>
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </video>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:44:33 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:44:33 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:44:33 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:44:33 compute-0 nova_compute[253661]: </domain>
Nov 22 09:44:33 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.604 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Preparing to wait for external event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.604 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.605 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.605 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.606 253665 DEBUG nova.virt.libvirt.vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:26Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.607 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.607 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.608 253665 DEBUG os_vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.612 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.613 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.616 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap491b9f04-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.617 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap491b9f04-41, col_values=(('external_ids', {'iface-id': '491b9f04-4133-4553-a044-0dffe6278421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:7d:61', 'vm-uuid': '71ef7514-c6bd-40ee-852a-4b850ca0a05c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:33 compute-0 NetworkManager[48920]: <info>  [1763804673.6195] manager: (tap491b9f04-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.625 253665 INFO os_vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41')
Nov 22 09:44:33 compute-0 ceph-mon[75021]: pgmap v2479: 305 pgs: 305 active+clean; 269 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Nov 22 09:44:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3868522305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3844498296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.732 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.733 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.733 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:a8:7d:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.734 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Using config drive
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.754 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.824 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updated VIF entry in instance network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.825 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:33 compute-0 nova_compute[253661]: 2025-11-22 09:44:33.843 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 179 op/s
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.175 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.176 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.192 253665 DEBUG oslo_concurrency.lockutils [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:34 compute-0 sudo[393876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:34 compute-0 sudo[393876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:34 compute-0 sudo[393876]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:34 compute-0 sudo[393901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:44:34 compute-0 sudo[393901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:34 compute-0 sudo[393901]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:34 compute-0 sudo[393926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:34 compute-0 sudo[393926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:34 compute-0 sudo[393926]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:34 compute-0 sudo[393951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:44:34 compute-0 sudo[393951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.832 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Creating config drive at /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.837 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr2rekoic execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:34 compute-0 nova_compute[253661]: 2025-11-22 09:44:34.986 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr2rekoic" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.019 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.025 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.279 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.280 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deleting local config drive /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config because it was imported into RBD.
Nov 22 09:44:35 compute-0 kernel: tap491b9f04-41: entered promiscuous mode
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.3599] manager: (tap491b9f04-41): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Nov 22 09:44:35 compute-0 sudo[393951]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:35 compute-0 ovn_controller[152872]: 2025-11-22T09:44:35Z|01483|binding|INFO|Claiming lport 491b9f04-4133-4553-a044-0dffe6278421 for this chassis.
Nov 22 09:44:35 compute-0 ovn_controller[152872]: 2025-11-22T09:44:35Z|01484|binding|INFO|491b9f04-4133-4553-a044-0dffe6278421: Claiming fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.372 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:7d:61 10.100.0.11'], port_security=['fa:16:3e:a8:7d:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '71ef7514-c6bd-40ee-852a-4b850ca0a05c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96 d6471b4e-7bc5-407e-a8cc-88aa50b6222f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=491b9f04-4133-4553-a044-0dffe6278421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.373 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 491b9f04-4133-4553-a044-0dffe6278421 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 bound to our chassis
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.375 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 09:44:35 compute-0 ovn_controller[152872]: 2025-11-22T09:44:35Z|01485|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 ovn-installed in OVS
Nov 22 09:44:35 compute-0 ovn_controller[152872]: 2025-11-22T09:44:35Z|01486|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 up in Southbound
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5930566e-2fd1-4114-9db3-088320638997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.389 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa734f39d-b1 in ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.392 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa734f39d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9763f857-782c-4668-a7ad-8a3c9cb4dc26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.394 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fddd9b-ace2-4dd9-9517-0af902c7929d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.407 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3e352436-8d88-4466-939a-847bb71d7756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.427 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1de5d21-0dee-4551-9496-ae9b972d409c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:44:35 compute-0 systemd-machined[215941]: New machine qemu-169-instance-0000008a.
Nov 22 09:44:35 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-0000008a.
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:35 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b35c8a45-723a-42ed-841c-77e29070833d does not exist
Nov 22 09:44:35 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 719f2cb5-6785-4554-be52-a1adbbccdb43 does not exist
Nov 22 09:44:35 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c981303-3a76-4da1-aea9-25e8ab62f99f does not exist
Nov 22 09:44:35 compute-0 systemd-udevd[394070]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:44:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[124d9bf9-ec97-4c17-a06f-a5e532da57bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c34ffa6b-7f0f-48a9-85ed-554fe1abea68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.4742] manager: (tapa734f39d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/609)
Nov 22 09:44:35 compute-0 systemd-udevd[394079]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.4780] device (tap491b9f04-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.4794] device (tap491b9f04-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.514 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[145ac6cb-9ccd-47ca-92d5-c7baac0c3efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b893da07-e96a-4122-a81f-975dd7d2443d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 sudo[394089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:35 compute-0 sudo[394089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:35 compute-0 sudo[394089]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.5531] device (tapa734f39d-b0): carrier: link connected
Nov 22 09:44:35 compute-0 podman[394062]: 2025-11-22 09:44:35.570141374 +0000 UTC m=+0.127185732 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.572 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[303885cc-b6d2-411e-ae2a-fec10bf175c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[198c1914-d5d9-46af-a424-56a00e0070d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394150, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.610 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76699df7-65bd-434d-be80-6ee12dcf56ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:4fef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752629, 'tstamp': 752629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394166, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 sudo[394142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:44:35 compute-0 sudo[394142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:35 compute-0 sudo[394142]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.631 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb5ae2d-a9db-480b-9a66-a2ea2719d80f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394169, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.665 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1851ae-492c-4e08-8fef-8519dc07d312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 sudo[394171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:35 compute-0 sudo[394171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:35 compute-0 sudo[394171]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.735 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce5ee26-891d-45a3-96ab-53f1190896ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.738 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 NetworkManager[48920]: <info>  [1763804675.7385] manager: (tapa734f39d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 22 09:44:35 compute-0 kernel: tapa734f39d-b0: entered promiscuous mode
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.740 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ovn_controller[152872]: 2025-11-22T09:44:35Z|01487|binding|INFO|Releasing lport 3db82a3e-3c50-4f8e-b5b4-8b4657d60723 from this chassis (sb_readonly=0)
Nov 22 09:44:35 compute-0 sudo[394216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.745 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:44:35 compute-0 sudo[394216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.747 253665 DEBUG nova.compute.manager [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.748 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.749 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.750 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.750 253665 DEBUG nova.compute.manager [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Processing event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.747 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bc49db-0b06-4db2-9e4d-9a466eae8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.748 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.749 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'env', 'PROCESS_TAG=haproxy-a734f39d-baf0-4591-94dc-9057caf53bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a734f39d-baf0-4591-94dc-9057caf53bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ceph-mon[75021]: pgmap v2480: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 179 op/s
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:44:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:44:35 compute-0 nova_compute[253661]: 2025-11-22 09:44:35.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:35 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.836 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.016 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0157568, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.016 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Started (Lifecycle Event)
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.020 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.024 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance spawned successfully.
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.029 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.049 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.059 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.063 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.068 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.070 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.083 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.084 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0188131, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.084 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Paused (Lifecycle Event)
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.124 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.131 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0230613, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Resumed (Lifecycle Event)
Nov 22 09:44:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.153 253665 INFO nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 9.25 seconds to spawn the instance on the hypervisor.
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.154 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.166 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.169 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:36 compute-0 podman[394322]: 2025-11-22 09:44:36.109595454 +0000 UTC m=+0.023885361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.244 253665 INFO nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 10.38 seconds to build instance.
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.260 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:36 compute-0 nova_compute[253661]: 2025-11-22 09:44:36.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:36 compute-0 podman[394322]: 2025-11-22 09:44:36.680641225 +0000 UTC m=+0.594931112 container create cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:44:36 compute-0 systemd[1]: Started libpod-conmon-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope.
Nov 22 09:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:37 compute-0 podman[394341]: 2025-11-22 09:44:36.980221413 +0000 UTC m=+0.868381495 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:44:37 compute-0 podman[394322]: 2025-11-22 09:44:37.311608195 +0000 UTC m=+1.225898102 container init cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:44:37 compute-0 podman[394322]: 2025-11-22 09:44:37.319862289 +0000 UTC m=+1.234152176 container start cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:44:37 compute-0 affectionate_bhaskara[394358]: 167 167
Nov 22 09:44:37 compute-0 systemd[1]: libpod-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope: Deactivated successfully.
Nov 22 09:44:37 compute-0 podman[394322]: 2025-11-22 09:44:37.575300417 +0000 UTC m=+1.489590324 container attach cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:44:37 compute-0 podman[394322]: 2025-11-22 09:44:37.575964523 +0000 UTC m=+1.490254410 container died cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.856 253665 DEBUG nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.857 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:37 compute-0 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 WARNING nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received unexpected event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with vm_state active and task_state None.
Nov 22 09:44:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Nov 22 09:44:38 compute-0 ceph-mon[75021]: pgmap v2481: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 09:44:38 compute-0 podman[394341]: 2025-11-22 09:44:38.390405094 +0000 UTC m=+2.278565156 container create e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:44:38 compute-0 nova_compute[253661]: 2025-11-22 09:44:38.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:38 compute-0 systemd[1]: Started libpod-conmon-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope.
Nov 22 09:44:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e778d95479f50b481d89a1bee1b1080db3f4d51fefff371a7f567798dd2494e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fdfc0ca1d8ee2bb979ba324b331acee102c41c3e79416595c3e29b1cd4b6c4-merged.mount: Deactivated successfully.
Nov 22 09:44:39 compute-0 podman[394322]: 2025-11-22 09:44:39.599253393 +0000 UTC m=+3.513543290 container remove cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:39 compute-0 systemd[1]: libpod-conmon-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope: Deactivated successfully.
Nov 22 09:44:39 compute-0 podman[394341]: 2025-11-22 09:44:39.745793212 +0000 UTC m=+3.633953324 container init e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 09:44:39 compute-0 podman[394341]: 2025-11-22 09:44:39.755476991 +0000 UTC m=+3.643637053 container start e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:44:39 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : New worker (394405) forked
Nov 22 09:44:39 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : Loading success.
Nov 22 09:44:39 compute-0 podman[394389]: 2025-11-22 09:44:39.847493564 +0000 UTC m=+0.091811619 container create 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:44:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:39.851 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:44:39 compute-0 podman[394389]: 2025-11-22 09:44:39.799934739 +0000 UTC m=+0.044252824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:39 compute-0 systemd[1]: Started libpod-conmon-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope.
Nov 22 09:44:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:40 compute-0 ceph-mon[75021]: pgmap v2482: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Nov 22 09:44:40 compute-0 podman[394389]: 2025-11-22 09:44:40.037738851 +0000 UTC m=+0.282056896 container init 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 09:44:40 compute-0 podman[394389]: 2025-11-22 09:44:40.04579838 +0000 UTC m=+0.290116435 container start 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:44:40 compute-0 podman[394389]: 2025-11-22 09:44:40.076101779 +0000 UTC m=+0.320419834 container attach 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:44:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Nov 22 09:44:40 compute-0 ovn_controller[152872]: 2025-11-22T09:44:40Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:76:7a 10.100.0.3
Nov 22 09:44:40 compute-0 ovn_controller[152872]: 2025-11-22T09:44:40Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:76:7a 10.100.0.3
Nov 22 09:44:41 compute-0 blissful_driscoll[394416]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:44:41 compute-0 blissful_driscoll[394416]: --> relative data size: 1.0
Nov 22 09:44:41 compute-0 blissful_driscoll[394416]: --> All data devices are unavailable
Nov 22 09:44:41 compute-0 systemd[1]: libpod-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Deactivated successfully.
Nov 22 09:44:41 compute-0 systemd[1]: libpod-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Consumed 1.027s CPU time.
Nov 22 09:44:41 compute-0 podman[394389]: 2025-11-22 09:44:41.178459519 +0000 UTC m=+1.422777584 container died 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50-merged.mount: Deactivated successfully.
Nov 22 09:44:41 compute-0 podman[394389]: 2025-11-22 09:44:41.282426646 +0000 UTC m=+1.526744701 container remove 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:44:41 compute-0 systemd[1]: libpod-conmon-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Deactivated successfully.
Nov 22 09:44:41 compute-0 sudo[394216]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:41 compute-0 sudo[394459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:41 compute-0 sudo[394459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:41 compute-0 sudo[394459]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:41 compute-0 sudo[394484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:44:41 compute-0 sudo[394484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:41 compute-0 sudo[394484]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:41 compute-0 sudo[394509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:41 compute-0 sudo[394509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:41 compute-0 sudo[394509]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:41 compute-0 sudo[394534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:44:41 compute-0 sudo[394534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:41 compute-0 nova_compute[253661]: 2025-11-22 09:44:41.676 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:41 compute-0 podman[394600]: 2025-11-22 09:44:41.865925614 +0000 UTC m=+0.021417959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:42 compute-0 podman[394600]: 2025-11-22 09:44:42.025488164 +0000 UTC m=+0.180980479 container create 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:44:42 compute-0 ceph-mon[75021]: pgmap v2483: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Nov 22 09:44:42 compute-0 systemd[1]: Started libpod-conmon-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope.
Nov 22 09:44:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 177 op/s
Nov 22 09:44:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:42 compute-0 nova_compute[253661]: 2025-11-22 09:44:42.344 253665 DEBUG nova.compute.manager [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-changed-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:42 compute-0 nova_compute[253661]: 2025-11-22 09:44:42.345 253665 DEBUG nova.compute.manager [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing instance network info cache due to event network-changed-491b9f04-4133-4553-a044-0dffe6278421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:42 compute-0 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:42 compute-0 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:42 compute-0 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:42 compute-0 podman[394600]: 2025-11-22 09:44:42.479830874 +0000 UTC m=+0.635323209 container init 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:44:42 compute-0 podman[394600]: 2025-11-22 09:44:42.488466487 +0000 UTC m=+0.643958802 container start 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:44:42 compute-0 goofy_swirles[394616]: 167 167
Nov 22 09:44:42 compute-0 systemd[1]: libpod-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope: Deactivated successfully.
Nov 22 09:44:42 compute-0 podman[394600]: 2025-11-22 09:44:42.849660066 +0000 UTC m=+1.005152421 container attach 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:44:42 compute-0 podman[394600]: 2025-11-22 09:44:42.851662545 +0000 UTC m=+1.007154930 container died 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:44:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:42.854 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.007 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.008 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.028 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.160 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.160 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.167 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.167 253665 INFO nova.compute.claims [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.361 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f0226f17d3150a2c2e0684c2ccdba66d17da2aaa0560eabe8ba1d7710d8b0cc-merged.mount: Deactivated successfully.
Nov 22 09:44:43 compute-0 ceph-mon[75021]: pgmap v2484: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 177 op/s
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.678 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.679 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.706 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117140290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.866 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.873 253665 DEBUG nova.compute.provider_tree [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.886 253665 DEBUG nova.scheduler.client.report [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:43 compute-0 podman[394600]: 2025-11-22 09:44:43.893418249 +0000 UTC m=+2.048910564 container remove 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.918 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.919 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:44:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.964 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.965 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:44:43 compute-0 systemd[1]: libpod-conmon-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope: Deactivated successfully.
Nov 22 09:44:43 compute-0 nova_compute[253661]: 2025-11-22 09:44:43.990 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.007 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:44:44 compute-0 podman[394662]: 2025-11-22 09:44:44.094489664 +0000 UTC m=+0.042944962 container create 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.103 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.105 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.105 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating image(s)
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.131 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:44 compute-0 systemd[1]: Started libpod-conmon-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope.
Nov 22 09:44:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 09:44:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.166 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:44 compute-0 podman[394662]: 2025-11-22 09:44:44.076337276 +0000 UTC m=+0.024792594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:44 compute-0 podman[394662]: 2025-11-22 09:44:44.179098603 +0000 UTC m=+0.127553921 container init 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:44 compute-0 podman[394662]: 2025-11-22 09:44:44.186430104 +0000 UTC m=+0.134885402 container start 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 09:44:44 compute-0 podman[394662]: 2025-11-22 09:44:44.193736205 +0000 UTC m=+0.142191503 container attach 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.205 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.210 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.256 253665 DEBUG nova.policy [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.259 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.293 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.303 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.304 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.331 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:44 compute-0 nova_compute[253661]: 2025-11-22 09:44:44.335 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:44 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1117140290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]: {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     "0": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "devices": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "/dev/loop3"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             ],
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_name": "ceph_lv0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_size": "21470642176",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "name": "ceph_lv0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "tags": {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_name": "ceph",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.crush_device_class": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.encrypted": "0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_id": "0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.vdo": "0"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             },
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "vg_name": "ceph_vg0"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         }
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     ],
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     "1": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "devices": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "/dev/loop4"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             ],
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_name": "ceph_lv1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_size": "21470642176",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "name": "ceph_lv1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "tags": {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_name": "ceph",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.crush_device_class": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.encrypted": "0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_id": "1",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.vdo": "0"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             },
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "vg_name": "ceph_vg1"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         }
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     ],
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     "2": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "devices": [
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "/dev/loop5"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             ],
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_name": "ceph_lv2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_size": "21470642176",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "name": "ceph_lv2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "tags": {
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.cluster_name": "ceph",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.crush_device_class": "",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.encrypted": "0",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osd_id": "2",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:                 "ceph.vdo": "0"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             },
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "type": "block",
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:             "vg_name": "ceph_vg2"
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:         }
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]:     ]
Nov 22 09:44:45 compute-0 objective_mcnulty[394696]: }
Nov 22 09:44:45 compute-0 systemd[1]: libpod-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope: Deactivated successfully.
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.121 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully created port: 2979286f-0fdd-4b20-9c29-da29aac8e5ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.171 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.836s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:45 compute-0 podman[394781]: 2025-11-22 09:44:45.172610306 +0000 UTC m=+0.048582271 container died 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f-merged.mount: Deactivated successfully.
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.252 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:45 compute-0 podman[394781]: 2025-11-22 09:44:45.260671561 +0000 UTC m=+0.136643506 container remove 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.261 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:44:45 compute-0 systemd[1]: libpod-conmon-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope: Deactivated successfully.
Nov 22 09:44:45 compute-0 sudo[394534]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:45 compute-0 sudo[394850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:45 compute-0 sudo[394850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:45 compute-0 sudo[394850]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.434 253665 DEBUG nova.objects.instance [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.454 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.455 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Ensure instance console log exists: /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.455 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.456 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.456 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:45 compute-0 sudo[394893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:44:45 compute-0 sudo[394893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:45 compute-0 sudo[394893]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:45 compute-0 sudo[394918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:45 compute-0 sudo[394918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:45 compute-0 sudo[394918]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:45 compute-0 sudo[394943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:44:45 compute-0 sudo[394943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:45 compute-0 ceph-mon[75021]: pgmap v2485: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 09:44:45 compute-0 nova_compute[253661]: 2025-11-22 09:44:45.822 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully created port: 7b663864-2935-4127-ab02-75e4a0acfc73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:44:45 compute-0 podman[395009]: 2025-11-22 09:44:45.926760428 +0000 UTC m=+0.040413598 container create 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 09:44:45 compute-0 systemd[1]: Started libpod-conmon-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope.
Nov 22 09:44:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:46.003613825 +0000 UTC m=+0.117267025 container init 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:45.908921218 +0000 UTC m=+0.022574408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:46.012109645 +0000 UTC m=+0.125762825 container start 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:46.018032562 +0000 UTC m=+0.131685762 container attach 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 09:44:46 compute-0 busy_dhawan[395025]: 167 167
Nov 22 09:44:46 compute-0 systemd[1]: libpod-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope: Deactivated successfully.
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:46.020656477 +0000 UTC m=+0.134309647 container died 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d534d345ba8b3d6c668ca7dc973b2edf6aa868e6b9de4f98b420164746a38f-merged.mount: Deactivated successfully.
Nov 22 09:44:46 compute-0 podman[395009]: 2025-11-22 09:44:46.092530112 +0000 UTC m=+0.206183282 container remove 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 09:44:46 compute-0 systemd[1]: libpod-conmon-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope: Deactivated successfully.
Nov 22 09:44:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 22 09:44:46 compute-0 nova_compute[253661]: 2025-11-22 09:44:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:46 compute-0 podman[395049]: 2025-11-22 09:44:46.330683902 +0000 UTC m=+0.076609653 container create a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:44:46 compute-0 podman[395049]: 2025-11-22 09:44:46.281280583 +0000 UTC m=+0.027206364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:44:46 compute-0 systemd[1]: Started libpod-conmon-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope.
Nov 22 09:44:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:44:46 compute-0 podman[395049]: 2025-11-22 09:44:46.477833176 +0000 UTC m=+0.223758937 container init a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 09:44:46 compute-0 podman[395049]: 2025-11-22 09:44:46.483832853 +0000 UTC m=+0.229758604 container start a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:44:46 compute-0 podman[395049]: 2025-11-22 09:44:46.502730081 +0000 UTC m=+0.248655842 container attach a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:44:46 compute-0 nova_compute[253661]: 2025-11-22 09:44:46.679 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.116 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully updated port: 2979286f-0fdd-4b20-9c29-da29aac8e5ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.244 253665 DEBUG nova.compute.manager [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.245 253665 DEBUG nova.compute.manager [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.245 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.246 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.246 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:47 compute-0 infallible_cerf[395066]: {
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_id": 1,
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "type": "bluestore"
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     },
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_id": 0,
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "type": "bluestore"
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     },
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_id": 2,
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:         "type": "bluestore"
Nov 22 09:44:47 compute-0 infallible_cerf[395066]:     }
Nov 22 09:44:47 compute-0 infallible_cerf[395066]: }
Nov 22 09:44:47 compute-0 systemd[1]: libpod-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope: Deactivated successfully.
Nov 22 09:44:47 compute-0 podman[395049]: 2025-11-22 09:44:47.441729267 +0000 UTC m=+1.187655018 container died a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.507 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d-merged.mount: Deactivated successfully.
Nov 22 09:44:47 compute-0 nova_compute[253661]: 2025-11-22 09:44:47.996 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.010 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:48 compute-0 ceph-mon[75021]: pgmap v2486: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 22 09:44:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 335 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 163 op/s
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.174 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully updated port: 7b663864-2935-4127-ab02-75e4a0acfc73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:44:48 compute-0 podman[395049]: 2025-11-22 09:44:48.362201666 +0000 UTC m=+2.108127417 container remove a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.383 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:44:48 compute-0 sudo[394943]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:44:48 compute-0 systemd[1]: libpod-conmon-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope: Deactivated successfully.
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.460 253665 INFO nova.compute.manager [None req-de14b6b8-e2b7-4e17-8607-154efc33cb04 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.471 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:44:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:44:48 compute-0 nova_compute[253661]: 2025-11-22 09:44:48.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d711e591-04ce-4bef-874d-20e8426dad10 does not exist
Nov 22 09:44:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a19af43e-ff9d-44c6-b0af-20c31e7af3a3 does not exist
Nov 22 09:44:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:48 compute-0 sudo[395112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:44:48 compute-0 sudo[395112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:48 compute-0 sudo[395112]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:49 compute-0 sudo[395137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:44:49 compute-0 sudo[395137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:44:49 compute-0 sudo[395137]: pam_unix(sudo:session): session closed for user root
Nov 22 09:44:49 compute-0 nova_compute[253661]: 2025-11-22 09:44:49.373 253665 DEBUG nova.compute.manager [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:49 compute-0 nova_compute[253661]: 2025-11-22 09:44:49.374 253665 DEBUG nova.compute.manager [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-7b663864-2935-4127-ab02-75e4a0acfc73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:49 compute-0 nova_compute[253661]: 2025-11-22 09:44:49.374 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:49 compute-0 ceph-mon[75021]: pgmap v2487: 305 pgs: 305 active+clean; 335 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 163 op/s
Nov 22 09:44:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:49 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:44:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.234 253665 DEBUG nova.compute.manager [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.234 253665 DEBUG nova.compute.manager [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.256 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.284 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.284 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance network_info: |[{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.285 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.285 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 7b663864-2935-4127-ab02-75e4a0acfc73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.289 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start _get_guest_xml network_info=[{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.294 253665 WARNING nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.300 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.301 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.309 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.309 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.316 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:50 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:50 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44717686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.757 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.782 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:50 compute-0 nova_compute[253661]: 2025-11-22 09:44:50.787 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:44:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062114375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.245 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.246 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.247 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.248 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.248 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.249 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.249 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.250 253665 DEBUG nova.objects.instance [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.304 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <uuid>b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</uuid>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <name>instance-0000008b</name>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-63266585</nova:name>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:44:50</nova:creationTime>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:port uuid="2979286f-0fdd-4b20-9c29-da29aac8e5ab">
Nov 22 09:44:51 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <nova:port uuid="7b663864-2935-4127-ab02-75e4a0acfc73">
Nov 22 09:44:51 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6f:1042" ipVersion="6"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <system>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="serial">b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="uuid">b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </system>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <os>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </os>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <features>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </features>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk">
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config">
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:44:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:f9:a0:45"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <target dev="tap2979286f-0f"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:6f:10:42"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <target dev="tap7b663864-29"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/console.log" append="off"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <video>
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </video>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:44:51 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:44:51 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:44:51 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:44:51 compute-0 nova_compute[253661]: </domain>
Nov 22 09:44:51 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Preparing to wait for external event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.306 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.306 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Preparing to wait for external event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.308 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.308 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2979286f-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.315 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2979286f-0f, col_values=(('external_ids', {'iface-id': '2979286f-0fdd-4b20-9c29-da29aac8e5ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:a0:45', 'vm-uuid': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 NetworkManager[48920]: <info>  [1763804691.3176] manager: (tap2979286f-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.328 253665 INFO os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f')
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.332 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b663864-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b663864-29, col_values=(('external_ids', {'iface-id': '7b663864-2935-4127-ab02-75e4a0acfc73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:10:42', 'vm-uuid': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 NetworkManager[48920]: <info>  [1763804691.3356] manager: (tap7b663864-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Nov 22 09:44:51 compute-0 ovn_controller[152872]: 2025-11-22T09:44:51Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:51 compute-0 ovn_controller[152872]: 2025-11-22T09:44:51Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.346 253665 INFO os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29')
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.350 253665 INFO nova.compute.manager [None req-0eab2b6d-b6af-470d-a51b-425eb681dde3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.373 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:f9:a0:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:6f:10:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.407 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Using config drive
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.434 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.451 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 WARNING nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 WARNING nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.
Nov 22 09:44:51 compute-0 ceph-mon[75021]: pgmap v2488: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Nov 22 09:44:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/44717686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:51 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2062114375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580475424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.779 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating config drive at /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.784 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46vh0yht execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.910 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.911 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.921 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.921 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.924 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.924 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.926 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46vh0yht" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.949 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:44:51 compute-0 nova_compute[253661]: 2025-11-22 09:44:51.952 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.246 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2811MB free_disk=59.80126190185547GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:44:52
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.286 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.287 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deleting local config drive /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config because it was imported into RBD.
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance dab57683-82b6-44b3-b663-556a4f0e3dab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 48af02cd-94c5-473f-a6f9-4d2caad8483f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.3497] manager: (tap2979286f-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Nov 22 09:44:52 compute-0 kernel: tap2979286f-0f: entered promiscuous mode
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01488|binding|INFO|Claiming lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab for this chassis.
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01489|binding|INFO|2979286f-0fdd-4b20-9c29-da29aac8e5ab: Claiming fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.367 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:45 10.100.0.13'], port_security=['fa:16:3e:f9:a0:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2979286f-0fdd-4b20-9c29-da29aac8e5ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.369 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2979286f-0fdd-4b20-9c29-da29aac8e5ab in datapath 621dd092-e20a-432f-8488-41d7fcd69532 bound to our chassis
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.371 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621dd092-e20a-432f-8488-41d7fcd69532
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.3745] manager: (tap7b663864-29): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01490|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab ovn-installed in OVS
Nov 22 09:44:52 compute-0 kernel: tap7b663864-29: entered promiscuous mode
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01491|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab up in Southbound
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01492|if_status|INFO|Not updating pb chassis for 7b663864-2935-4127-ab02-75e4a0acfc73 now as sb is readonly
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01493|binding|INFO|Claiming lport 7b663864-2935-4127-ab02-75e4a0acfc73 for this chassis.
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01494|binding|INFO|7b663864-2935-4127-ab02-75e4a0acfc73: Claiming fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68c69a2a-2867-43fa-b7ca-c8e35461e9f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01495|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 up in Southbound
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 ovn_controller[152872]: 2025-11-22T09:44:52Z|01496|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 ovn-installed in OVS
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 systemd-udevd[395329]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:52 compute-0 systemd-udevd[395328]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.408 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], port_security=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:1042/64', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7b663864-2935-4127-ab02-75e4a0acfc73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:52 compute-0 systemd-machined[215941]: New machine qemu-170-instance-0000008b.
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.424 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4618d677-8db0-4b23-9fd1-c80b49c82f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.4295] device (tap7b663864-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:52 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008b.
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.4311] device (tap7b663864-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.430 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1641da5b-023c-4515-b51d-afdb22f3dc03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.4318] device (tap2979286f-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:44:52 compute-0 NetworkManager[48920]: <info>  [1763804692.4327] device (tap2979286f-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.434 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 7b663864-2935-4127-ab02-75e4a0acfc73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.435 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.454 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.467 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.469 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5323798a-6eb8-4ef9-af28-800b373672cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.493 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3705857a-05a8-4abf-b27e-70677e982f10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 20749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395340, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cd3958-07b4-427f-931b-16ba782c413a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750294, 'tstamp': 750294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395343, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750297, 'tstamp': 750297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395343, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.517 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621dd092-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.517 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621dd092-e0, col_values=(('external_ids', {'iface-id': 'ce538828-218d-4def-9bed-efeb786012c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.519 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7b663864-2935-4127-ab02-75e4a0acfc73 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.521 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.539 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d55c36-ffdd-4ff4-83ea-8579136fa0c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.580 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9c8606-d63e-4ab5-9086-b8d1ba00faf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.583 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79102230-f426-4bbd-a155-402fb6421f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.602 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.603 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.616 253665 DEBUG nova.compute.manager [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.617 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.617 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.618 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.618 253665 DEBUG nova.compute.manager [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Processing event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.623 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.625 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd19cb40-e856-4a03-b3d2-f91bd3e48adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[365f23e0-08e3-4658-9dab-c62e4a6f5a3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 21380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395384, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1580475424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.674 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[888aec68-c68d-4f77-a357-bf9149120a73]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a504de2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750393, 'tstamp': 750393}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395388, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a504de2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a504de2-20, col_values=(('external_ids', {'iface-id': 'b35ca171-2b2e-44d8-96a4-4559f6282fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:44:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.822 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804692.821428, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.822 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Started (Lifecycle Event)
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.862 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804692.8215432, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.862 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Paused (Lifecycle Event)
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.886 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.889 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.924 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042584425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.951 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.956 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:52 compute-0 nova_compute[253661]: 2025-11-22 09:44:52.971 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.001 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.001 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.308 253665 INFO nova.compute.manager [None req-114c33d1-d55b-4874-af54-bb98c8643c8c 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.314 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.591 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.592 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 WARNING nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.595 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:53 compute-0 nova_compute[253661]: 2025-11-22 09:44:53.595 253665 WARNING nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.
Nov 22 09:44:53 compute-0 ceph-mon[75021]: pgmap v2489: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Nov 22 09:44:53 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1042584425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 5.4 MiB/s wr, 148 op/s
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.501 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.503 253665 INFO nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Terminating instance
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.504 253665 DEBUG nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:44:54 compute-0 kernel: tapae02b780-c7 (unregistering): left promiscuous mode
Nov 22 09:44:54 compute-0 NetworkManager[48920]: <info>  [1763804694.5760] device (tapae02b780-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 ovn_controller[152872]: 2025-11-22T09:44:54Z|01497|binding|INFO|Releasing lport ae02b780-c76c-4fec-9f50-a8fb17aec607 from this chassis (sb_readonly=0)
Nov 22 09:44:54 compute-0 ovn_controller[152872]: 2025-11-22T09:44:54Z|01498|binding|INFO|Setting lport ae02b780-c76c-4fec-9f50-a8fb17aec607 down in Southbound
Nov 22 09:44:54 compute-0 ovn_controller[152872]: 2025-11-22T09:44:54Z|01499|binding|INFO|Removing iface tapae02b780-c7 ovn-installed in OVS
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.591 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:76:7a 10.100.0.3'], port_security=['fa:16:3e:82:76:7a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b59ce93-bc6e-4f8c-b65e-e937db06426e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ae02b780-c76c-4fec-9f50-a8fb17aec607) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.593 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ae02b780-c76c-4fec-9f50-a8fb17aec607 in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 unbound from our chassis
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.595 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.614 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e25e247-8640-411a-94ef-360120403d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 22 09:44:54 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Consumed 13.822s CPU time.
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.644 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4508397-40b7-4526-a2be-eb07e803b296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 systemd-machined[215941]: Machine qemu-168-instance-00000089 terminated.
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.649 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e8eae088-748e-4559-9280-fdedf9936572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.686 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7082a234-1c11-430e-b8aa-a8de62c2f5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.706 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f7c68c-f078-4348-b899-fd98a38c4b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccaaf7d7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:e3:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749404, 'reachable_time': 23939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395425, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 ceph-mon[75021]: pgmap v2490: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 5.4 MiB/s wr, 148 op/s
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22467d93-fd77-49b8-8e3f-1ef5dabebbcc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749418, 'tstamp': 749418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395426, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749421, 'tstamp': 749421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395426, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.726 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.733 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccaaf7d7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.733 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.734 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapccaaf7d7-d0, col_values=(('external_ids', {'iface-id': '9302a453-ce7d-475e-8f32-fad5f9f06dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.734 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.738 253665 INFO nova.virt.libvirt.driver [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance destroyed successfully.
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.739 253665 DEBUG nova.objects.instance [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.744 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No event matching network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab in dict_keys([('network-vif-plugged', '7b663864-2935-4127-ab02-75e4a0acfc73')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 WARNING nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with vm_state building and task_state spawning.
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.750 253665 DEBUG nova.virt.libvirt.vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-394306021',display_name='tempest-TestNetworkBasicOps-server-394306021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-394306021',id=137,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKV6hdIegm8gqIp/u4iZ0PF1QzxEfr8miYRpA6YX5J6vj9O+Yx765rE9yt47fSsnx60um/BGJdUGLYgUR7QR+U6SMxydnwIMFxPr8weXhUUIM0aYuUrJ1oofcn2oF77DhQ==',key_name='tempest-TestNetworkBasicOps-1881101001',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-i0tmx233',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.750 253665 DEBUG nova.network.os_vif_util [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.751 253665 DEBUG nova.network.os_vif_util [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.751 253665 DEBUG os_vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.752 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.753 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae02b780-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:44:54 compute-0 nova_compute[253661]: 2025-11-22 09:44:54.763 253665 INFO os_vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7')
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.639 253665 INFO nova.virt.libvirt.driver [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deleting instance files /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_del
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.640 253665 INFO nova.virt.libvirt.driver [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deletion of /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_del complete
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing instance network info cache due to event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.687 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.687 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:44:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:44:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:44:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:44:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:44:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.725 253665 INFO nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 1.22 seconds to destroy the instance on the hypervisor.
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.725 253665 DEBUG oslo.service.loopingcall [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.726 253665 DEBUG nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.726 253665 DEBUG nova.network.neutron [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.991 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:55 compute-0 nova_compute[253661]: 2025-11-22 09:44:55.992 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.006 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Processing event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.010 253665 WARNING nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with vm_state building and task_state spawning.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.010 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.013 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804696.0133588, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.014 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Resumed (Lifecycle Event)
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.015 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.018 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance spawned successfully.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.019 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.033 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.037 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.041 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.041 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.042 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.079 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.114 253665 INFO nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 12.01 seconds to spawn the instance on the hypervisor.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.115 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.168 253665 INFO nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 13.07 seconds to build instance.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.181 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.339 253665 DEBUG nova.network.neutron [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.353 253665 INFO nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 0.63 seconds to deallocate network for instance.
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.404 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.405 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.520 253665 DEBUG oslo_concurrency.processutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:44:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.717 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.861 253665 DEBUG nova.compute.manager [req-cf4a034a-e0a7-4f12-8f6a-844dd155985d req-66a7d5e1-344e-4bfd-92e1-145237524d55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-deleted-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:44:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942560802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.988 253665 DEBUG oslo_concurrency.processutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:44:56 compute-0 nova_compute[253661]: 2025-11-22 09:44:56.994 253665 DEBUG nova.compute.provider_tree [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.001 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.008 253665 DEBUG nova.scheduler.client.report [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.035 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.074 253665 INFO nova.scheduler.client.report [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.158 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updated VIF entry in instance network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.159 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.164 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.186 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] No waiting events found dispatching network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] No waiting events found dispatching network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.190 253665 WARNING nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received unexpected event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 for instance with vm_state active and task_state deleting.
Nov 22 09:44:57 compute-0 ceph-mon[75021]: pgmap v2491: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Nov 22 09:44:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3942560802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:44:57 compute-0 nova_compute[253661]: 2025-11-22 09:44:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:44:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 4.0 MiB/s wr, 131 op/s
Nov 22 09:44:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:44:59 compute-0 ceph-mon[75021]: pgmap v2492: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 4.0 MiB/s wr, 131 op/s
Nov 22 09:44:59 compute-0 nova_compute[253661]: 2025-11-22 09:44:59.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.070 253665 DEBUG nova.compute.manager [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG nova.compute.manager [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.072 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.120 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.122 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.122 253665 INFO nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Terminating instance
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.123 253665 DEBUG nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:45:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 145 op/s
Nov 22 09:45:00 compute-0 kernel: tap7f5f15bb-83 (unregistering): left promiscuous mode
Nov 22 09:45:00 compute-0 NetworkManager[48920]: <info>  [1763804700.2245] device (tap7f5f15bb-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 ovn_controller[152872]: 2025-11-22T09:45:00Z|01500|binding|INFO|Releasing lport 7f5f15bb-83ef-4c81-8585-c447323ac70f from this chassis (sb_readonly=0)
Nov 22 09:45:00 compute-0 ovn_controller[152872]: 2025-11-22T09:45:00Z|01501|binding|INFO|Setting lport 7f5f15bb-83ef-4c81-8585-c447323ac70f down in Southbound
Nov 22 09:45:00 compute-0 ovn_controller[152872]: 2025-11-22T09:45:00Z|01502|binding|INFO|Removing iface tap7f5f15bb-83 ovn-installed in OVS
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.239 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:14:00 10.100.0.12'], port_security=['fa:16:3e:e3:14:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dab57683-82b6-44b3-b663-556a4f0e3dab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7139b3cb-5e3b-45f1-be1c-957199bdba02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7f5f15bb-83ef-4c81-8585-c447323ac70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.240 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7f5f15bb-83ef-4c81-8585-c447323ac70f in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 unbound from our chassis
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.242 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[824db2d7-b16b-4603-8971-15f5910207bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.243 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 namespace which is not needed anymore
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 22 09:45:00 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000087.scope: Consumed 15.003s CPU time.
Nov 22 09:45:00 compute-0 systemd-machined[215941]: Machine qemu-166-instance-00000087 terminated.
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.352 253665 INFO nova.virt.libvirt.driver [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance destroyed successfully.
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.353 253665 DEBUG nova.objects.instance [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid dab57683-82b6-44b3-b663-556a4f0e3dab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.367 253665 DEBUG nova.virt.libvirt.vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1917813057',display_name='tempest-TestNetworkBasicOps-server-1917813057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1917813057',id=135,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7RKy18xj59Mrr1Qz0apb8VM0RVo9aCtcuaRLK/Njyb/8H+0bdEC3XqXWMpAl+tfEMf3lBrH+nx/Y/xtjJAjEL/9WZ1nk79dDJUwDIjACi8kN3FW6TUbGYNm9djoYtMsA==',key_name='tempest-TestNetworkBasicOps-1265217701',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1ixz1m3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:04Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=dab57683-82b6-44b3-b663-556a4f0e3dab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.367 253665 DEBUG nova.network.os_vif_util [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.368 253665 DEBUG nova.network.os_vif_util [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.369 253665 DEBUG os_vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.372 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f5f15bb-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.374 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.379 253665 INFO os_vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83')
Nov 22 09:45:00 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : haproxy version is 2.8.14-c23fe91
Nov 22 09:45:00 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : path to executable is /usr/sbin/haproxy
Nov 22 09:45:00 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [WARNING]  (392791) : Exiting Master process...
Nov 22 09:45:00 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [ALERT]    (392791) : Current worker (392793) exited with code 143 (Terminated)
Nov 22 09:45:00 compute-0 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [WARNING]  (392791) : All workers exited. Exiting... (0)
Nov 22 09:45:00 compute-0 systemd[1]: libpod-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3.scope: Deactivated successfully.
Nov 22 09:45:00 compute-0 podman[395497]: 2025-11-22 09:45:00.395138714 +0000 UTC m=+0.060593197 container died 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3-userdata-shm.mount: Deactivated successfully.
Nov 22 09:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-743f2a86a0af71728ae8cd7ad6b829b53bc590a580aa2c574008ce5dfdbe39ae-merged.mount: Deactivated successfully.
Nov 22 09:45:00 compute-0 podman[395497]: 2025-11-22 09:45:00.476812841 +0000 UTC m=+0.142267304 container cleanup 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:45:00 compute-0 podman[395529]: 2025-11-22 09:45:00.478085212 +0000 UTC m=+0.064445822 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:45:00 compute-0 podman[395539]: 2025-11-22 09:45:00.482743228 +0000 UTC m=+0.064823183 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 09:45:00 compute-0 systemd[1]: libpod-conmon-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3.scope: Deactivated successfully.
Nov 22 09:45:00 compute-0 podman[395590]: 2025-11-22 09:45:00.572066633 +0000 UTC m=+0.073175368 container remove 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7b06db-6ffd-476e-97ca-b15c52bb40a3]: (4, ('Sat Nov 22 09:45:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 (9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3)\n9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3\nSat Nov 22 09:45:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 (9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3)\n9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.579 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ade4ab52-1899-4ee7-b7f1-9d22b0a7282f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.579 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 kernel: tapccaaf7d7-d0: left promiscuous mode
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.601 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44843621-c1b0-4131-bd05-f581c15f94f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7dad3086-01f6-412a-b5cf-8542b6a818e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23bead0c-d47b-46a1-8b82-21b6254d3193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[195c74a4-b05b-49a7-9ca5-a8bbf406d4ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749395, 'reachable_time': 31571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395605, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dccaaf7d7\x2dd083\x2d4f4d\x2d9c25\x2d562b3924cdc3.mount: Deactivated successfully.
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.635 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:45:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.635 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[95b606ca-e31d-4670-acf9-85714cc6554d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.743 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.743 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:00 compute-0 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:01 compute-0 ceph-mon[75021]: pgmap v2493: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 145 op/s
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.401 253665 INFO nova.virt.libvirt.driver [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deleting instance files /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab_del
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.402 253665 INFO nova.virt.libvirt.driver [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deletion of /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab_del complete
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.449 253665 INFO nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 1.33 seconds to destroy the instance on the hypervisor.
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG oslo.service.loopingcall [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG nova.network.neutron [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:45:01 compute-0 nova_compute[253661]: 2025-11-22 09:45:01.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.854 253665 DEBUG nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:02 compute-0 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 WARNING nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state deleting.
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023620132512860584 of space, bias 1.0, pg target 0.7086039753858175 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:45:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.093 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.093 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.114 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:03 compute-0 ceph-mon[75021]: pgmap v2494: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.428 253665 DEBUG nova.network.neutron [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.451 253665 INFO nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 2.00 seconds to deallocate network for instance.
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.516 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.516 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:03 compute-0 nova_compute[253661]: 2025-11-22 09:45:03.663 253665 DEBUG oslo_concurrency.processutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.032 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.033 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.050 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:45:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267366814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.112 253665 DEBUG oslo_concurrency.processutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.116 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.118 253665 DEBUG nova.compute.provider_tree [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.129 253665 DEBUG nova.scheduler.client.report [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.146 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.148 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.154 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.155 253665 INFO nova.compute.claims [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.192 253665 INFO nova.scheduler.client.report [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance dab57683-82b6-44b3-b663-556a4f0e3dab
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.274 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1267366814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.342 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.478 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 WARNING nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.563 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.592 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352016269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.783 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.789 253665 DEBUG nova.compute.provider_tree [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.809 253665 DEBUG nova.scheduler.client.report [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.837 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.838 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.884 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.885 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.906 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.922 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.946 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-deleted-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.946 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:04 compute-0 nova_compute[253661]: 2025-11-22 09:45:04.948 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.018 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.019 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.019 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating image(s)
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.040 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.059 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.086 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.090 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.141 253665 DEBUG nova.policy [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.161 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.162 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.163 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.163 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.185 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.190 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:05 compute-0 ceph-mon[75021]: pgmap v2495: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Nov 22 09:45:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2352016269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:05 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.719 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.781 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.889 253665 DEBUG nova.objects.instance [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Ensure instance console log exists: /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.906 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:05 compute-0 nova_compute[253661]: 2025-11-22 09:45:05.906 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 127 op/s
Nov 22 09:45:06 compute-0 ceph-mon[75021]: pgmap v2496: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 127 op/s
Nov 22 09:45:06 compute-0 podman[395817]: 2025-11-22 09:45:06.408374649 +0000 UTC m=+0.108477270 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 22 09:45:06 compute-0 nova_compute[253661]: 2025-11-22 09:45:06.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:06 compute-0 nova_compute[253661]: 2025-11-22 09:45:06.746 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Successfully created port: f0192978-0953-4171-b70f-7f21bd6af5a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:45:06 compute-0 nova_compute[253661]: 2025-11-22 09:45:06.848 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:06 compute-0 nova_compute[253661]: 2025-11-22 09:45:06.849 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:06 compute-0 nova_compute[253661]: 2025-11-22 09:45:06.871 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.563 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Successfully updated port: f0192978-0953-4171-b70f-7f21bd6af5a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.586 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.586 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.587 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG nova.compute.manager [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG nova.compute.manager [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing instance network info cache due to event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:07 compute-0 nova_compute[253661]: 2025-11-22 09:45:07.815 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:45:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 255 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 222 KiB/s wr, 138 op/s
Nov 22 09:45:08 compute-0 ovn_controller[152872]: 2025-11-22T09:45:08Z|01503|binding|INFO|Releasing lport 3db82a3e-3c50-4f8e-b5b4-8b4657d60723 from this chassis (sb_readonly=0)
Nov 22 09:45:08 compute-0 ovn_controller[152872]: 2025-11-22T09:45:08Z|01504|binding|INFO|Releasing lport b35ca171-2b2e-44d8-96a4-4559f6282fda from this chassis (sb_readonly=0)
Nov 22 09:45:08 compute-0 ovn_controller[152872]: 2025-11-22T09:45:08Z|01505|binding|INFO|Releasing lport ce538828-218d-4def-9bed-efeb786012c8 from this chassis (sb_readonly=0)
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.698 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.719 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.719 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance network_info: |[{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.720 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.720 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.723 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start _get_guest_xml network_info=[{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.735 253665 WARNING nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.743 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.744 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.747 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:45:08 compute-0 nova_compute[253661]: 2025-11-22 09:45:08.756 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:45:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185236624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.243 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.264 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.268 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:09 compute-0 ceph-mon[75021]: pgmap v2497: 305 pgs: 305 active+clean; 255 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 222 KiB/s wr, 138 op/s
Nov 22 09:45:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1185236624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.737 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804694.7360916, 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.738 253665 INFO nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] VM Stopped (Lifecycle Event)
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.762 253665 DEBUG nova.compute.manager [None req-4f6ed317-5c6e-485d-8473-2fe3cec84725 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:45:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633205509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.817 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.819 253665 DEBUG nova.virt.libvirt.vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:04Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.819 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.820 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.821 253665 DEBUG nova.objects.instance [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.834 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <uuid>ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</uuid>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <name>instance-0000008c</name>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118</nova:name>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:45:08</nova:creationTime>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <nova:port uuid="f0192978-0953-4171-b70f-7f21bd6af5a0">
Nov 22 09:45:09 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <system>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="serial">ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="uuid">ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </system>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <os>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </os>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <features>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </features>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk">
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config">
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </source>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:45:09 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:73:98:16"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <target dev="tapf0192978-09"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/console.log" append="off"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <video>
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </video>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:45:09 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:45:09 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:45:09 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:45:09 compute-0 nova_compute[253661]: </domain>
Nov 22 09:45:09 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Preparing to wait for external event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG nova.virt.libvirt.vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:04Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.837 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.837 253665 DEBUG os_vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.839 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0192978-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0192978-09, col_values=(('external_ids', {'iface-id': 'f0192978-0953-4171-b70f-7f21bd6af5a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:98:16', 'vm-uuid': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:09 compute-0 NetworkManager[48920]: <info>  [1763804709.8908] manager: (tapf0192978-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.898 253665 INFO os_vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09')
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.958 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:73:98:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Using config drive
Nov 22 09:45:09 compute-0 nova_compute[253661]: 2025-11-22 09:45:09.980 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 297 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Nov 22 09:45:10 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/633205509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.453 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating config drive at /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.459 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lod7y93 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.601 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lod7y93" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.631 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.635 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.674 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updated VIF entry in instance network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.676 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.690 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.866 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.867 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deleting local config drive /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config because it was imported into RBD.
Nov 22 09:45:10 compute-0 kernel: tapf0192978-09: entered promiscuous mode
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:10 compute-0 ovn_controller[152872]: 2025-11-22T09:45:10Z|01506|binding|INFO|Claiming lport f0192978-0953-4171-b70f-7f21bd6af5a0 for this chassis.
Nov 22 09:45:10 compute-0 ovn_controller[152872]: 2025-11-22T09:45:10Z|01507|binding|INFO|f0192978-0953-4171-b70f-7f21bd6af5a0: Claiming fa:16:3e:73:98:16 10.100.0.9
Nov 22 09:45:10 compute-0 NetworkManager[48920]: <info>  [1763804710.9171] manager: (tapf0192978-09): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.921 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:98:16 10.100.0.9'], port_security=['fa:16:3e:73:98:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0192978-0953-4171-b70f-7f21bd6af5a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.923 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0192978-0953-4171-b70f-7f21bd6af5a0 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 bound to our chassis
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.925 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 09:45:10 compute-0 ovn_controller[152872]: 2025-11-22T09:45:10Z|01508|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 ovn-installed in OVS
Nov 22 09:45:10 compute-0 ovn_controller[152872]: 2025-11-22T09:45:10Z|01509|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 up in Southbound
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:10 compute-0 nova_compute[253661]: 2025-11-22 09:45:10.940 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.945 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e47f3-10ae-4e2c-b7ca-f1465845cfcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:10 compute-0 systemd-machined[215941]: New machine qemu-171-instance-0000008c.
Nov 22 09:45:10 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008c.
Nov 22 09:45:10 compute-0 systemd-udevd[395981]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[403fd93c-f30b-469e-b0a0-d0fa15fab46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.989 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb30e9d-f05a-4e53-94cb-3299a40b8e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:10 compute-0 NetworkManager[48920]: <info>  [1763804710.9982] device (tapf0192978-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:45:11 compute-0 NetworkManager[48920]: <info>  [1763804711.0004] device (tapf0192978-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.020 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4c32ccf3-4e85-4b16-9fd3-770bebbe246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.037 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ade384-4785-42ef-b606-f64d608fe801]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.052 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a5d70-3e37-4857-8290-5c9fffa25c09]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752643, 'tstamp': 752643}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395993, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752646, 'tstamp': 752646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395993, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.053 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.108 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.108 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:11 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.187 253665 DEBUG nova.compute.manager [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG nova.compute.manager [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Processing event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:45:11 compute-0 ceph-mon[75021]: pgmap v2498: 305 pgs: 305 active+clean; 297 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Nov 22 09:45:11 compute-0 ovn_controller[152872]: 2025-11-22T09:45:11Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 09:45:11 compute-0 ovn_controller[152872]: 2025-11-22T09:45:11Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.750 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7499137, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.751 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Started (Lifecycle Event)
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.752 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.756 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.760 253665 INFO nova.virt.libvirt.driver [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance spawned successfully.
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.761 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.787 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.793 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7500281, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Paused (Lifecycle Event)
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.849 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.853 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7555947, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.853 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Resumed (Lifecycle Event)
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.860 253665 INFO nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 6.84 seconds to spawn the instance on the hypervisor.
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.861 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.867 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.869 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.887 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.912 253665 INFO nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 7.81 seconds to build instance.
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.924 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.925 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:11 compute-0 nova_compute[253661]: 2025-11-22 09:45:11.943 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 133 op/s
Nov 22 09:45:12 compute-0 ceph-mon[75021]: pgmap v2499: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 133 op/s
Nov 22 09:45:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:45:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:45:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:45:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.329 253665 DEBUG nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.331 253665 DEBUG nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:13 compute-0 nova_compute[253661]: 2025-11-22 09:45:13.331 253665 WARNING nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received unexpected event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with vm_state active and task_state None.
Nov 22 09:45:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:45:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:45:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 09:45:14 compute-0 ceph-mon[75021]: pgmap v2500: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 09:45:14 compute-0 nova_compute[253661]: 2025-11-22 09:45:14.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.351 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804700.3500113, dab57683-82b6-44b3-b663-556a4f0e3dab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.351 253665 INFO nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] VM Stopped (Lifecycle Event)
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.380 253665 DEBUG nova.compute.manager [None req-d50896d9-1a46-4c2a-ba6a-ac607b4aaa67 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.compute.manager [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.compute.manager [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing instance network info cache due to event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:15 compute-0 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Nov 22 09:45:16 compute-0 nova_compute[253661]: 2025-11-22 09:45:16.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:17 compute-0 ceph-mon[75021]: pgmap v2501: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Nov 22 09:45:18 compute-0 nova_compute[253661]: 2025-11-22 09:45:18.069 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updated VIF entry in instance network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:18 compute-0 nova_compute[253661]: 2025-11-22 09:45:18.070 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:18 compute-0 nova_compute[253661]: 2025-11-22 09:45:18.126 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 09:45:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:19 compute-0 ceph-mon[75021]: pgmap v2502: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 09:45:19 compute-0 nova_compute[253661]: 2025-11-22 09:45:19.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Nov 22 09:45:21 compute-0 ceph-mon[75021]: pgmap v2503: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Nov 22 09:45:21 compute-0 nova_compute[253661]: 2025-11-22 09:45:21.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG nova.compute.manager [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG nova.compute.manager [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.005 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.005 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.093 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.094 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.097 253665 INFO nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Terminating instance
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.098 253665 DEBUG nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.227 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.229 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.246 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.340 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.341 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.504 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.505 253665 INFO nova.compute.claims [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:45:22 compute-0 nova_compute[253661]: 2025-11-22 09:45:22.675 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:22 compute-0 ceph-mon[75021]: pgmap v2504: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 22 09:45:22 compute-0 kernel: tap2979286f-0f (unregistering): left promiscuous mode
Nov 22 09:45:23 compute-0 NetworkManager[48920]: <info>  [1763804723.0086] device (tap2979286f-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01510|binding|INFO|Releasing lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab from this chassis (sb_readonly=0)
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01511|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab down in Southbound
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01512|binding|INFO|Removing iface tap2979286f-0f ovn-installed in OVS
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 kernel: tap7b663864-29 (unregistering): left promiscuous mode
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.029 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:45 10.100.0.13'], port_security=['fa:16:3e:f9:a0:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2979286f-0fdd-4b20-9c29-da29aac8e5ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.030 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2979286f-0fdd-4b20-9c29-da29aac8e5ab in datapath 621dd092-e20a-432f-8488-41d7fcd69532 unbound from our chassis
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.033 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621dd092-e20a-432f-8488-41d7fcd69532
Nov 22 09:45:23 compute-0 NetworkManager[48920]: <info>  [1763804723.0342] device (tap7b663864-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.047 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43a91cbd-e19c-4a77-8b9c-ba2c8f4e4179]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01513|binding|INFO|Releasing lport 7b663864-2935-4127-ab02-75e4a0acfc73 from this chassis (sb_readonly=0)
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01514|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 down in Southbound
Nov 22 09:45:23 compute-0 ovn_controller[152872]: 2025-11-22T09:45:23Z|01515|binding|INFO|Removing iface tap7b663864-29 ovn-installed in OVS
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.069 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], port_security=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:1042/64', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7b663864-2935-4127-ab02-75e4a0acfc73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.081 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bad4c1dd-dd3d-4dfc-b220-73b4c00e5c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ca668d3a-7420-4ed5-87ca-7497ed4b3bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 22 09:45:23 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Consumed 14.076s CPU time.
Nov 22 09:45:23 compute-0 systemd-machined[215941]: Machine qemu-170-instance-0000008b terminated.
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.117 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba106420-07c4-45ba-8668-4436e4e819df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 NetworkManager[48920]: <info>  [1763804723.1329] manager: (tap7b663864-29): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.137 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b801ae-fbaf-403b-8105-9f07441b93f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 20749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396074, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.158 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance destroyed successfully.
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.161 253665 DEBUG nova.objects.instance [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767172878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b85f49be-eeba-4ccd-a6ed-9f0f9894f1dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750294, 'tstamp': 750294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396088, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750297, 'tstamp': 750297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396088, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.178 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621dd092-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621dd092-e0, col_values=(('external_ids', {'iface-id': 'ce538828-218d-4def-9bed-efeb786012c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.187 253665 DEBUG nova.virt.libvirt.vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:56Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.188 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.188 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.189 253665 DEBUG os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.190 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7b663864-2935-4127-ab02-75e4a0acfc73 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.191 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2979286f-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.192 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.203 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.204 253665 INFO os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f')
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.205 253665 DEBUG nova.virt.libvirt.vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:56Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.206 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.207 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.207 253665 DEBUG os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c590d1bb-df49-48e5-85c7-70372e4f0566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.210 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b663864-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.215 253665 INFO os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29')
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.236 253665 DEBUG nova.compute.provider_tree [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.239 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b117d142-4e39-4d69-a21c-e6d16adf05fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.244 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[477768dd-f97e-4b4d-9153-d4c37124adb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.267 253665 DEBUG nova.scheduler.client.report [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.284 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.284 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.287 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[adf121f8-8c0a-4e52-b6aa-68148e040ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.296 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.297 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.306 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f0452f-75bf-42d0-b7c6-65441cfbd930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 21380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396120, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.325 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e24a7f8d-ce58-4c32-ae93-262fc0e0733f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a504de2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750393, 'tstamp': 750393}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396121, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.327 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a504de2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a504de2-20, col_values=(('external_ids', {'iface-id': 'b35ca171-2b2e-44d8-96a4-4559f6282fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:23 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.365 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.366 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.377 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.378 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.391 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.407 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.419 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.523 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.524 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.524 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating image(s)
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.542 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.564 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.585 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.589 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.669 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.670 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.694 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.698 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:23 compute-0 nova_compute[253661]: 2025-11-22 09:45:23.858 253665 DEBUG nova.policy [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:45:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 877 KiB/s wr, 111 op/s
Nov 22 09:45:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2767172878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.357 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Successfully created port: 2bf46f44-05ff-4af4-ba41-f280a21be09e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:45:25 compute-0 ceph-mon[75021]: pgmap v2505: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 877 KiB/s wr, 111 op/s
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 WARNING nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with vm_state active and task_state deleting.
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.435 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.435 253665 WARNING nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with vm_state active and task_state deleting.
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.613 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.679 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.881 253665 DEBUG nova.objects.instance [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.895 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.896 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Ensure instance console log exists: /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:25 compute-0 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 25 KiB/s wr, 27 op/s
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.379 253665 INFO nova.virt.libvirt.driver [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deleting instance files /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_del
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.381 253665 INFO nova.virt.libvirt.driver [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deletion of /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_del complete
Nov 22 09:45:26 compute-0 ceph-mon[75021]: pgmap v2506: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 25 KiB/s wr, 27 op/s
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.473 253665 INFO nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 4.38 seconds to destroy the instance on the hypervisor.
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.474 253665 DEBUG oslo.service.loopingcall [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.475 253665 DEBUG nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.475 253665 DEBUG nova.network.neutron [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.839 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Successfully updated port: 2bf46f44-05ff-4af4-ba41-f280a21be09e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.867 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.867 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:26 compute-0 nova_compute[253661]: 2025-11-22 09:45:26.868 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.050 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.535 253665 DEBUG nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-deleted-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.536 253665 INFO nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Neutron deleted interface 2979286f-0fdd-4b20-9c29-da29aac8e5ab; detaching it from the instance and deleting it from the info cache
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.536 253665 DEBUG nova.network.neutron [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.568 253665 DEBUG nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Detach interface failed, port_id=2979286f-0fdd-4b20-9c29-da29aac8e5ab, reason: Instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.619 253665 DEBUG nova.compute.manager [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.619 253665 DEBUG nova.compute.manager [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:27 compute-0 nova_compute[253661]: 2025-11-22 09:45:27.620 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:27 compute-0 ovn_controller[152872]: 2025-11-22T09:45:27Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:98:16 10.100.0.9
Nov 22 09:45:27 compute-0 ovn_controller[152872]: 2025-11-22T09:45:27Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:98:16 10.100.0.9
Nov 22 09:45:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.083 253665 DEBUG nova.network.neutron [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.098 253665 INFO nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 1.62 seconds to deallocate network for instance.
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.140 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.141 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 791 KiB/s wr, 65 op/s
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.255 253665 DEBUG oslo_concurrency.processutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.293 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.309 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance network_info: |[{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.313 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start _get_guest_xml network_info=[{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.317 253665 WARNING nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.326 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.326 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.329 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.335 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651576619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.709 253665 DEBUG oslo_concurrency.processutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.715 253665 DEBUG nova.compute.provider_tree [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.729 253665 DEBUG nova.scheduler.client.report [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.747 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:45:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478967708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.775 253665 INFO nova.scheduler.client.report [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.779 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.808 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.812 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:28 compute-0 nova_compute[253661]: 2025-11-22 09:45:28.916 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:45:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142577657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.287 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.289 253665 DEBUG nova.virt.libvirt.vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:23Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.289 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.290 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.291 253665 DEBUG nova.objects.instance [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.302 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <uuid>e1b6c07e-b79f-4b39-a2b8-a952e54f4972</uuid>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <name>instance-0000008d</name>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:name>tempest-TestNetworkBasicOps-server-221401421</nova:name>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:45:28</nova:creationTime>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <nova:port uuid="2bf46f44-05ff-4af4-ba41-f280a21be09e">
Nov 22 09:45:29 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <system>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="serial">e1b6c07e-b79f-4b39-a2b8-a952e54f4972</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="uuid">e1b6c07e-b79f-4b39-a2b8-a952e54f4972</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </system>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <os>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </os>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <features>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </features>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk">
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config">
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </source>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:45:29 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ec:a9:1c"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <target dev="tap2bf46f44-05"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/console.log" append="off"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <video>
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </video>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:45:29 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:45:29 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:45:29 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:45:29 compute-0 nova_compute[253661]: </domain>
Nov 22 09:45:29 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Preparing to wait for external event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.304 253665 DEBUG nova.virt.libvirt.vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:23Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.305 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.305 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.306 253665 DEBUG os_vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.312 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bf46f44-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.313 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bf46f44-05, col_values=(('external_ids', {'iface-id': '2bf46f44-05ff-4af4-ba41-f280a21be09e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:a9:1c', 'vm-uuid': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:29 compute-0 NetworkManager[48920]: <info>  [1763804729.3154] manager: (tap2bf46f44-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.320 253665 INFO os_vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05')
Nov 22 09:45:29 compute-0 ceph-mon[75021]: pgmap v2507: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 791 KiB/s wr, 65 op/s
Nov 22 09:45:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/651576619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3478967708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2142577657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.381 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.381 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.382 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ec:a9:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.382 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Using config drive
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.400 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.626 253665 DEBUG nova.compute.manager [req-eb9a8ee3-3122-4d09-98b8-d3e89ad7a5c0 req-6003503d-d8cd-4aa5-a4a7-d3d5f489080f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-deleted-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.643 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.644 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.655 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.843 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating config drive at /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.848 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxymbl1u6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:29 compute-0 nova_compute[253661]: 2025-11-22 09:45:29.988 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxymbl1u6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.011 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.014 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.179 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.180 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deleting local config drive /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config because it was imported into RBD.
Nov 22 09:45:30 compute-0 kernel: tap2bf46f44-05: entered promiscuous mode
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.2291] manager: (tap2bf46f44-05): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01516|binding|INFO|Claiming lport 2bf46f44-05ff-4af4-ba41-f280a21be09e for this chassis.
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01517|binding|INFO|2bf46f44-05ff-4af4-ba41-f280a21be09e: Claiming fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.244 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:a9:1c 10.100.0.9'], port_security=['fa:16:3e:ec:a9:1c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf4ccd55-5049-48da-a040-7bc492278d9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bf69086-9ee8-4131-a2f6-8ce3890c821e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bf46f44-05ff-4af4-ba41-f280a21be09e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.244 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf46f44-05ff-4af4-ba41-f280a21be09e in datapath 32b06b6f-2dbe-45a6-a0ed-07f342aa967b bound to our chassis
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.246 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32b06b6f-2dbe-45a6-a0ed-07f342aa967b
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01518|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e ovn-installed in OVS
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01519|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e up in Southbound
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 systemd-udevd[396446]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.262 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a81404af-0571-45fc-9c80-59bfba61550b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.263 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32b06b6f-21 in ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.265 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32b06b6f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.265 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e217332-26a8-403f-96e1-efbdfaa0b1af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.267 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7020dd-582b-4328-8146-4c906a6bcdce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 systemd-machined[215941]: New machine qemu-172-instance-0000008d.
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.2729] device (tap2bf46f44-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.2740] device (tap2bf46f44-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.278 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[542323b0-27d4-4373-991d-87c52dba0f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.284 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.284 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:30 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.285 253665 INFO nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Terminating instance
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.286 253665 DEBUG nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.302 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4566bee-e78f-4f6d-b342-2f1acd31bb7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.332 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef854b-8a1f-4d1b-9c10-53f294b846f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4055970a-ba2f-44ef-8557-a2aa52ec68ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 systemd-udevd[396450]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.3400] manager: (tap32b06b6f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/620)
Nov 22 09:45:30 compute-0 ceph-mon[75021]: pgmap v2508: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 22 09:45:30 compute-0 kernel: tap8e5490c3-8e (unregistering): left promiscuous mode
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.3556] device (tap8e5490c3-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01520|binding|INFO|Releasing lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 from this chassis (sb_readonly=0)
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01521|binding|INFO|Setting lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 down in Southbound
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01522|binding|INFO|Removing iface tap8e5490c3-8e ovn-installed in OVS
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 kernel: tap1010674e-1b (unregistering): left promiscuous mode
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.376 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f8:55 10.100.0.4'], port_security=['fa:16:3e:ec:f8:55 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8e5490c3-8e77-4f49-a612-31f17e0a3586) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.3789] device (tap1010674e-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.383 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc14a8a-59e6-4b3c-8958-7b0fd88b9adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.385 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b09daa8-91bf-4b72-8360-1cc68048e388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01523|binding|INFO|Releasing lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 from this chassis (sb_readonly=0)
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01524|binding|INFO|Setting lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 down in Southbound
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01525|binding|INFO|Removing iface tap1010674e-1b ovn-installed in OVS
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.404 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], port_security=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee5:3fd/64', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1010674e-1b87-43cb-97bd-6bca4325a7f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.4147] device (tap32b06b6f-20): carrier: link connected
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.420 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d273153-1f60-4959-a088-ea81190f5a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 22 09:45:30 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000088.scope: Consumed 15.873s CPU time.
Nov 22 09:45:30 compute-0 systemd-machined[215941]: Machine qemu-167-instance-00000088 terminated.
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.440 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa7d72c-dd0d-4057-a67b-64b01f897067]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32b06b6f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:46:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758115, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396489, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.455 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23368851-e2f9-499b-9cb4-a1018ae5ada1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:464b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758115, 'tstamp': 758115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396491, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.471 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4429b106-41aa-4cdd-bcdb-ab6d97f0f082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32b06b6f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:46:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758115, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396492, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[739fe10d-4bc9-4992-979f-3392a8652d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.5066] manager: (tap8e5490c3-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.534 253665 INFO nova.virt.libvirt.driver [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance destroyed successfully.
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.534 253665 DEBUG nova.objects.instance [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.549 253665 DEBUG nova.virt.libvirt.vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:17Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.549 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.550 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.550 253665 DEBUG os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e5490c3-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.559 253665 INFO os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e')
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.561 253665 DEBUG nova.virt.libvirt.vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:17Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.561 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.562 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.562 253665 DEBUG os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.564 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1010674e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.568 253665 INFO os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b')
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68d0c5b8-0c65-4f03-b975-fa98b1dd6dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32b06b6f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32b06b6f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:30 compute-0 NetworkManager[48920]: <info>  [1763804730.6032] manager: (tap32b06b6f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Nov 22 09:45:30 compute-0 kernel: tap32b06b6f-20: entered promiscuous mode
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.607 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32b06b6f-20, col_values=(('external_ids', {'iface-id': 'acb44b8b-e586-4d56-8c91-42b393fbe8ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_controller[152872]: 2025-11-22T09:45:30Z|01526|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.625 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.626 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d78bde6-821a-4de7-ad88-f5c46e568ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.628 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-32b06b6f-2dbe-45a6-a0ed-07f342aa967b
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 32b06b6f-2dbe-45a6-a0ed-07f342aa967b
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:45:30 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.628 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'env', 'PROCESS_TAG=haproxy-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:45:30 compute-0 podman[396552]: 2025-11-22 09:45:30.639202689 +0000 UTC m=+0.067838226 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:45:30 compute-0 podman[396549]: 2025-11-22 09:45:30.639206279 +0000 UTC m=+0.069188390 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.698 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804730.6985383, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.699 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Started (Lifecycle Event)
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.729 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804730.6986473, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.729 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Paused (Lifecycle Event)
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.757 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.760 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:45:30 compute-0 nova_compute[253661]: 2025-11-22 09:45:30.783 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.020 253665 INFO nova.virt.libvirt.driver [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deleting instance files /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f_del
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.021 253665 INFO nova.virt.libvirt.driver [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deletion of /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f_del complete
Nov 22 09:45:31 compute-0 podman[396642]: 2025-11-22 09:45:31.0333084 +0000 UTC m=+0.072436669 container create 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:45:31 compute-0 systemd[1]: Started libpod-conmon-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope.
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.077 253665 INFO nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG oslo.service.loopingcall [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG nova.network.neutron [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:45:31 compute-0 podman[396642]: 2025-11-22 09:45:30.994566643 +0000 UTC m=+0.033694942 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:45:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d1c1e0ba78584dff2dec95748fc513d262bc386f559cea8bbfce0b30478ef5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:31 compute-0 podman[396642]: 2025-11-22 09:45:31.14385218 +0000 UTC m=+0.182980489 container init 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:45:31 compute-0 podman[396642]: 2025-11-22 09:45:31.150558186 +0000 UTC m=+0.189686465 container start 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : New worker (396663) forked
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : Loading success.
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.213 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8e5490c3-8e77-4f49-a612-31f17e0a3586 in datapath 621dd092-e20a-432f-8488-41d7fcd69532 unbound from our chassis
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.215 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 621dd092-e20a-432f-8488-41d7fcd69532, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2824d233-183c-41ff-b85e-54d46134d9a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.217 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 namespace which is not needed anymore
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : haproxy version is 2.8.14-c23fe91
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : path to executable is /usr/sbin/haproxy
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [WARNING]  (393104) : Exiting Master process...
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [ALERT]    (393104) : Current worker (393106) exited with code 143 (Terminated)
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [WARNING]  (393104) : All workers exited. Exiting... (0)
Nov 22 09:45:31 compute-0 systemd[1]: libpod-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope: Deactivated successfully.
Nov 22 09:45:31 compute-0 conmon[393100]: conmon fd2df0d2ca6f704d4075 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope/container/memory.events
Nov 22 09:45:31 compute-0 podman[396689]: 2025-11-22 09:45:31.400889998 +0000 UTC m=+0.068059852 container died fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6-userdata-shm.mount: Deactivated successfully.
Nov 22 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec57aa6059e798a6228ab70569192414cfc2ac4701797ee13c0ca77e6733e50-merged.mount: Deactivated successfully.
Nov 22 09:45:31 compute-0 podman[396689]: 2025-11-22 09:45:31.464685362 +0000 UTC m=+0.131855216 container cleanup fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:45:31 compute-0 systemd[1]: libpod-conmon-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope: Deactivated successfully.
Nov 22 09:45:31 compute-0 podman[396720]: 2025-11-22 09:45:31.526536079 +0000 UTC m=+0.040243104 container remove fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.532 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6844d2-7baf-45bc-99f0-e621545b495e]: (4, ('Sat Nov 22 09:45:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 (fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6)\nfd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6\nSat Nov 22 09:45:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 (fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6)\nfd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.533 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e72be594-9926-4ce2-ac24-74ff0930ff51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.534 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:31 compute-0 kernel: tap621dd092-e0: left promiscuous mode
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2198c3-7d18-4a3e-adb5-a2a9090c52dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e95be6e1-919a-40f7-b03c-c65523a87eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.574 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60d6ab1c-c97b-4781-9641-4f75515f923c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c83c410e-fea1-4a60-877b-fd7aa2e8a25a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750275, 'reachable_time': 16057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396735, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.593 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.593 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4db22d86-a9cd-422d-93f8-95a67e8763c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.594 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1010674e-1b87-43cb-97bd-6bca4325a7f9 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis
Nov 22 09:45:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d621dd092\x2de20a\x2d432f\x2d8488\x2d41d7fcd69532.mount: Deactivated successfully.
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.596 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a504de2-27b2-4d01-a183-d9b0331ca31e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa74460-e1d2-4f46-924d-d093a5087c60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.597 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e namespace which is not needed anymore
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : haproxy version is 2.8.14-c23fe91
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : path to executable is /usr/sbin/haproxy
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [WARNING]  (393174) : Exiting Master process...
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [ALERT]    (393174) : Current worker (393176) exited with code 143 (Terminated)
Nov 22 09:45:31 compute-0 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [WARNING]  (393174) : All workers exited. Exiting... (0)
Nov 22 09:45:31 compute-0 systemd[1]: libpod-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7.scope: Deactivated successfully.
Nov 22 09:45:31 compute-0 podman[396752]: 2025-11-22 09:45:31.736040923 +0000 UTC m=+0.043467364 container died f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.749 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing instance network info cache due to event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.751 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7-userdata-shm.mount: Deactivated successfully.
Nov 22 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-29096a98af37d370bc2562ae093d356474f4646e4103cb41d2d7d5290f21d167-merged.mount: Deactivated successfully.
Nov 22 09:45:31 compute-0 podman[396752]: 2025-11-22 09:45:31.773930159 +0000 UTC m=+0.081356600 container cleanup f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:45:31 compute-0 systemd[1]: libpod-conmon-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7.scope: Deactivated successfully.
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.888 253665 DEBUG nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-deleted-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.889 253665 INFO nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Neutron deleted interface 1010674e-1b87-43cb-97bd-6bca4325a7f9; detaching it from the instance and deleting it from the info cache
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.889 253665 DEBUG nova.network.neutron [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.912 253665 DEBUG nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Detach interface failed, port_id=1010674e-1b87-43cb-97bd-6bca4325a7f9, reason: Instance 48af02cd-94c5-473f-a6f9-4d2caad8483f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:45:31 compute-0 podman[396781]: 2025-11-22 09:45:31.987476302 +0000 UTC m=+0.195388386 container remove f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.992 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6585ce0a-e6f4-433c-b879-baf2fbdf0e44]: (4, ('Sat Nov 22 09:45:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e (f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7)\nf1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7\nSat Nov 22 09:45:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e (f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7)\nf1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d364bcf5-fc3a-4b2b-9783-f4927635e458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:31 compute-0 kernel: tap7a504de2-20: left promiscuous mode
Nov 22 09:45:31 compute-0 nova_compute[253661]: 2025-11-22 09:45:31.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:32 compute-0 nova_compute[253661]: 2025-11-22 09:45:32.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb3b3b0-fb08-451d-8f42-cc80689c8544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[235e1442-49a6-4f01-a297-ba5ce4c05571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.042 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32f22b5b-bfd8-4551-8c31-deec5bfc7293]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f2a937-61aa-4a15-a36d-7fca02547ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750368, 'reachable_time': 37441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396797, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.061 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:45:32 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.061 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[37a134ef-49cb-4467-8006-0d4c7cae973d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 444 KiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:45:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d7a504de2\x2d27b2\x2d4d01\x2da183\x2dd9b0331ca31e.mount: Deactivated successfully.
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.049 253665 DEBUG nova.network.neutron [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.070 253665 INFO nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 1.99 seconds to deallocate network for instance.
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.120 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.120 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.242 253665 DEBUG oslo_concurrency.processutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:33 compute-0 ceph-mon[75021]: pgmap v2509: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 444 KiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.278 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.278 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.281 253665 INFO nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Terminating instance
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.281 253665 DEBUG nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:45:33 compute-0 kernel: tapf0192978-09 (unregistering): left promiscuous mode
Nov 22 09:45:33 compute-0 NetworkManager[48920]: <info>  [1763804733.3894] device (tapf0192978-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:33 compute-0 ovn_controller[152872]: 2025-11-22T09:45:33Z|01527|binding|INFO|Releasing lport f0192978-0953-4171-b70f-7f21bd6af5a0 from this chassis (sb_readonly=0)
Nov 22 09:45:33 compute-0 ovn_controller[152872]: 2025-11-22T09:45:33Z|01528|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 down in Southbound
Nov 22 09:45:33 compute-0 ovn_controller[152872]: 2025-11-22T09:45:33Z|01529|binding|INFO|Removing iface tapf0192978-09 ovn-installed in OVS
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.412 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:98:16 10.100.0.9'], port_security=['fa:16:3e:73:98:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '049b93fb-a2d2-4853-99c3-bef4a7dfe745', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0192978-0953-4171-b70f-7f21bd6af5a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.413 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0192978-0953-4171-b70f-7f21bd6af5a0 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 unbound from our chassis
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.415 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5967577e-899d-48ac-af32-8ac4fd5b6e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 22 09:45:33 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Consumed 13.562s CPU time.
Nov 22 09:45:33 compute-0 systemd-machined[215941]: Machine qemu-171-instance-0000008c terminated.
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.467 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe464810-656b-49d3-a608-855f461dd046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.471 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6dfdbf-1380-4f5e-a7c5-775f3ec3ca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.505 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af40aa40-96bd-45d9-a621-fe2a1a4f5e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[037f1221-0aea-4c27-b048-562669b46c6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396835, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.528 253665 INFO nova.virt.libvirt.driver [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance destroyed successfully.
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.529 253665 DEBUG nova.objects.instance [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.547 253665 DEBUG nova.virt.libvirt.vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:45:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:45:11Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.548 253665 DEBUG nova.network.os_vif_util [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a610acb8-32df-4903-9996-6e3a7306eda0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752643, 'tstamp': 752643}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396841, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752646, 'tstamp': 752646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396841, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.549 253665 DEBUG nova.network.os_vif_util [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.549 253665 DEBUG os_vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0192978-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:33 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.558 253665 INFO os_vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09')
Nov 22 09:45:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:33 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329337882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.716 253665 DEBUG oslo_concurrency.processutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.722 253665 DEBUG nova.compute.provider_tree [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.738 253665 DEBUG nova.scheduler.client.report [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.771 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.809 253665 INFO nova.scheduler.client.report [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 48af02cd-94c5-473f-a6f9-4d2caad8483f
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.863 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 WARNING nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 for instance with vm_state deleted and task_state None.
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:33 compute-0 nova_compute[253661]: 2025-11-22 09:45:33.882 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.002 253665 DEBUG nova.compute.manager [req-ceb51883-689c-4644-b50d-103bbebb8806 req-3d5e576c-2b43-4224-9bd4-b5fc32a1252f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-deleted-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.099 253665 INFO nova.virt.libvirt.driver [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deleting instance files /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_del
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.100 253665 INFO nova.virt.libvirt.driver [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deletion of /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_del complete
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.146 253665 INFO nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG oslo.service.loopingcall [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG nova.network.neutron [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:45:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 09:45:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3329337882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.423 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updated VIF entry in instance network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.424 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Processing event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 WARNING nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state building and task_state spawning.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 WARNING nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 for instance with vm_state active and task_state deleting.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.443 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.443 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.448 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804734.4478645, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.448 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Resumed (Lifecycle Event)
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.450 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.453 253665 INFO nova.virt.libvirt.driver [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance spawned successfully.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.454 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.464 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.474 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.476 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.476 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.521 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.553 253665 INFO nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 11.03 seconds to spawn the instance on the hypervisor.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.553 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.608 253665 INFO nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 12.31 seconds to build instance.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.754 253665 DEBUG nova.network.neutron [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.775 253665 INFO nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 0.63 seconds to deallocate network for instance.
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.824 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.825 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:34 compute-0 nova_compute[253661]: 2025-11-22 09:45:34.919 253665 DEBUG oslo_concurrency.processutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:35 compute-0 ceph-mon[75021]: pgmap v2510: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 09:45:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402981110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.380 253665 DEBUG oslo_concurrency.processutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.387 253665 DEBUG nova.compute.provider_tree [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.403 253665 DEBUG nova.scheduler.client.report [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.427 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.466 253665 INFO nova.scheduler.client.report [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance ed3583b5-6d93-4e3f-83e0-3b36f25f08f1
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.534 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.946 253665 DEBUG nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.946 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:35 compute-0 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 WARNING nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received unexpected event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with vm_state deleted and task_state None.
Nov 22 09:45:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.9 MiB/s wr, 209 op/s
Nov 22 09:45:36 compute-0 nova_compute[253661]: 2025-11-22 09:45:36.218 253665 DEBUG nova.compute.manager [req-ef57665d-fcb2-4098-a1a3-0f6fc623c93e req-263fd912-2d84-48df-8598-67aa05ab31cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-deleted-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1402981110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:36.332 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:36 compute-0 nova_compute[253661]: 2025-11-22 09:45:36.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:36.334 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:45:36 compute-0 nova_compute[253661]: 2025-11-22 09:45:36.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:37 compute-0 ceph-mon[75021]: pgmap v2511: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.9 MiB/s wr, 209 op/s
Nov 22 09:45:37 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:37.336 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:37 compute-0 podman[396886]: 2025-11-22 09:45:37.40516126 +0000 UTC m=+0.089825369 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.153 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804723.1486976, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.153 253665 INFO nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Stopped (Lifecycle Event)
Nov 22 09:45:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.183 253665 DEBUG nova.compute.manager [None req-a2738896-3169-4a6c-bcdb-82b1e82b86c4 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG nova.compute.manager [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-changed-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG nova.compute.manager [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing instance network info cache due to event network-changed-491b9f04-4133-4553-a044-0dffe6278421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.381 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.381 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.583 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.583 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.584 253665 INFO nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Terminating instance
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.585 253665 DEBUG nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:45:38 compute-0 kernel: tap491b9f04-41 (unregistering): left promiscuous mode
Nov 22 09:45:38 compute-0 NetworkManager[48920]: <info>  [1763804738.6381] device (tap491b9f04-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:45:38 compute-0 ovn_controller[152872]: 2025-11-22T09:45:38Z|01530|binding|INFO|Releasing lport 491b9f04-4133-4553-a044-0dffe6278421 from this chassis (sb_readonly=0)
Nov 22 09:45:38 compute-0 ovn_controller[152872]: 2025-11-22T09:45:38Z|01531|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 down in Southbound
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 ovn_controller[152872]: 2025-11-22T09:45:38Z|01532|binding|INFO|Removing iface tap491b9f04-41 ovn-installed in OVS
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 22 09:45:38 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Consumed 15.607s CPU time.
Nov 22 09:45:38 compute-0 systemd-machined[215941]: Machine qemu-169-instance-0000008a terminated.
Nov 22 09:45:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.710 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:7d:61 10.100.0.11'], port_security=['fa:16:3e:a8:7d:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '71ef7514-c6bd-40ee-852a-4b850ca0a05c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96 d6471b4e-7bc5-407e-a8cc-88aa50b6222f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=491b9f04-4133-4553-a044-0dffe6278421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.711 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 491b9f04-4133-4553-a044-0dffe6278421 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 unbound from our chassis
Nov 22 09:45:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.713 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a734f39d-baf0-4591-94dc-9057caf53bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f388ac8f-b2a4-4740-bed3-f333cd25cecc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 namespace which is not needed anymore
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.816 253665 INFO nova.virt.libvirt.driver [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance destroyed successfully.
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.817 253665 DEBUG nova.objects.instance [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.827 253665 DEBUG nova.virt.libvirt.vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:36Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.827 253665 DEBUG nova.network.os_vif_util [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.828 253665 DEBUG nova.network.os_vif_util [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.829 253665 DEBUG os_vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap491b9f04-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:38 compute-0 nova_compute[253661]: 2025-11-22 09:45:38.838 253665 INFO os_vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41')
Nov 22 09:45:38 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : haproxy version is 2.8.14-c23fe91
Nov 22 09:45:38 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : path to executable is /usr/sbin/haproxy
Nov 22 09:45:38 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [WARNING]  (394397) : Exiting Master process...
Nov 22 09:45:38 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [ALERT]    (394397) : Current worker (394405) exited with code 143 (Terminated)
Nov 22 09:45:38 compute-0 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [WARNING]  (394397) : All workers exited. Exiting... (0)
Nov 22 09:45:38 compute-0 systemd[1]: libpod-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope: Deactivated successfully.
Nov 22 09:45:38 compute-0 podman[396936]: 2025-11-22 09:45:38.87600112 +0000 UTC m=+0.064043703 container died e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852-userdata-shm.mount: Deactivated successfully.
Nov 22 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e778d95479f50b481d89a1bee1b1080db3f4d51fefff371a7f567798dd2494e6-merged.mount: Deactivated successfully.
Nov 22 09:45:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:38 compute-0 podman[396936]: 2025-11-22 09:45:38.938555884 +0000 UTC m=+0.126598467 container cleanup e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:45:38 compute-0 systemd[1]: libpod-conmon-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope: Deactivated successfully.
Nov 22 09:45:39 compute-0 podman[396993]: 2025-11-22 09:45:39.023743038 +0000 UTC m=+0.057352658 container remove e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1965c7-0a74-443d-899d-7915b1c44f86]: (4, ('Sat Nov 22 09:45:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 (e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852)\ne02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852\nSat Nov 22 09:45:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 (e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852)\ne02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.035 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc657208-fb3a-477e-8fd2-f35b43fbf2fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:39 compute-0 kernel: tapa734f39d-b0: left promiscuous mode
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.097 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99541157-c87c-495a-9f06-20d9efd5cde8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a099bd28-4423-469e-9432-f00464ee74a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[791be265-f481-4776-bdd0-da19db410be0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.138 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c527661e-1604-4eac-881e-64aed9c483ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752619, 'reachable_time': 39111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397008, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 systemd[1]: run-netns-ovnmeta\x2da734f39d\x2dbaf0\x2d4591\x2d94dc\x2d9057caf53bb4.mount: Deactivated successfully.
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.144 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:45:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.144 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e48c2eda-19b0-44ff-bc0d-036dfc23f855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:39 compute-0 ceph-mon[75021]: pgmap v2512: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.355 253665 INFO nova.virt.libvirt.driver [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deleting instance files /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c_del
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.357 253665 INFO nova.virt.libvirt.driver [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deletion of /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c_del complete
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.443 253665 INFO nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG oslo.service.loopingcall [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:45:39 compute-0 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG nova.network.neutron [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:45:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.506 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.506 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.508 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.880 253665 DEBUG nova.network.neutron [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 DEBUG nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-deleted-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 INFO nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Neutron deleted interface 491b9f04-4133-4553-a044-0dffe6278421; detaching it from the instance and deleting it from the info cache
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 DEBUG nova.network.neutron [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.943 253665 INFO nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 1.50 seconds to deallocate network for instance.
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.950 253665 DEBUG nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Detach interface failed, port_id=491b9f04-4133-4553-a044-0dffe6278421, reason: Instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.993 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:40 compute-0 nova_compute[253661]: 2025-11-22 09:45:40.994 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.056 253665 DEBUG oslo_concurrency.processutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:41 compute-0 ceph-mon[75021]: pgmap v2513: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Nov 22 09:45:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554038413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.496 253665 DEBUG oslo_concurrency.processutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.503 253665 DEBUG nova.compute.provider_tree [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.517 253665 DEBUG nova.scheduler.client.report [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.598 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.650 253665 INFO nova.scheduler.client.report [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.726 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:41 compute-0 ovn_controller[152872]: 2025-11-22T09:45:41Z|01533|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.865 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.865 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:41 compute-0 nova_compute[253661]: 2025-11-22 09:45:41.892 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 43 KiB/s wr, 165 op/s
Nov 22 09:45:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2554038413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:42 compute-0 ceph-mon[75021]: pgmap v2514: 305 pgs: 305 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 43 KiB/s wr, 165 op/s
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.752 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.753 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.753 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 WARNING nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received unexpected event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with vm_state deleted and task_state None.
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:42 compute-0 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:45:43 compute-0 nova_compute[253661]: 2025-11-22 09:45:43.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:44 compute-0 nova_compute[253661]: 2025-11-22 09:45:44.128 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:45:44 compute-0 nova_compute[253661]: 2025-11-22 09:45:44.130 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:44 compute-0 nova_compute[253661]: 2025-11-22 09:45:44.161 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 144 op/s
Nov 22 09:45:44 compute-0 nova_compute[253661]: 2025-11-22 09:45:44.256 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:45 compute-0 ceph-mon[75021]: pgmap v2515: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 144 op/s
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.527 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804730.5260248, 48af02cd-94c5-473f-a6f9-4d2caad8483f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.528 253665 INFO nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] VM Stopped (Lifecycle Event)
Nov 22 09:45:45 compute-0 ovn_controller[152872]: 2025-11-22T09:45:45Z|01534|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.548 253665 DEBUG nova.compute.manager [None req-8cae69bc-5698-4e33-9435-94e6b2db11f6 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:45 compute-0 nova_compute[253661]: 2025-11-22 09:45:45.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:45 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 22 09:45:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 111 op/s
Nov 22 09:45:46 compute-0 nova_compute[253661]: 2025-11-22 09:45:46.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:47 compute-0 ceph-mon[75021]: pgmap v2516: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 111 op/s
Nov 22 09:45:47 compute-0 nova_compute[253661]: 2025-11-22 09:45:47.446 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:45:47 compute-0 nova_compute[253661]: 2025-11-22 09:45:47.458 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:45:47 compute-0 nova_compute[253661]: 2025-11-22 09:45:47.458 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:45:47 compute-0 nova_compute[253661]: 2025-11-22 09:45:47.459 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:47 compute-0 nova_compute[253661]: 2025-11-22 09:45:47.459 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 96 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 KiB/s wr, 117 op/s
Nov 22 09:45:48 compute-0 ovn_controller[152872]: 2025-11-22T09:45:48Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 09:45:48 compute-0 ovn_controller[152872]: 2025-11-22T09:45:48Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 09:45:48 compute-0 nova_compute[253661]: 2025-11-22 09:45:48.524 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804733.5205002, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:48 compute-0 nova_compute[253661]: 2025-11-22 09:45:48.524 253665 INFO nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Stopped (Lifecycle Event)
Nov 22 09:45:48 compute-0 nova_compute[253661]: 2025-11-22 09:45:48.567 253665 DEBUG nova.compute.manager [None req-62b85729-f9b5-49a8-9f64-c829ed74e638 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:48 compute-0 nova_compute[253661]: 2025-11-22 09:45:48.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:49 compute-0 sudo[397032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:49 compute-0 sudo[397032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397032]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 sudo[397057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:45:49 compute-0 sudo[397057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397057]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 sudo[397082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:49 compute-0 sudo[397082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397082]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 ceph-mon[75021]: pgmap v2517: 305 pgs: 305 active+clean; 96 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 KiB/s wr, 117 op/s
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.246968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749247047, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2076, "num_deletes": 251, "total_data_size": 3349266, "memory_usage": 3394248, "flush_reason": "Manual Compaction"}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749267028, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3293594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49980, "largest_seqno": 52055, "table_properties": {"data_size": 3284181, "index_size": 5907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19543, "raw_average_key_size": 20, "raw_value_size": 3265357, "raw_average_value_size": 3394, "num_data_blocks": 261, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804527, "oldest_key_time": 1763804527, "file_creation_time": 1763804749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 20110 microseconds, and 10394 cpu microseconds.
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.267083) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3293594 bytes OK
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.267103) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269579) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269592) EVENT_LOG_v1 {"time_micros": 1763804749269588, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3340507, prev total WAL file size 3340507, number of live WAL files 2.
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.270425) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3216KB)], [116(8424KB)]
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749270489, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11920429, "oldest_snapshot_seqno": -1}
Nov 22 09:45:49 compute-0 sudo[397107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:45:49 compute-0 sudo[397107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7501 keys, 10263728 bytes, temperature: kUnknown
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749321163, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10263728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10213399, "index_size": 30441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 194100, "raw_average_key_size": 25, "raw_value_size": 10079283, "raw_average_value_size": 1343, "num_data_blocks": 1189, "num_entries": 7501, "num_filter_entries": 7501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.321558) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10263728 bytes
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.323034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 202.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 8015, records dropped: 514 output_compression: NoCompression
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.323050) EVENT_LOG_v1 {"time_micros": 1763804749323043, "job": 70, "event": "compaction_finished", "compaction_time_micros": 50800, "compaction_time_cpu_micros": 25020, "output_level": 6, "num_output_files": 1, "total_output_size": 10263728, "num_input_records": 8015, "num_output_records": 7501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749323649, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749324951, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.270331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:49 compute-0 sudo[397107]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 13982a1a-0d5e-49d1-a709-5b066d8063c9 does not exist
Nov 22 09:45:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev feb23909-56fd-4244-a562-357bbf836155 does not exist
Nov 22 09:45:49 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d325f99f-9083-428b-a3fd-cd98673860d3 does not exist
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:45:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:45:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:45:49 compute-0 sudo[397162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:49 compute-0 sudo[397162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397162]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 sudo[397187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:45:49 compute-0 sudo[397187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397187]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:49 compute-0 sudo[397212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:49 compute-0 sudo[397212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:49 compute-0 sudo[397212]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:50 compute-0 sudo[397237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:45:50 compute-0 sudo[397237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 117 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 09:45:50 compute-0 nova_compute[253661]: 2025-11-22 09:45:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:50 compute-0 nova_compute[253661]: 2025-11-22 09:45:50.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:45:50 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.424400355 +0000 UTC m=+0.043427044 container create b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:45:50 compute-0 systemd[1]: Started libpod-conmon-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope.
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.405892657 +0000 UTC m=+0.024919356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.524208789 +0000 UTC m=+0.143235478 container init b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.534247827 +0000 UTC m=+0.153274526 container start b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:45:50 compute-0 condescending_noyce[397317]: 167 167
Nov 22 09:45:50 compute-0 systemd[1]: libpod-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope: Deactivated successfully.
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.541389633 +0000 UTC m=+0.160416322 container attach b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:45:50 compute-0 conmon[397317]: conmon b9eae249f28d8cd89514 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope/container/memory.events
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.542705116 +0000 UTC m=+0.161731805 container died b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f097ba5c088ee27d0a1201d016b5d2d00cd064a2ef7aa12e4ea866a1d8e0eefd-merged.mount: Deactivated successfully.
Nov 22 09:45:50 compute-0 podman[397301]: 2025-11-22 09:45:50.589284495 +0000 UTC m=+0.208311174 container remove b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:45:50 compute-0 systemd[1]: libpod-conmon-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope: Deactivated successfully.
Nov 22 09:45:50 compute-0 podman[397341]: 2025-11-22 09:45:50.755425148 +0000 UTC m=+0.043586827 container create a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:45:50 compute-0 systemd[1]: Started libpod-conmon-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope.
Nov 22 09:45:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:50 compute-0 podman[397341]: 2025-11-22 09:45:50.737904675 +0000 UTC m=+0.026066384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:50 compute-0 podman[397341]: 2025-11-22 09:45:50.843229307 +0000 UTC m=+0.131390996 container init a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:45:50 compute-0 podman[397341]: 2025-11-22 09:45:50.849298536 +0000 UTC m=+0.137460215 container start a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:45:50 compute-0 podman[397341]: 2025-11-22 09:45:50.853166452 +0000 UTC m=+0.141328141 container attach a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:45:51 compute-0 ceph-mon[75021]: pgmap v2518: 305 pgs: 305 active+clean; 117 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 09:45:51 compute-0 nova_compute[253661]: 2025-11-22 09:45:51.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:51 compute-0 boring_brahmagupta[397356]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:45:51 compute-0 boring_brahmagupta[397356]: --> relative data size: 1.0
Nov 22 09:45:51 compute-0 boring_brahmagupta[397356]: --> All data devices are unavailable
Nov 22 09:45:51 compute-0 nova_compute[253661]: 2025-11-22 09:45:51.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:51 compute-0 systemd[1]: libpod-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope: Deactivated successfully.
Nov 22 09:45:51 compute-0 podman[397341]: 2025-11-22 09:45:51.908443229 +0000 UTC m=+1.196604908 container died a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21-merged.mount: Deactivated successfully.
Nov 22 09:45:51 compute-0 podman[397341]: 2025-11-22 09:45:51.970522393 +0000 UTC m=+1.258684072 container remove a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:45:51 compute-0 systemd[1]: libpod-conmon-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope: Deactivated successfully.
Nov 22 09:45:51 compute-0 sudo[397237]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:52 compute-0 sudo[397401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:52 compute-0 sudo[397401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:52 compute-0 sudo[397401]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:52 compute-0 sudo[397426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:45:52 compute-0 sudo[397426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:52 compute-0 sudo[397426]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:52 compute-0 sudo[397451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:52 compute-0 sudo[397451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 22 09:45:52 compute-0 sudo[397451]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:52 compute-0 sudo[397476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:45:52 compute-0 sudo[397476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:52 compute-0 nova_compute[253661]: 2025-11-22 09:45:52.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:45:52
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data']
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.575732037 +0000 UTC m=+0.047847103 container create bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:45:52 compute-0 systemd[1]: Started libpod-conmon-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope.
Nov 22 09:45:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.650802461 +0000 UTC m=+0.122917547 container init bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.556783769 +0000 UTC m=+0.028898865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.659865684 +0000 UTC m=+0.131980750 container start bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:52 compute-0 wizardly_dijkstra[397557]: 167 167
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.664214691 +0000 UTC m=+0.136329757 container attach bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:52 compute-0 systemd[1]: libpod-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope: Deactivated successfully.
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.665181266 +0000 UTC m=+0.137296352 container died bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 09:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-23fef82c69ace631f1a831194362a094e8191c5c4ca48a156aa946cc27d19f33-merged.mount: Deactivated successfully.
Nov 22 09:45:52 compute-0 podman[397541]: 2025-11-22 09:45:52.70584038 +0000 UTC m=+0.177955446 container remove bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:45:52 compute-0 systemd[1]: libpod-conmon-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope: Deactivated successfully.
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:45:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:45:52 compute-0 podman[397580]: 2025-11-22 09:45:52.866979068 +0000 UTC m=+0.037899216 container create 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:45:52 compute-0 systemd[1]: Started libpod-conmon-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope.
Nov 22 09:45:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:52 compute-0 podman[397580]: 2025-11-22 09:45:52.850728397 +0000 UTC m=+0.021648565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:52 compute-0 podman[397580]: 2025-11-22 09:45:52.950929771 +0000 UTC m=+0.121849939 container init 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:45:52 compute-0 podman[397580]: 2025-11-22 09:45:52.95897042 +0000 UTC m=+0.129890578 container start 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:45:52 compute-0 podman[397580]: 2025-11-22 09:45:52.962518927 +0000 UTC m=+0.133439085 container attach 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:53 compute-0 ceph-mon[75021]: pgmap v2519: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229626796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.785 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.815 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804738.8142846, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.815 253665 INFO nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Stopped (Lifecycle Event)
Nov 22 09:45:53 compute-0 affectionate_germain[397598]: {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     "0": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "devices": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "/dev/loop3"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             ],
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_name": "ceph_lv0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_size": "21470642176",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "name": "ceph_lv0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "tags": {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_name": "ceph",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.crush_device_class": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.encrypted": "0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_id": "0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.vdo": "0"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             },
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "vg_name": "ceph_vg0"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         }
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     ],
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     "1": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "devices": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "/dev/loop4"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             ],
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_name": "ceph_lv1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_size": "21470642176",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "name": "ceph_lv1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "tags": {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_name": "ceph",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.crush_device_class": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.encrypted": "0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_id": "1",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.vdo": "0"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             },
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "vg_name": "ceph_vg1"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         }
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     ],
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     "2": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "devices": [
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "/dev/loop5"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             ],
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_name": "ceph_lv2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_size": "21470642176",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "name": "ceph_lv2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "tags": {
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.cluster_name": "ceph",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.crush_device_class": "",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.encrypted": "0",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osd_id": "2",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:                 "ceph.vdo": "0"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             },
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "type": "block",
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:             "vg_name": "ceph_vg2"
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:         }
Nov 22 09:45:53 compute-0 affectionate_germain[397598]:     ]
Nov 22 09:45:53 compute-0 affectionate_germain[397598]: }
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.843 253665 DEBUG nova.compute.manager [None req-dc53686c-0146-472f-bdab-e254b47e6830 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:45:53 compute-0 systemd[1]: libpod-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope: Deactivated successfully.
Nov 22 09:45:53 compute-0 podman[397580]: 2025-11-22 09:45:53.853515759 +0000 UTC m=+1.024435907 container died 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.868 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:45:53 compute-0 nova_compute[253661]: 2025-11-22 09:45:53.869 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128-merged.mount: Deactivated successfully.
Nov 22 09:45:53 compute-0 podman[397580]: 2025-11-22 09:45:53.921891007 +0000 UTC m=+1.092811145 container remove 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:45:53 compute-0 systemd[1]: libpod-conmon-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope: Deactivated successfully.
Nov 22 09:45:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:53 compute-0 sudo[397476]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:54 compute-0 sudo[397642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:54 compute-0 sudo[397642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:54 compute-0 sudo[397642]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.042 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3360MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:45:54 compute-0 sudo[397667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:45:54 compute-0 sudo[397667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:54 compute-0 sudo[397667]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:54 compute-0 sudo[397692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:54 compute-0 sudo[397692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:54 compute-0 sudo[397692]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.127 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e1b6c07e-b79f-4b39-a2b8-a952e54f4972 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.128 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.128 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:45:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 09:45:54 compute-0 sudo[397717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:45:54 compute-0 sudo[397717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.187 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:45:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4229626796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.524806285 +0000 UTC m=+0.045912195 container create c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:45:54 compute-0 systemd[1]: Started libpod-conmon-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope.
Nov 22 09:45:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.504782681 +0000 UTC m=+0.025888651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.60480505 +0000 UTC m=+0.125910980 container init c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.612824449 +0000 UTC m=+0.133930359 container start c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.616445118 +0000 UTC m=+0.137551118 container attach c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:45:54 compute-0 intelligent_wilson[397816]: 167 167
Nov 22 09:45:54 compute-0 systemd[1]: libpod-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope: Deactivated successfully.
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.621998285 +0000 UTC m=+0.143104195 container died c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:45:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:45:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690095993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31b9923edef04c69e0041813fab3f3ca3614c81c054bd6914110691100e0483-merged.mount: Deactivated successfully.
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.645 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.654 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:45:54 compute-0 podman[397800]: 2025-11-22 09:45:54.657387779 +0000 UTC m=+0.178493689 container remove c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:45:54 compute-0 systemd[1]: libpod-conmon-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope: Deactivated successfully.
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.681 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.718 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:45:54 compute-0 nova_compute[253661]: 2025-11-22 09:45:54.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:45:54 compute-0 podman[397842]: 2025-11-22 09:45:54.843376992 +0000 UTC m=+0.043679300 container create 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:45:54 compute-0 systemd[1]: Started libpod-conmon-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope.
Nov 22 09:45:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:45:54 compute-0 podman[397842]: 2025-11-22 09:45:54.825702085 +0000 UTC m=+0.026004413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:45:54 compute-0 podman[397842]: 2025-11-22 09:45:54.926238078 +0000 UTC m=+0.126540386 container init 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:45:54 compute-0 podman[397842]: 2025-11-22 09:45:54.942218702 +0000 UTC m=+0.142520990 container start 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:45:54 compute-0 podman[397842]: 2025-11-22 09:45:54.946079608 +0000 UTC m=+0.146381906 container attach 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:45:55 compute-0 nova_compute[253661]: 2025-11-22 09:45:55.046 253665 INFO nova.compute.manager [None req-2a338163-4df8-4940-b164-898080d580ed 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output
Nov 22 09:45:55 compute-0 nova_compute[253661]: 2025-11-22 09:45:55.052 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:45:55 compute-0 ceph-mon[75021]: pgmap v2520: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 09:45:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/690095993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:45:55 compute-0 ovn_controller[152872]: 2025-11-22T09:45:55Z|01535|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:55 compute-0 nova_compute[253661]: 2025-11-22 09:45:55.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:45:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:45:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:45:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:45:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:45:55 compute-0 ovn_controller[152872]: 2025-11-22T09:45:55Z|01536|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:55 compute-0 nova_compute[253661]: 2025-11-22 09:45:55.740 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:55 compute-0 exciting_cori[397858]: {
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_id": 1,
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "type": "bluestore"
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     },
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_id": 0,
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "type": "bluestore"
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     },
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_id": 2,
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:45:55 compute-0 exciting_cori[397858]:         "type": "bluestore"
Nov 22 09:45:55 compute-0 exciting_cori[397858]:     }
Nov 22 09:45:55 compute-0 exciting_cori[397858]: }
Nov 22 09:45:55 compute-0 systemd[1]: libpod-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope: Deactivated successfully.
Nov 22 09:45:55 compute-0 podman[397892]: 2025-11-22 09:45:55.947404043 +0000 UTC m=+0.021749717 container died 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2-merged.mount: Deactivated successfully.
Nov 22 09:45:56 compute-0 podman[397892]: 2025-11-22 09:45:56.011174288 +0000 UTC m=+0.085519942 container remove 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:45:56 compute-0 systemd[1]: libpod-conmon-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope: Deactivated successfully.
Nov 22 09:45:56 compute-0 sudo[397717]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:45:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:45:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1293a0ad-8670-482e-9e8a-acdf317ef2a4 does not exist
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5234f4eb-510d-4bdd-8c92-93dbe8abad78 does not exist
Nov 22 09:45:56 compute-0 sudo[397907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:45:56 compute-0 sudo[397907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:56 compute-0 sudo[397907]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:45:56 compute-0 sudo[397932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:45:56 compute-0 sudo[397932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:45:56 compute-0 sudo[397932]: pam_unix(sudo:session): session closed for user root
Nov 22 09:45:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.228 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8::f816:3eff:fe38:f0a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b8d092bb-b893-4593-9090-1acdc081ae18) old=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.229 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b8d092bb-b893-4593-9090-1acdc081ae18 in datapath b6b9221a-729b-4988-afa8-72f95360d9ea updated
Nov 22 09:45:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.230 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:56 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4f2cd3-9272-4fd2-92fd-e30f8c14d4f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:45:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:45:56 compute-0 nova_compute[253661]: 2025-11-22 09:45:56.719 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:56 compute-0 nova_compute[253661]: 2025-11-22 09:45:56.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:56 compute-0 nova_compute[253661]: 2025-11-22 09:45:56.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:56 compute-0 nova_compute[253661]: 2025-11-22 09:45:56.920 253665 INFO nova.compute.manager [None req-c862ba58-47ea-4467-8afc-4297b5472aa3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output
Nov 22 09:45:56 compute-0 nova_compute[253661]: 2025-11-22 09:45:56.925 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:45:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.018 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:19:23 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8239e99d-886b-4c86-bd64-ce377c2ec6f6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=47f379dc-c905-4e14-9660-00b10a28ef04) old=Port_Binding(mac=['fa:16:3e:45:19:23 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:45:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.020 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 47f379dc-c905-4e14-9660-00b10a28ef04 in datapath c6337497-985f-4f89-84be-8d10ca67dfa1 updated
Nov 22 09:45:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.021 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6337497-985f-4f89-84be-8d10ca67dfa1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:45:57 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.021 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98d13600-d0c0-4739-b20a-dedf83602143]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:45:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:45:57 compute-0 ceph-mon[75021]: pgmap v2521: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:45:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:58 compute-0 NetworkManager[48920]: <info>  [1763804758.2090] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Nov 22 09:45:58 compute-0 NetworkManager[48920]: <info>  [1763804758.2106] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:58 compute-0 ovn_controller[152872]: 2025-11-22T09:45:58Z|01537|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.626 253665 INFO nova.compute.manager [None req-1291d30d-9dcb-4021-8fea-8638cbf08b80 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.631 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 09:45:58 compute-0 nova_compute[253661]: 2025-11-22 09:45:58.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:45:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.972692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758972736, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 370, "num_deletes": 257, "total_data_size": 210915, "memory_usage": 218712, "flush_reason": "Manual Compaction"}
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758987699, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 209513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52056, "largest_seqno": 52425, "table_properties": {"data_size": 207189, "index_size": 424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5706, "raw_average_key_size": 18, "raw_value_size": 202516, "raw_average_value_size": 647, "num_data_blocks": 18, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804749, "oldest_key_time": 1763804749, "file_creation_time": 1763804758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15066 microseconds, and 1496 cpu microseconds.
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.987754) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 209513 bytes OK
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.987780) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990308) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990321) EVENT_LOG_v1 {"time_micros": 1763804758990318, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 208428, prev total WAL file size 208428, number of live WAL files 2.
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303035' seq:72057594037927935, type:22 .. '6C6F676D0032323538' seq:0, type:0; will stop at (end)
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(204KB)], [119(10023KB)]
Nov 22 09:45:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758990851, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10473241, "oldest_snapshot_seqno": -1}
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7288 keys, 10350214 bytes, temperature: kUnknown
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759151504, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10350214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10300623, "index_size": 30279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 190628, "raw_average_key_size": 26, "raw_value_size": 10169514, "raw_average_value_size": 1395, "num_data_blocks": 1179, "num_entries": 7288, "num_filter_entries": 7288, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.151836) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10350214 bytes
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.156901) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.2 rd, 64.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.8 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(99.4) write-amplify(49.4) OK, records in: 7814, records dropped: 526 output_compression: NoCompression
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.156923) EVENT_LOG_v1 {"time_micros": 1763804759156911, "job": 72, "event": "compaction_finished", "compaction_time_micros": 160755, "compaction_time_cpu_micros": 23834, "output_level": 6, "num_output_files": 1, "total_output_size": 10350214, "num_input_records": 7814, "num_output_records": 7288, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759157110, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759159489, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:45:59 compute-0 nova_compute[253661]: 2025-11-22 09:45:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:45:59 compute-0 ceph-mon[75021]: pgmap v2522: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:46:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 22 09:46:01 compute-0 ceph-mon[75021]: pgmap v2523: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 22 09:46:01 compute-0 podman[397958]: 2025-11-22 09:46:01.387845324 +0000 UTC m=+0.071981319 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 09:46:01 compute-0 podman[397959]: 2025-11-22 09:46:01.388197303 +0000 UTC m=+0.072661826 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG nova.compute.manager [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG nova.compute.manager [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.936 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.957 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.959 253665 INFO nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Terminating instance
Nov 22 09:46:01 compute-0 nova_compute[253661]: 2025-11-22 09:46:01.960 253665 DEBUG nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:46:02 compute-0 kernel: tap2bf46f44-05 (unregistering): left promiscuous mode
Nov 22 09:46:02 compute-0 NetworkManager[48920]: <info>  [1763804762.0276] device (tap2bf46f44-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 ovn_controller[152872]: 2025-11-22T09:46:02Z|01538|binding|INFO|Releasing lport 2bf46f44-05ff-4af4-ba41-f280a21be09e from this chassis (sb_readonly=0)
Nov 22 09:46:02 compute-0 ovn_controller[152872]: 2025-11-22T09:46:02Z|01539|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e down in Southbound
Nov 22 09:46:02 compute-0 ovn_controller[152872]: 2025-11-22T09:46:02Z|01540|binding|INFO|Removing iface tap2bf46f44-05 ovn-installed in OVS
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.046 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:a9:1c 10.100.0.9'], port_security=['fa:16:3e:ec:a9:1c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf4ccd55-5049-48da-a040-7bc492278d9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bf69086-9ee8-4131-a2f6-8ce3890c821e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bf46f44-05ff-4af4-ba41-f280a21be09e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.048 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf46f44-05ff-4af4-ba41-f280a21be09e in datapath 32b06b6f-2dbe-45a6-a0ed-07f342aa967b unbound from our chassis
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32b06b6f-2dbe-45a6-a0ed-07f342aa967b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.052 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb4d291-74ca-4c97-aefe-a6ee343fe257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.053 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b namespace which is not needed anymore
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 22 09:46:02 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 13.290s CPU time.
Nov 22 09:46:02 compute-0 systemd-machined[215941]: Machine qemu-172-instance-0000008d terminated.
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 30 KiB/s wr, 1 op/s
Nov 22 09:46:02 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : haproxy version is 2.8.14-c23fe91
Nov 22 09:46:02 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : path to executable is /usr/sbin/haproxy
Nov 22 09:46:02 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [WARNING]  (396661) : Exiting Master process...
Nov 22 09:46:02 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [ALERT]    (396661) : Current worker (396663) exited with code 143 (Terminated)
Nov 22 09:46:02 compute-0 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [WARNING]  (396661) : All workers exited. Exiting... (0)
Nov 22 09:46:02 compute-0 systemd[1]: libpod-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope: Deactivated successfully.
Nov 22 09:46:02 compute-0 podman[398021]: 2025-11-22 09:46:02.19155953 +0000 UTC m=+0.050289083 container died 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.202 253665 INFO nova.virt.libvirt.driver [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance destroyed successfully.
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.203 253665 DEBUG nova.objects.instance [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322-userdata-shm.mount: Deactivated successfully.
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.223 253665 DEBUG nova.virt.libvirt.vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:45:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:45:34Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.223 253665 DEBUG nova.network.os_vif_util [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.224 253665 DEBUG nova.network.os_vif_util [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.225 253665 DEBUG os_vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.227 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf46f44-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-05d1c1e0ba78584dff2dec95748fc513d262bc386f559cea8bbfce0b30478ef5-merged.mount: Deactivated successfully.
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.234 253665 INFO os_vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05')
Nov 22 09:46:02 compute-0 podman[398021]: 2025-11-22 09:46:02.239827582 +0000 UTC m=+0.098557135 container cleanup 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:46:02 compute-0 systemd[1]: libpod-conmon-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope: Deactivated successfully.
Nov 22 09:46:02 compute-0 podman[398078]: 2025-11-22 09:46:02.314357713 +0000 UTC m=+0.051168065 container remove 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.321 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[082e9408-4f17-425c-8df5-d45d4daefd29]: (4, ('Sat Nov 22 09:46:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b (990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322)\n990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322\nSat Nov 22 09:46:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b (990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322)\n990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[844ad775-4436-4da6-9105-62b3347775fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.325 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32b06b6f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 kernel: tap32b06b6f-20: left promiscuous mode
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[361989cd-acf9-442e-9c1f-74711d703c93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.359 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d225522e-cd11-421f-9840-8275343bb6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.361 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4d538e-2841-4831-bc64-6b769352dc5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.377 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36f01dd9-bd70-4274-9aa5-4f5a6d5ea575]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758106, 'reachable_time': 15612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398096, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.379 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:46:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d32b06b6f\x2d2dbe\x2d45a6\x2da0ed\x2d07f342aa967b.mount: Deactivated successfully.
Nov 22 09:46:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.379 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6c88735b-39f7-4420-88e0-f6950d95f0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.667 253665 INFO nova.virt.libvirt.driver [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deleting instance files /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_del
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.668 253665 INFO nova.virt.libvirt.driver [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deletion of /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_del complete
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 INFO nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 DEBUG oslo.service.loopingcall [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 DEBUG nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:46:02 compute-0 nova_compute[253661]: 2025-11-22 09:46:02.770 253665 DEBUG nova.network.neutron [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:46:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:46:03 compute-0 ceph-mon[75021]: pgmap v2524: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 30 KiB/s wr, 1 op/s
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.776 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.776 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.792 253665 DEBUG nova.network.neutron [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.833 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.849 253665 INFO nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 1.08 seconds to deallocate network for instance.
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.911 253665 DEBUG nova.compute.manager [req-1cc8c360-0aa9-4b88-af78-3776a3878bcf req-caaa8977-6731-4a2c-a071-eea85a1c44eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-deleted-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.963 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:03 compute-0 nova_compute[253661]: 2025-11-22 09:46:03.963 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.026 253665 DEBUG oslo_concurrency.processutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 WARNING nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state deleted and task_state None.
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 WARNING nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state deleted and task_state None.
Nov 22 09:46:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 23 op/s
Nov 22 09:46:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:46:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472057837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.462 253665 DEBUG oslo_concurrency.processutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.468 253665 DEBUG nova.compute.provider_tree [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.488 253665 DEBUG nova.scheduler.client.report [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.519 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.569 253665 INFO nova.scheduler.client.report [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance e1b6c07e-b79f-4b39-a2b8-a952e54f4972
Nov 22 09:46:04 compute-0 nova_compute[253661]: 2025-11-22 09:46:04.667 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8:0:1:f816:3eff:fe38:f0a2 2001:db8::f816:3eff:fe38:f0a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe38:f0a2/64 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b8d092bb-b893-4593-9090-1acdc081ae18) old=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8::f816:3eff:fe38:f0a2'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b8d092bb-b893-4593-9090-1acdc081ae18 in datapath b6b9221a-729b-4988-afa8-72f95360d9ea updated
Nov 22 09:46:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:46:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab645c66-0fc9-41ab-9bdb-7402f184531e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:05 compute-0 ceph-mon[75021]: pgmap v2525: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 23 op/s
Nov 22 09:46:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2472057837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 5.7 KiB/s wr, 23 op/s
Nov 22 09:46:06 compute-0 nova_compute[253661]: 2025-11-22 09:46:06.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:07 compute-0 nova_compute[253661]: 2025-11-22 09:46:07.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:07 compute-0 ceph-mon[75021]: pgmap v2526: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 5.7 KiB/s wr, 23 op/s
Nov 22 09:46:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 22 09:46:08 compute-0 podman[398120]: 2025-11-22 09:46:08.373672495 +0000 UTC m=+0.071284891 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 09:46:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:09 compute-0 ceph-mon[75021]: pgmap v2527: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 22 09:46:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 22 09:46:11 compute-0 ceph-mon[75021]: pgmap v2528: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 22 09:46:11 compute-0 nova_compute[253661]: 2025-11-22 09:46:11.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Nov 22 09:46:12 compute-0 nova_compute[253661]: 2025-11-22 09:46:12.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:46:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:46:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:46:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:46:13 compute-0 ceph-mon[75021]: pgmap v2529: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Nov 22 09:46:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:46:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:46:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 22 09:46:14 compute-0 ceph-mon[75021]: pgmap v2530: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 22 09:46:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Nov 22 09:46:16 compute-0 nova_compute[253661]: 2025-11-22 09:46:16.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:17 compute-0 nova_compute[253661]: 2025-11-22 09:46:17.199 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804762.198196, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:17 compute-0 nova_compute[253661]: 2025-11-22 09:46:17.199 253665 INFO nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Stopped (Lifecycle Event)
Nov 22 09:46:17 compute-0 nova_compute[253661]: 2025-11-22 09:46:17.225 253665 DEBUG nova.compute.manager [None req-cefc2a13-754c-4093-bee5-f64f99e3d4ac - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:17 compute-0 nova_compute[253661]: 2025-11-22 09:46:17.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:17 compute-0 ceph-mon[75021]: pgmap v2531: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Nov 22 09:46:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 853 B/s wr, 5 op/s
Nov 22 09:46:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:19 compute-0 ceph-mon[75021]: pgmap v2532: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 853 B/s wr, 5 op/s
Nov 22 09:46:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:21 compute-0 ceph-mon[75021]: pgmap v2533: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:21 compute-0 nova_compute[253661]: 2025-11-22 09:46:21.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:22 compute-0 nova_compute[253661]: 2025-11-22 09:46:22.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:23 compute-0 ceph-mon[75021]: pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:25 compute-0 ceph-mon[75021]: pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:26 compute-0 nova_compute[253661]: 2025-11-22 09:46:26.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:26 compute-0 nova_compute[253661]: 2025-11-22 09:46:26.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:27 compute-0 nova_compute[253661]: 2025-11-22 09:46:27.109 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:27 compute-0 nova_compute[253661]: 2025-11-22 09:46:27.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:27 compute-0 ceph-mon[75021]: pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.233 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.233 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.256 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.381 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.381 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.388 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.388 253665 INFO nova.compute.claims [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:46:28 compute-0 nova_compute[253661]: 2025-11-22 09:46:28.779 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:46:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520195221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.205 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.211 253665 DEBUG nova.compute.provider_tree [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.230 253665 DEBUG nova.scheduler.client.report [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:46:29 compute-0 ceph-mon[75021]: pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3520195221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.436 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.436 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.614 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.615 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.725 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.760 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.917 253665 DEBUG nova.policy [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.920 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.921 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.922 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating image(s)
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.942 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.961 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.978 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:29 compute-0 nova_compute[253661]: 2025-11-22 09:46:29.982 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.062 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.063 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.063 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.064 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.083 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.087 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.342 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.392 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.473 253665 DEBUG nova.objects.instance [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.487 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.487 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Ensure instance console log exists: /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:30 compute-0 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:31 compute-0 ceph-mon[75021]: pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 09:46:31 compute-0 nova_compute[253661]: 2025-11-22 09:46:31.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:31 compute-0 nova_compute[253661]: 2025-11-22 09:46:31.970 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Successfully created port: be2ad403-fc37-4e1b-a9b8-f0e116595caf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:46:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 59 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 749 KiB/s wr, 11 op/s
Nov 22 09:46:32 compute-0 nova_compute[253661]: 2025-11-22 09:46:32.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:32 compute-0 podman[398336]: 2025-11-22 09:46:32.359168836 +0000 UTC m=+0.055309677 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:46:32 compute-0 podman[398335]: 2025-11-22 09:46:32.379774186 +0000 UTC m=+0.077941106 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:46:33 compute-0 ceph-mon[75021]: pgmap v2539: 305 pgs: 305 active+clean; 59 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 749 KiB/s wr, 11 op/s
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.809 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Successfully updated port: be2ad403-fc37-4e1b-a9b8-f0e116595caf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.874 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.875 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.875 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.934 253665 DEBUG nova.compute.manager [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.935 253665 DEBUG nova.compute.manager [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:46:33 compute-0 nova_compute[253661]: 2025-11-22 09:46:33.935 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:34 compute-0 nova_compute[253661]: 2025-11-22 09:46:34.242 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:46:35 compute-0 ceph-mon[75021]: pgmap v2540: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.893 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.934 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.934 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance network_info: |[{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.935 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.935 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.938 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start _get_guest_xml network_info=[{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.941 253665 WARNING nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.947 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.948 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.953 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.957 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:46:36 compute-0 nova_compute[253661]: 2025-11-22 09:46:36.959 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:37 compute-0 ceph-mon[75021]: pgmap v2541: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:46:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324791487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.398 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.420 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.424 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:46:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864849740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.878 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.880 253665 DEBUG nova.virt.libvirt.vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:29Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.880 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.881 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.882 253665 DEBUG nova.objects.instance [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.894 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <uuid>63134c6f-fc14-4157-9874-e7c6227f8d0a</uuid>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <name>instance-0000008e</name>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-533516475</nova:name>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:46:36</nova:creationTime>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <nova:port uuid="be2ad403-fc37-4e1b-a9b8-f0e116595caf">
Nov 22 09:46:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feca:893" ipVersion="6"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feca:893" ipVersion="6"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="serial">63134c6f-fc14-4157-9874-e7c6227f8d0a</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="uuid">63134c6f-fc14-4157-9874-e7c6227f8d0a</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/63134c6f-fc14-4157-9874-e7c6227f8d0a_disk">
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config">
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:46:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:ca:08:93"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <target dev="tapbe2ad403-fc"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/console.log" append="off"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:46:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:46:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:46:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:46:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:46:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.894 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Preparing to wait for external event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.895 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG nova.virt.libvirt.vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:29Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.897 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.897 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.898 253665 DEBUG os_vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.899 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.899 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe2ad403-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.902 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe2ad403-fc, col_values=(('external_ids', {'iface-id': 'be2ad403-fc37-4e1b-a9b8-f0e116595caf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:08:93', 'vm-uuid': '63134c6f-fc14-4157-9874-e7c6227f8d0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:37 compute-0 NetworkManager[48920]: <info>  [1763804797.9046] manager: (tapbe2ad403-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.910 253665 INFO os_vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc')
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:ca:08:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Using config drive
Nov 22 09:46:37 compute-0 nova_compute[253661]: 2025-11-22 09:46:37.983 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1324791487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/864849740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.427 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.428 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.483 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating config drive at /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.492 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_9sg_v_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.643 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_9sg_v_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.682 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.687 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.866 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.867 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deleting local config drive /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config because it was imported into RBD.
Nov 22 09:46:38 compute-0 kernel: tapbe2ad403-fc: entered promiscuous mode
Nov 22 09:46:38 compute-0 NetworkManager[48920]: <info>  [1763804798.9222] manager: (tapbe2ad403-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Nov 22 09:46:38 compute-0 ovn_controller[152872]: 2025-11-22T09:46:38Z|01541|binding|INFO|Claiming lport be2ad403-fc37-4e1b-a9b8-f0e116595caf for this chassis.
Nov 22 09:46:38 compute-0 ovn_controller[152872]: 2025-11-22T09:46:38Z|01542|binding|INFO|be2ad403-fc37-4e1b-a9b8-f0e116595caf: Claiming fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.961 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], port_security=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feca:893/64 2001:db8::f816:3eff:feca:893/64', 'neutron:device_id': '63134c6f-fc14-4157-9874-e7c6227f8d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=be2ad403-fc37-4e1b-a9b8-f0e116595caf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.962 162862 INFO neutron.agent.ovn.metadata.agent [-] Port be2ad403-fc37-4e1b-a9b8-f0e116595caf in datapath b6b9221a-729b-4988-afa8-72f95360d9ea bound to our chassis
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.964 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 09:46:38 compute-0 systemd-machined[215941]: New machine qemu-173-instance-0000008e.
Nov 22 09:46:38 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008e.
Nov 22 09:46:38 compute-0 systemd-udevd[398521]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[798ba77f-5661-4a63-b1a4-9059f866358a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.978 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6b9221a-71 in ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.979 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6b9221a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f23af61a-b242-45cb-a132-1d355f6cbcb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[448a1106-fd1b-4fbc-9b82-97ef3e2e3128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:38 compute-0 NetworkManager[48920]: <info>  [1763804798.9880] device (tapbe2ad403-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:46:38 compute-0 NetworkManager[48920]: <info>  [1763804798.9891] device (tapbe2ad403-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:46:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.994 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[397ce3e6-730c-4459-989c-cf471caf0957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:38 compute-0 nova_compute[253661]: 2025-11-22 09:46:38.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 ovn_controller[152872]: 2025-11-22T09:46:39Z|01543|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf ovn-installed in OVS
Nov 22 09:46:39 compute-0 ovn_controller[152872]: 2025-11-22T09:46:39Z|01544|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf up in Southbound
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8918ce58-4d61-4bda-a517-76c3cc46163e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.039 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa94ee3-b2ce-47d6-82d1-dc9f53cb5471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 systemd-udevd[398526]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b483b14-a47e-4d36-9edb-01e45594cf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 NetworkManager[48920]: <info>  [1763804799.0477] manager: (tapb6b9221a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/627)
Nov 22 09:46:39 compute-0 podman[398507]: 2025-11-22 09:46:39.054296739 +0000 UTC m=+0.100504023 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.076 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a756565f-fc19-4188-aaad-d334200d58b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.079 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf5df24-b80d-4041-9fa9-c11ea7f66a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 NetworkManager[48920]: <info>  [1763804799.0977] device (tapb6b9221a-70): carrier: link connected
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4a1ba-b6bc-4d34-a682-1c325f88e466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.121 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94672775-2453-4c4a-9d40-a08db899a684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398569, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.139 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc38b0e-1518-4a17-9c70-ed50f9b16bce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:f0a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764983, 'tstamp': 764983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398570, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632baa45-1ecc-4cb3-9cd5-8a295eb2d684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398571, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21759096-2cc4-46ad-bc54-b5e54df510f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.244 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c573f1f7-13d6-40ed-8f85-0d9d514c76cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.245 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.245 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.246 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 kernel: tapb6b9221a-70: entered promiscuous mode
Nov 22 09:46:39 compute-0 NetworkManager[48920]: <info>  [1763804799.2501] manager: (tapb6b9221a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.250 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.253 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:46:39 compute-0 ovn_controller[152872]: 2025-11-22T09:46:39Z|01545|binding|INFO|Releasing lport b8d092bb-b893-4593-9090-1acdc081ae18 from this chassis (sb_readonly=0)
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b86c5c7e-050a-4864-8cb2-3b21a07f8157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.255 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.255 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'env', 'PROCESS_TAG=haproxy-b6b9221a-729b-4988-afa8-72f95360d9ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6b9221a-729b-4988-afa8-72f95360d9ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.266 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.308 253665 DEBUG nova.compute.manager [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.308 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG nova.compute.manager [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Processing event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:46:39 compute-0 ceph-mon[75021]: pgmap v2542: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.430 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.465 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.466 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.483 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:39 compute-0 podman[398644]: 2025-11-22 09:46:39.598912417 +0000 UTC m=+0.050442826 container create effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.614 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.612993, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Started (Lifecycle Event)
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.617 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.620 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.627 253665 INFO nova.virt.libvirt.driver [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance spawned successfully.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.627 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:46:39 compute-0 systemd[1]: Started libpod-conmon-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.657 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:46:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.665 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.665 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.666 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.666 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.667 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.667 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:39 compute-0 podman[398644]: 2025-11-22 09:46:39.574055303 +0000 UTC m=+0.025585742 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd7c84ba3a988d7548558565c563363ed9d49bd874d1c96a511c8c8772c831a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:39 compute-0 podman[398644]: 2025-11-22 09:46:39.68324636 +0000 UTC m=+0.134776809 container init effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:46:39 compute-0 podman[398644]: 2025-11-22 09:46:39.690657812 +0000 UTC m=+0.142188221 container start effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.708 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.709 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.6145248, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.709 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Paused (Lifecycle Event)
Nov 22 09:46:39 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : New worker (398667) forked
Nov 22 09:46:39 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : Loading success.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.740 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.744 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.6196914, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Resumed (Lifecycle Event)
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.771 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.774 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.790 253665 INFO nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 9.87 seconds to spawn the instance on the hypervisor.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.790 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.797 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.874 253665 INFO nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 11.53 seconds to build instance.
Nov 22 09:46:39 compute-0 nova_compute[253661]: 2025-11-22 09:46:39.902 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:40 compute-0 ceph-mon[75021]: pgmap v2543: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.409 253665 DEBUG nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.411 253665 DEBUG nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.411 253665 WARNING nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received unexpected event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with vm_state active and task_state None.
Nov 22 09:46:41 compute-0 nova_compute[253661]: 2025-11-22 09:46:41.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 22 09:46:42 compute-0 NetworkManager[48920]: <info>  [1763804802.2067] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Nov 22 09:46:42 compute-0 NetworkManager[48920]: <info>  [1763804802.2074] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:42 compute-0 ovn_controller[152872]: 2025-11-22T09:46:42Z|01546|binding|INFO|Releasing lport b8d092bb-b893-4593-9090-1acdc081ae18 from this chassis (sb_readonly=0)
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.395 253665 DEBUG nova.compute.manager [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG nova.compute.manager [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.397 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:46:42 compute-0 nova_compute[253661]: 2025-11-22 09:46:42.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:43 compute-0 ceph-mon[75021]: pgmap v2544: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 22 09:46:43 compute-0 nova_compute[253661]: 2025-11-22 09:46:43.768 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:46:43 compute-0 nova_compute[253661]: 2025-11-22 09:46:43.769 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:43 compute-0 nova_compute[253661]: 2025-11-22 09:46:43.801 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.223 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.223 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.254 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:46:45 compute-0 ceph-mon[75021]: pgmap v2545: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.329 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.329 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.335 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.335 253665 INFO nova.compute.claims [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.447 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:46:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578900337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.907 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.914 253665 DEBUG nova.compute.provider_tree [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.928 253665 DEBUG nova.scheduler.client.report [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.960 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:45 compute-0 nova_compute[253661]: 2025-11-22 09:46:45.961 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.008 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.008 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.030 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.050 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:46:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.263 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.265 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.265 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating image(s)
Nov 22 09:46:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2578900337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.292 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.316 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.338 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.342 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.406 253665 DEBUG nova.policy [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15f54ba9d7eb4efd9b760da5c85ec22e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.446 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.448 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.448 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.468 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.474 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.857 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.925 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] resizing rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:46:46 compute-0 nova_compute[253661]: 2025-11-22 09:46:46.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.043 253665 DEBUG nova.objects.instance [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'migration_context' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.056 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.056 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ensure instance console log exists: /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:47 compute-0 ceph-mon[75021]: pgmap v2546: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.360 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Successfully created port: 88d574be-cb53-4693-a025-34a039ee625c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:46:47 compute-0 nova_compute[253661]: 2025-11-22 09:46:47.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.083 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.368 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Successfully updated port: 88d574be-cb53-4693-a025-34a039ee625c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.383 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.384 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.384 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.480 253665 DEBUG nova.compute.manager [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.481 253665 DEBUG nova.compute.manager [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.481 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:48 compute-0 nova_compute[253661]: 2025-11-22 09:46:48.519 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:46:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:49 compute-0 nova_compute[253661]: 2025-11-22 09:46:49.090 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:49 compute-0 nova_compute[253661]: 2025-11-22 09:46:49.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:49 compute-0 ceph-mon[75021]: pgmap v2547: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 09:46:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 941 KiB/s wr, 99 op/s
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.910 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.936 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance network_info: |[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.940 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start _get_guest_xml network_info=[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.944 253665 WARNING nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.949 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.950 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.958 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.959 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.959 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:46:50 compute-0 nova_compute[253661]: 2025-11-22 09:46:50.965 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:51 compute-0 ceph-mon[75021]: pgmap v2548: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 941 KiB/s wr, 99 op/s
Nov 22 09:46:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:46:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925495255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:51 compute-0 nova_compute[253661]: 2025-11-22 09:46:51.500 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:51 compute-0 nova_compute[253661]: 2025-11-22 09:46:51.539 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:51 compute-0 nova_compute[253661]: 2025-11-22 09:46:51.547 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:51.999 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:46:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473614015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.088 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.090 253665 DEBUG nova.virt.libvirt.vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:46Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.091 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.092 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.093 253665 DEBUG nova.objects.instance [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.117 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <uuid>91cfde9c-3aa6-4946-92d6-471c8f63eb2f</uuid>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <name>instance-0000008f</name>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:name>tempest-TestShelveInstance-server-140973884</nova:name>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:46:50</nova:creationTime>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:user uuid="15f54ba9d7eb4efd9b760da5c85ec22e">tempest-TestShelveInstance-463882348-project-member</nova:user>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:project uuid="2a86e5c3f3c34f2285b7958147f6bbd3">tempest-TestShelveInstance-463882348</nova:project>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <nova:port uuid="88d574be-cb53-4693-a025-34a039ee625c">
Nov 22 09:46:52 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <system>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="serial">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="uuid">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </system>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <os>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </os>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <features>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </features>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk">
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config">
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </source>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:46:52 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:88:cb:74"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <target dev="tap88d574be-cb"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log" append="off"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <video>
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </video>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:46:52 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:46:52 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:46:52 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:46:52 compute-0 nova_compute[253661]: </domain>
Nov 22 09:46:52 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.119 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Preparing to wait for external event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.121 253665 DEBUG nova.virt.libvirt.vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:46Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.122 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.122 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.123 253665 DEBUG os_vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.124 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.127 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d574be-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88d574be-cb, col_values=(('external_ids', {'iface-id': '88d574be-cb53-4693-a025-34a039ee625c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:cb:74', 'vm-uuid': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:52 compute-0 NetworkManager[48920]: <info>  [1763804812.1314] manager: (tap88d574be-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.140 253665 INFO os_vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 138 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.222 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.223 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.223 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No VIF found with MAC fa:16:3e:88:cb:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.224 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Using config drive
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.250 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:46:52
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:46:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3925495255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2473614015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.678 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.678 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.698 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:46:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.833 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating config drive at /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.839 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_c4e2qz2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:52 compute-0 nova_compute[253661]: 2025-11-22 09:46:52.983 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_c4e2qz2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.020 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.025 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.163 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99fc36cb-9a4c-4a60-8325-715974e22da5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c512420e-0b9a-4ee2-8cc1-60a3bae398ca) old=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.165 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c512420e-0b9a-4ee2-8cc1-60a3bae398ca in datapath 2420418b-f976-4644-88b8-5c9c24d72ca2 updated
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.168 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2420418b-f976-4644-88b8-5c9c24d72ca2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[004c1179-9af7-4ea3-9a82-0a0ea5be4e35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.205 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.207 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting local config drive /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config because it was imported into RBD.
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.264 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:53 compute-0 kernel: tap88d574be-cb: entered promiscuous mode
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.2690] manager: (tap88d574be-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|01547|binding|INFO|Claiming lport 88d574be-cb53-4693-a025-34a039ee625c for this chassis.
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|01548|binding|INFO|88d574be-cb53-4693-a025-34a039ee625c: Claiming fa:16:3e:88:cb:74 10.100.0.14
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.279 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.280 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 bound to our chassis
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.285 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|01549|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c ovn-installed in OVS
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|01550|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c up in Southbound
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf22108-3a2a-4db7-9823-a6875d7c38ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ceph-mon[75021]: pgmap v2549: 305 pgs: 305 active+clean; 138 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.303 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap449be411-41 in ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.307 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap449be411-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb0249f-f0f2-4d73-b285-aa53783cfe1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e8efb8-e0e2-4028-9e92-353468434247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:53 compute-0 systemd-machined[215941]: New machine qemu-174-instance-0000008f.
Nov 22 09:46:53 compute-0 systemd-udevd[399004]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.323 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ca433674-0899-4f0f-b12b-37152c90ec81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 09:46:53 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008f.
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.3412] device (tap88d574be-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.3448] device (tap88d574be-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eed014-2683-40ab-860f-6bf5ccdf604d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[484562ee-5185-4b82-a658-c0c7372b090c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.3966] manager: (tap449be411-40): new Veth device (/org/freedesktop/NetworkManager/Devices/633)
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab81a66-7da9-4457-918f-9822339756a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.446 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2815de9e-a9b6-4709-9461-394f1196c1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.451 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[32c50c5c-df82-44c7-8c58-6ec2b7ca414b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.4906] device (tap449be411-40): carrier: link connected
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.498 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[539f12ec-90c3-44fe-8721-b863c3c946cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.523 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac44893a-e308-432b-80e3-e17fc6d54e49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 443], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766422, 'reachable_time': 20612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399055, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.548 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aca67983-0ba6-4031-976e-00d4a8f10968]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:5a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766422, 'tstamp': 766422}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399056, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06369b4d-fe19-4143-ad99-062c74caf7d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 443], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766422, 'reachable_time': 20612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399057, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:08:93 10.100.0.6
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:08:93 10.100.0.6
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.613 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24800f5a-e9fa-4e8f-971e-a61f9f45a34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.687 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e26876fe-7e8d-4bb4-947e-de4a92ca32de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap449be411-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:53 compute-0 kernel: tap449be411-40: entered promiscuous mode
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:53 compute-0 NetworkManager[48920]: <info>  [1763804813.6922] manager: (tap449be411-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.694 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap449be411-40, col_values=(('external_ids', {'iface-id': '02bcb711-03d1-4bf4-b274-247c09a1af89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:53 compute-0 ovn_controller[152872]: 2025-11-22T09:46:53Z|01551|binding|INFO|Releasing lport 02bcb711-03d1-4bf4-b274-247c09a1af89 from this chassis (sb_readonly=0)
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.712 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c58441d8-6e6d-453e-8931-ffd7b5318aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.714 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:46:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.715 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'env', 'PROCESS_TAG=haproxy-449be411-464c-4d69-be15-6372ecacd778', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/449be411-464c-4d69-be15-6372ecacd778.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:46:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:46:53 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668907189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.763 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.858 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.859 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.862 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:46:53 compute-0 nova_compute[253661]: 2025-11-22 09:46:53.863 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:46:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.080 253665 DEBUG nova.compute.manager [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG nova.compute.manager [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Processing event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.086 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3366MB free_disk=59.940032958984375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.179 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.182 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1781473, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.182 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Started (Lifecycle Event)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 63134c6f-fc14-4157-9874-e7c6227f8d0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.187 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:46:54 compute-0 podman[399129]: 2025-11-22 09:46:54.187889851 +0000 UTC m=+0.055715817 container create 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.187 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.189 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:46:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 106 op/s
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.196 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.205 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance spawned successfully.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.205 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.210 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.217 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.229 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.230 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.234 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.235 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.236 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:46:54 compute-0 systemd[1]: Started libpod-conmon-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1783524, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Paused (Lifecycle Event)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.249 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:46:54 compute-0 podman[399129]: 2025-11-22 09:46:54.157243384 +0000 UTC m=+0.025069380 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:46:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e471b3ba63f09a57e7dc430fa813007eb58d33dbbf5330f5c3a4f5390b8ed3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.276 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.283 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.290 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1866298, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.290 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Resumed (Lifecycle Event)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.301 253665 INFO nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 8.04 seconds to spawn the instance on the hypervisor.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.301 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:54 compute-0 podman[399129]: 2025-11-22 09:46:54.30609764 +0000 UTC m=+0.173923686 container init 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.311 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:46:54 compute-0 podman[399129]: 2025-11-22 09:46:54.311846652 +0000 UTC m=+0.179672658 container start 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:46:54 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1668907189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.318 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:46:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : New worker (399151) forked
Nov 22 09:46:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : Loading success.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.343 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.394 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.405 253665 INFO nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 9.10 seconds to build instance.
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.425 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:46:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512643873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.808 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.815 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.834 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.858 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:46:54 compute-0 nova_compute[253661]: 2025-11-22 09:46:54.859 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:55 compute-0 ceph-mon[75021]: pgmap v2550: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 106 op/s
Nov 22 09:46:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1512643873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:46:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:46:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:46:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:46:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:46:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Nov 22 09:46:56 compute-0 sudo[399182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:56 compute-0 sudo[399182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:56 compute-0 sudo[399182]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:56 compute-0 sudo[399207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:46:56 compute-0 sudo[399207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:56 compute-0 sudo[399207]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:56 compute-0 sudo[399232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.444 253665 DEBUG nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.444 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.445 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 DEBUG nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:46:56 compute-0 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 WARNING nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state None.
Nov 22 09:46:56 compute-0 sudo[399232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:56 compute-0 sudo[399232]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:56 compute-0 sudo[399257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:46:56 compute-0 sudo[399257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:56 compute-0 ceph-mon[75021]: pgmap v2551: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:46:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:46:56 compute-0 sudo[399257]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:46:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:46:56 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:56 compute-0 sudo[399302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:56 compute-0 sudo[399302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:56 compute-0 sudo[399302]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 nova_compute[253661]: 2025-11-22 09:46:57.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:57 compute-0 sudo[399327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:46:57 compute-0 sudo[399327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399327]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 sudo[399352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:57 compute-0 sudo[399352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399352]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 nova_compute[253661]: 2025-11-22 09:46:57.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:46:57 compute-0 sudo[399377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:46:57 compute-0 sudo[399377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399377]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2592f2bc-30b4-4b0a-9408-27efdbb8b69b does not exist
Nov 22 09:46:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5facc5d3-2a21-41da-a18c-5c5e3283f372 does not exist
Nov 22 09:46:57 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5d3d7e95-7ec1-4c21-a2b6-1f24ccd4a2e2 does not exist
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:46:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:46:57 compute-0 sudo[399434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:57 compute-0 sudo[399434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399434]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 nova_compute[253661]: 2025-11-22 09:46:57.859 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:46:57 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:46:57 compute-0 sudo[399459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:46:57 compute-0 sudo[399459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399459]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:57 compute-0 sudo[399484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:46:57 compute-0 sudo[399484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:57 compute-0 sudo[399484]: pam_unix(sudo:session): session closed for user root
Nov 22 09:46:58 compute-0 sudo[399509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:46:58 compute-0 sudo[399509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:46:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.306808369 +0000 UTC m=+0.039722591 container create 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 09:46:58 compute-0 systemd[1]: Started libpod-conmon-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope.
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.289305337 +0000 UTC m=+0.022219579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:46:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.412482359 +0000 UTC m=+0.145396691 container init 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.422726902 +0000 UTC m=+0.155641124 container start 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.42713146 +0000 UTC m=+0.160045732 container attach 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:46:58 compute-0 hopeful_pike[399590]: 167 167
Nov 22 09:46:58 compute-0 systemd[1]: libpod-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope: Deactivated successfully.
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.433386944 +0000 UTC m=+0.166301196 container died 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d40256d28c4db50d655cf531a306ca1eb3d410f331b53ade1c4b95bf67ef68-merged.mount: Deactivated successfully.
Nov 22 09:46:58 compute-0 podman[399574]: 2025-11-22 09:46:58.47690647 +0000 UTC m=+0.209820702 container remove 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:46:58 compute-0 systemd[1]: libpod-conmon-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope: Deactivated successfully.
Nov 22 09:46:58 compute-0 nova_compute[253661]: 2025-11-22 09:46:58.625 253665 DEBUG nova.compute.manager [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:46:58 compute-0 nova_compute[253661]: 2025-11-22 09:46:58.626 253665 DEBUG nova.compute.manager [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:46:58 compute-0 nova_compute[253661]: 2025-11-22 09:46:58.626 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:46:58 compute-0 nova_compute[253661]: 2025-11-22 09:46:58.627 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:46:58 compute-0 nova_compute[253661]: 2025-11-22 09:46:58.627 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:46:58 compute-0 podman[399614]: 2025-11-22 09:46:58.674398796 +0000 UTC m=+0.049018431 container create 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:46:58 compute-0 systemd[1]: Started libpod-conmon-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope.
Nov 22 09:46:58 compute-0 podman[399614]: 2025-11-22 09:46:58.654971327 +0000 UTC m=+0.029590982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:46:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:46:58 compute-0 podman[399614]: 2025-11-22 09:46:58.81022021 +0000 UTC m=+0.184839865 container init 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:46:58 compute-0 podman[399614]: 2025-11-22 09:46:58.821690433 +0000 UTC m=+0.196310108 container start 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:46:58 compute-0 podman[399614]: 2025-11-22 09:46:58.826785929 +0000 UTC m=+0.201405574 container attach 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:46:58 compute-0 ceph-mon[75021]: pgmap v2552: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 22 09:46:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:46:59 compute-0 thirsty_ellis[399630]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:46:59 compute-0 thirsty_ellis[399630]: --> relative data size: 1.0
Nov 22 09:46:59 compute-0 thirsty_ellis[399630]: --> All data devices are unavailable
Nov 22 09:46:59 compute-0 systemd[1]: libpod-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Deactivated successfully.
Nov 22 09:46:59 compute-0 systemd[1]: libpod-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Consumed 1.068s CPU time.
Nov 22 09:46:59 compute-0 podman[399614]: 2025-11-22 09:46:59.957846178 +0000 UTC m=+1.332465813 container died 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946-merged.mount: Deactivated successfully.
Nov 22 09:47:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 22 09:47:00 compute-0 nova_compute[253661]: 2025-11-22 09:47:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:00 compute-0 podman[399614]: 2025-11-22 09:47:00.246920307 +0000 UTC m=+1.621539952 container remove 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:47:00 compute-0 systemd[1]: libpod-conmon-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Deactivated successfully.
Nov 22 09:47:00 compute-0 nova_compute[253661]: 2025-11-22 09:47:00.268 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:47:00 compute-0 nova_compute[253661]: 2025-11-22 09:47:00.268 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:00 compute-0 sudo[399509]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:00 compute-0 nova_compute[253661]: 2025-11-22 09:47:00.296 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:00 compute-0 sudo[399673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:47:00 compute-0 sudo[399673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:00 compute-0 sudo[399673]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:00 compute-0 sudo[399698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:47:00 compute-0 sudo[399698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:00 compute-0 sudo[399698]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:00 compute-0 sudo[399723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:47:00 compute-0 sudo[399723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:00 compute-0 sudo[399723]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:00 compute-0 sudo[399748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:47:00 compute-0 sudo[399748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:00 compute-0 podman[399813]: 2025-11-22 09:47:00.961624944 +0000 UTC m=+0.092682479 container create 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 09:47:00 compute-0 podman[399813]: 2025-11-22 09:47:00.898139457 +0000 UTC m=+0.029196992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:47:01 compute-0 systemd[1]: Started libpod-conmon-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope.
Nov 22 09:47:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:47:01 compute-0 podman[399813]: 2025-11-22 09:47:01.159673535 +0000 UTC m=+0.290731080 container init 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:47:01 compute-0 podman[399813]: 2025-11-22 09:47:01.167837026 +0000 UTC m=+0.298894551 container start 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:47:01 compute-0 goofy_goodall[399829]: 167 167
Nov 22 09:47:01 compute-0 systemd[1]: libpod-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope: Deactivated successfully.
Nov 22 09:47:01 compute-0 podman[399813]: 2025-11-22 09:47:01.268755959 +0000 UTC m=+0.399813484 container attach 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:47:01 compute-0 podman[399813]: 2025-11-22 09:47:01.269576239 +0000 UTC m=+0.400633764 container died 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:47:01 compute-0 ceph-mon[75021]: pgmap v2553: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 22 09:47:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99fc36cb-9a4c-4a60-8325-715974e22da5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c512420e-0b9a-4ee2-8cc1-60a3bae398ca) old=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:47:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.450 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c512420e-0b9a-4ee2-8cc1-60a3bae398ca in datapath 2420418b-f976-4644-88b8-5c9c24d72ca2 updated
Nov 22 09:47:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.453 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2420418b-f976-4644-88b8-5c9c24d72ca2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:47:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.454 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0d4c71-0b86-4971-a697-55bb5841ecdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-35dcf518a6cbd241e895628bf2316d01fc511d8c53d10870dd60bea615d00c68-merged.mount: Deactivated successfully.
Nov 22 09:47:01 compute-0 podman[399813]: 2025-11-22 09:47:01.896710804 +0000 UTC m=+1.027768319 container remove 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:47:01 compute-0 systemd[1]: libpod-conmon-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope: Deactivated successfully.
Nov 22 09:47:02 compute-0 nova_compute[253661]: 2025-11-22 09:47:02.044 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:02 compute-0 nova_compute[253661]: 2025-11-22 09:47:02.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:02 compute-0 podman[399853]: 2025-11-22 09:47:02.096192491 +0000 UTC m=+0.024535898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 139 op/s
Nov 22 09:47:02 compute-0 podman[399853]: 2025-11-22 09:47:02.216693026 +0000 UTC m=+0.145036403 container create 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:47:02 compute-0 systemd[1]: Started libpod-conmon-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope.
Nov 22 09:47:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:02 compute-0 podman[399853]: 2025-11-22 09:47:02.321508134 +0000 UTC m=+0.249851511 container init 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:47:02 compute-0 podman[399853]: 2025-11-22 09:47:02.32781692 +0000 UTC m=+0.256160277 container start 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 09:47:02 compute-0 podman[399853]: 2025-11-22 09:47:02.368525375 +0000 UTC m=+0.296868752 container attach 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:47:02 compute-0 ceph-mon[75021]: pgmap v2554: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 139 op/s
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011071142231555643 of space, bias 1.0, pg target 0.3321342669466693 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:47:02 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:47:03 compute-0 infallible_borg[399869]: {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     "0": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "devices": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "/dev/loop3"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             ],
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_name": "ceph_lv0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_size": "21470642176",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "name": "ceph_lv0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "tags": {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_name": "ceph",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.crush_device_class": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.encrypted": "0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_id": "0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.vdo": "0"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             },
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "vg_name": "ceph_vg0"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         }
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     ],
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     "1": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "devices": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "/dev/loop4"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             ],
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_name": "ceph_lv1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_size": "21470642176",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "name": "ceph_lv1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "tags": {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_name": "ceph",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.crush_device_class": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.encrypted": "0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_id": "1",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.vdo": "0"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             },
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "vg_name": "ceph_vg1"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         }
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     ],
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     "2": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "devices": [
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "/dev/loop5"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             ],
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_name": "ceph_lv2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_size": "21470642176",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "name": "ceph_lv2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "tags": {
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.cluster_name": "ceph",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.crush_device_class": "",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.encrypted": "0",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osd_id": "2",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:                 "ceph.vdo": "0"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             },
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "type": "block",
Nov 22 09:47:03 compute-0 infallible_borg[399869]:             "vg_name": "ceph_vg2"
Nov 22 09:47:03 compute-0 infallible_borg[399869]:         }
Nov 22 09:47:03 compute-0 infallible_borg[399869]:     ]
Nov 22 09:47:03 compute-0 infallible_borg[399869]: }
Nov 22 09:47:03 compute-0 systemd[1]: libpod-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope: Deactivated successfully.
Nov 22 09:47:03 compute-0 podman[399853]: 2025-11-22 09:47:03.11468559 +0000 UTC m=+1.043028947 container died 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28-merged.mount: Deactivated successfully.
Nov 22 09:47:03 compute-0 podman[399853]: 2025-11-22 09:47:03.225894265 +0000 UTC m=+1.154237622 container remove 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:47:03 compute-0 podman[399879]: 2025-11-22 09:47:03.234357944 +0000 UTC m=+0.091399827 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:47:03 compute-0 systemd[1]: libpod-conmon-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope: Deactivated successfully.
Nov 22 09:47:03 compute-0 sudo[399748]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:03 compute-0 podman[399885]: 2025-11-22 09:47:03.26211795 +0000 UTC m=+0.118161278 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:47:03 compute-0 sudo[399929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:47:03 compute-0 sudo[399929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:03 compute-0 sudo[399929]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:03 compute-0 sudo[399954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:47:03 compute-0 sudo[399954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:03 compute-0 sudo[399954]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:03 compute-0 sudo[399979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:47:03 compute-0 sudo[399979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:03 compute-0 sudo[399979]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:03 compute-0 sudo[400004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:47:03 compute-0 sudo[400004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.842371909 +0000 UTC m=+0.038084132 container create 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:47:03 compute-0 systemd[1]: Started libpod-conmon-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope.
Nov 22 09:47:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.917849013 +0000 UTC m=+0.113561256 container init 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.827802178 +0000 UTC m=+0.023514421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.924542037 +0000 UTC m=+0.120254260 container start 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.928788442 +0000 UTC m=+0.124500665 container attach 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 09:47:03 compute-0 nostalgic_heisenberg[400085]: 167 167
Nov 22 09:47:03 compute-0 systemd[1]: libpod-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope: Deactivated successfully.
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.930917115 +0000 UTC m=+0.126629348 container died 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b54562e4751b3371ada03443a59148331cf97b1783f7fb3ed9536a6c656d83-merged.mount: Deactivated successfully.
Nov 22 09:47:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:03 compute-0 podman[400070]: 2025-11-22 09:47:03.968442612 +0000 UTC m=+0.164154835 container remove 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:47:03 compute-0 systemd[1]: libpod-conmon-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope: Deactivated successfully.
Nov 22 09:47:04 compute-0 podman[400109]: 2025-11-22 09:47:04.145804931 +0000 UTC m=+0.039024815 container create c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:47:04 compute-0 systemd[1]: Started libpod-conmon-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope.
Nov 22 09:47:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 22 09:47:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:47:04 compute-0 podman[400109]: 2025-11-22 09:47:04.127971871 +0000 UTC m=+0.021191775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:47:04 compute-0 podman[400109]: 2025-11-22 09:47:04.237500505 +0000 UTC m=+0.130720419 container init c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:47:04 compute-0 podman[400109]: 2025-11-22 09:47:04.247345068 +0000 UTC m=+0.140564952 container start c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:47:04 compute-0 podman[400109]: 2025-11-22 09:47:04.250913306 +0000 UTC m=+0.144133220 container attach c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:47:05 compute-0 ceph-mon[75021]: pgmap v2555: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 22 09:47:05 compute-0 vigilant_jang[400125]: {
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_id": 1,
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "type": "bluestore"
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     },
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_id": 0,
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "type": "bluestore"
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     },
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_id": 2,
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:         "type": "bluestore"
Nov 22 09:47:05 compute-0 vigilant_jang[400125]:     }
Nov 22 09:47:05 compute-0 vigilant_jang[400125]: }
Nov 22 09:47:05 compute-0 systemd[1]: libpod-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Deactivated successfully.
Nov 22 09:47:05 compute-0 podman[400109]: 2025-11-22 09:47:05.379026653 +0000 UTC m=+1.272246557 container died c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:47:05 compute-0 systemd[1]: libpod-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Consumed 1.134s CPU time.
Nov 22 09:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5-merged.mount: Deactivated successfully.
Nov 22 09:47:05 compute-0 podman[400109]: 2025-11-22 09:47:05.52065804 +0000 UTC m=+1.413877924 container remove c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:47:05 compute-0 systemd[1]: libpod-conmon-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Deactivated successfully.
Nov 22 09:47:05 compute-0 sudo[400004]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:47:05 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:47:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:47:05 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:47:05 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 652ebff0-9c3e-46e3-ab94-ec377e25351f does not exist
Nov 22 09:47:05 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1b6d76a4-074c-46e2-baa6-94da4cee413f does not exist
Nov 22 09:47:05 compute-0 sudo[400172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:47:05 compute-0 sudo[400172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:05 compute-0 sudo[400172]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:05 compute-0 sudo[400197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:47:05 compute-0 sudo[400197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:47:05 compute-0 sudo[400197]: pam_unix(sudo:session): session closed for user root
Nov 22 09:47:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 460 KiB/s wr, 93 op/s
Nov 22 09:47:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:47:06 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:47:06 compute-0 ceph-mon[75021]: pgmap v2556: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 460 KiB/s wr, 93 op/s
Nov 22 09:47:07 compute-0 nova_compute[253661]: 2025-11-22 09:47:07.047 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:07 compute-0 ovn_controller[152872]: 2025-11-22T09:47:07Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:cb:74 10.100.0.14
Nov 22 09:47:07 compute-0 ovn_controller[152872]: 2025-11-22T09:47:07Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:cb:74 10.100.0.14
Nov 22 09:47:07 compute-0 nova_compute[253661]: 2025-11-22 09:47:07.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 22 09:47:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:09 compute-0 ceph-mon[75021]: pgmap v2557: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 22 09:47:09 compute-0 podman[400222]: 2025-11-22 09:47:09.403005478 +0000 UTC m=+0.091126571 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 09:47:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 09:47:11 compute-0 ceph-mon[75021]: pgmap v2558: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 09:47:12 compute-0 nova_compute[253661]: 2025-11-22 09:47:12.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:12 compute-0 nova_compute[253661]: 2025-11-22 09:47:12.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:47:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:47:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:47:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:47:13 compute-0 ceph-mon[75021]: pgmap v2559: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:47:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:47:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:14 compute-0 ceph-mon[75021]: pgmap v2560: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:14 compute-0 nova_compute[253661]: 2025-11-22 09:47:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:14 compute-0 nova_compute[253661]: 2025-11-22 09:47:14.744 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:14 compute-0 nova_compute[253661]: 2025-11-22 09:47:14.744 253665 INFO nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shelving
Nov 22 09:47:14 compute-0 nova_compute[253661]: 2025-11-22 09:47:14.769 253665 DEBUG nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 09:47:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:17 compute-0 kernel: tap88d574be-cb (unregistering): left promiscuous mode
Nov 22 09:47:17 compute-0 NetworkManager[48920]: <info>  [1763804837.0314] device (tap88d574be-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:47:17 compute-0 ovn_controller[152872]: 2025-11-22T09:47:17Z|01552|binding|INFO|Releasing lport 88d574be-cb53-4693-a025-34a039ee625c from this chassis (sb_readonly=0)
Nov 22 09:47:17 compute-0 ovn_controller[152872]: 2025-11-22T09:47:17Z|01553|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c down in Southbound
Nov 22 09:47:17 compute-0 ovn_controller[152872]: 2025-11-22T09:47:17Z|01554|binding|INFO|Removing iface tap88d574be-cb ovn-installed in OVS
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.090 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.093 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 unbound from our chassis
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.095 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 449be411-464c-4d69-be15-6372ecacd778, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.098 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d55b2a3-ccd8-452d-940e-b2801407b830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.099 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace which is not needed anymore
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:17 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 22 09:47:17 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Consumed 14.060s CPU time.
Nov 22 09:47:17 compute-0 systemd-machined[215941]: Machine qemu-174-instance-0000008f terminated.
Nov 22 09:47:17 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : haproxy version is 2.8.14-c23fe91
Nov 22 09:47:17 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : path to executable is /usr/sbin/haproxy
Nov 22 09:47:17 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [WARNING]  (399149) : Exiting Master process...
Nov 22 09:47:17 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [ALERT]    (399149) : Current worker (399151) exited with code 143 (Terminated)
Nov 22 09:47:17 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [WARNING]  (399149) : All workers exited. Exiting... (0)
Nov 22 09:47:17 compute-0 systemd[1]: libpod-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope: Deactivated successfully.
Nov 22 09:47:17 compute-0 conmon[399145]: conmon 1c25e6f021647f540dfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope/container/memory.events
Nov 22 09:47:17 compute-0 podman[400272]: 2025-11-22 09:47:17.241389642 +0000 UTC m=+0.046768875 container died 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:47:17 compute-0 ceph-mon[75021]: pgmap v2561: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 09:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec-userdata-shm.mount: Deactivated successfully.
Nov 22 09:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e471b3ba63f09a57e7dc430fa813007eb58d33dbbf5330f5c3a4f5390b8ed3-merged.mount: Deactivated successfully.
Nov 22 09:47:17 compute-0 podman[400272]: 2025-11-22 09:47:17.281043122 +0000 UTC m=+0.086422355 container cleanup 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:47:17 compute-0 systemd[1]: libpod-conmon-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope: Deactivated successfully.
Nov 22 09:47:17 compute-0 podman[400306]: 2025-11-22 09:47:17.34251216 +0000 UTC m=+0.041526197 container remove 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c55b06f-a662-47a0-ad90-2b72d038bdec]: (4, ('Sat Nov 22 09:47:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec)\n1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec\nSat Nov 22 09:47:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec)\n1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b9cf10-e215-4a39-9743-822194fc1469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.350 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:17 compute-0 kernel: tap449be411-40: left promiscuous mode
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.371 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c552472-d0ee-4309-9c23-b16bfc85accc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.380 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.380 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.387 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78076d3c-15fb-4615-9f52-f851283e8402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9424ad82-b556-4191-80a8-8e060f46871a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.389 253665 DEBUG nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.391 253665 DEBUG nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.391 253665 WARNING nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state shelving.
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.400 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.404 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b10da92-f1c9-4a14-a0ec-1de311e2c20e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766411, 'reachable_time': 30458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400331, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.407 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-449be411-464c-4d69-be15-6372ecacd778 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:47:17 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.407 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbce1ba-dc94-4c16-b13c-15f8242383eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d449be411\x2d464c\x2d4d69\x2dbe15\x2d6372ecacd778.mount: Deactivated successfully.
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.578 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.579 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.586 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.587 253665 INFO nova.compute.claims [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.787 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance shutdown successfully after 3 seconds.
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.791 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.792 253665 DEBUG nova.objects.instance [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:47:17 compute-0 nova_compute[253661]: 2025-11-22 09:47:17.808 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.227 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Beginning cold snapshot process
Nov 22 09:47:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:47:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883190485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.251 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.256 253665 DEBUG nova.compute.provider_tree [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.267 253665 DEBUG nova.scheduler.client.report [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:47:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2883190485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.289 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.290 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.431 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.431 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.437 253665 DEBUG nova.virt.libvirt.imagebackend [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.459 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.486 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.599 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.601 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.601 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating image(s)
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.619 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.641 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.663 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.666 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.708 253665 DEBUG nova.policy [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.714 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] creating snapshot(9d9a2ea52f604f159154dc1633b490bd) on rbd image(91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.748 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.748 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.749 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.749 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.770 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:18 compute-0 nova_compute[253661]: 2025-11-22 09:47:18.773 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.104 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.161 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.254 253665 DEBUG nova.objects.instance [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.271 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.271 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Ensure instance console log exists: /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 22 09:47:19 compute-0 ceph-mon[75021]: pgmap v2562: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 DEBUG nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 WARNING nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 09:47:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 22 09:47:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.603 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] cloning vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk@9d9a2ea52f604f159154dc1633b490bd to images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.787 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] flattening images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:47:19 compute-0 nova_compute[253661]: 2025-11-22 09:47:19.901 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Successfully created port: 1a443391-105a-4568-ba24-7748b702e21d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:47:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 197 KiB/s wr, 20 op/s
Nov 22 09:47:20 compute-0 nova_compute[253661]: 2025-11-22 09:47:20.483 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] removing snapshot(9d9a2ea52f604f159154dc1633b490bd) on rbd image(91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 09:47:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 22 09:47:20 compute-0 ceph-mon[75021]: osdmap e310: 3 total, 3 up, 3 in
Nov 22 09:47:20 compute-0 ceph-mon[75021]: pgmap v2564: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 197 KiB/s wr, 20 op/s
Nov 22 09:47:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 22 09:47:20 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 22 09:47:20 compute-0 nova_compute[253661]: 2025-11-22 09:47:20.601 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] creating snapshot(snap) on rbd image(7427dc9c-0c7d-45bc-9904-89241d5b4e4d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:47:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 22 09:47:21 compute-0 ceph-mon[75021]: osdmap e311: 3 total, 3 up, 3 in
Nov 22 09:47:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 22 09:47:21 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 22 09:47:22 compute-0 nova_compute[253661]: 2025-11-22 09:47:22.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:22 compute-0 nova_compute[253661]: 2025-11-22 09:47:22.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.3 MiB/s wr, 70 op/s
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:22 compute-0 nova_compute[253661]: 2025-11-22 09:47:22.927 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Successfully updated port: 1a443391-105a-4568-ba24-7748b702e21d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:47:23 compute-0 nova_compute[253661]: 2025-11-22 09:47:23.037 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:47:23 compute-0 nova_compute[253661]: 2025-11-22 09:47:23.037 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:47:23 compute-0 nova_compute[253661]: 2025-11-22 09:47:23.038 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:47:23 compute-0 ceph-mon[75021]: osdmap e312: 3 total, 3 up, 3 in
Nov 22 09:47:23 compute-0 ceph-mon[75021]: pgmap v2567: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.3 MiB/s wr, 70 op/s
Nov 22 09:47:23 compute-0 nova_compute[253661]: 2025-11-22 09:47:23.931 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:47:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 221 op/s
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.362 253665 DEBUG nova.compute.manager [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.363 253665 DEBUG nova.compute.manager [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.363 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.855 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Snapshot image upload complete
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.856 253665 DEBUG nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.968 253665 INFO nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shelve offloading
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.981 253665 DEBUG nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.985 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.985 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:47:24 compute-0 nova_compute[253661]: 2025-11-22 09:47:24.986 253665 DEBUG nova.network.neutron [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:47:25 compute-0 ceph-mon[75021]: pgmap v2568: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 221 op/s
Nov 22 09:47:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 181 op/s
Nov 22 09:47:26 compute-0 ceph-mon[75021]: pgmap v2569: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 181 op/s
Nov 22 09:47:27 compute-0 nova_compute[253661]: 2025-11-22 09:47:27.101 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:27 compute-0 nova_compute[253661]: 2025-11-22 09:47:27.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 153 op/s
Nov 22 09:47:28 compute-0 ceph-mon[75021]: pgmap v2570: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 153 op/s
Nov 22 09:47:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 22 09:47:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 22 09:47:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 22 09:47:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 150 op/s
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.628 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.708 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.709 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance network_info: |[{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.710 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.711 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.716 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start _get_guest_xml network_info=[{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.722 253665 WARNING nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.736 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.737 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.743 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.743 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.744 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.745 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.747 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.747 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.748 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.748 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.749 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.749 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.750 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:47:30 compute-0 nova_compute[253661]: 2025-11-22 09:47:30.755 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:31 compute-0 nova_compute[253661]: 2025-11-22 09:47:31.268 253665 DEBUG nova.network.neutron [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:31 compute-0 nova_compute[253661]: 2025-11-22 09:47:31.334 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.104 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 100 op/s
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.302 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804837.3013563, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.303 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Stopped (Lifecycle Event)
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.331 253665 DEBUG nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.338 253665 DEBUG nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:47:32 compute-0 nova_compute[253661]: 2025-11-22 09:47:32.399 253665 INFO nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 22 09:47:33 compute-0 podman[400672]: 2025-11-22 09:47:33.362621296 +0000 UTC m=+0.059419637 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 09:47:33 compute-0 podman[400673]: 2025-11-22 09:47:33.397636011 +0000 UTC m=+0.087739787 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:47:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 09:47:34 compute-0 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 09:47:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 09:47:37 compute-0 nova_compute[253661]: 2025-11-22 09:47:37.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:37 compute-0 nova_compute[253661]: 2025-11-22 09:47:37.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 409 B/s wr, 6 op/s
Nov 22 09:47:38 compute-0 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 09:47:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:47:40 compute-0 podman[400713]: 2025-11-22 09:47:40.39920092 +0000 UTC m=+0.089746967 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:47:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.7173 seconds
Nov 22 09:47:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:47:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432132538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:47:42 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.456047058s, txc = 0x56208096c000
Nov 22 09:47:42 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 12.455920219s
Nov 22 09:47:42 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 12.455920219s
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.124 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.219 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.225 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:42 compute-0 ceph-mon[75021]: osdmap e313: 3 total, 3 up, 3 in
Nov 22 09:47:42 compute-0 ceph-mon[75021]: pgmap v2572: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 150 op/s
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:47:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006889541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.781 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.782 253665 DEBUG nova.virt.libvirt.vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.783 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.784 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.785 253665 DEBUG nova.objects.instance [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.795 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <uuid>6973b14c-b2af-4012-9d0c-1e86b6eb3a28</uuid>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <name>instance-00000090</name>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-338385867</nova:name>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:47:30</nova:creationTime>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <nova:port uuid="1a443391-105a-4568-ba24-7748b702e21d">
Nov 22 09:47:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1b:f53" ipVersion="6"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe1b:f53" ipVersion="6"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <system>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="serial">6973b14c-b2af-4012-9d0c-1e86b6eb3a28</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="uuid">6973b14c-b2af-4012-9d0c-1e86b6eb3a28</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </system>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <os>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </os>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <features>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </features>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk">
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config">
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </source>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:47:42 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:1b:0f:53"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <target dev="tap1a443391-10"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/console.log" append="off"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <video>
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </video>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:47:42 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:47:42 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:47:42 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:47:42 compute-0 nova_compute[253661]: </domain>
Nov 22 09:47:42 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.796 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Preparing to wait for external event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.virt.libvirt.vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG os_vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.802 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a443391-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.803 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a443391-10, col_values=(('external_ids', {'iface-id': '1a443391-105a-4568-ba24-7748b702e21d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:0f:53', 'vm-uuid': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:42 compute-0 NetworkManager[48920]: <info>  [1763804862.8055] manager: (tap1a443391-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.814 253665 INFO os_vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10')
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:1b:0f:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.917 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Using config drive
Nov 22 09:47:42 compute-0 nova_compute[253661]: 2025-11-22 09:47:42.937 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2573: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 100 op/s
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2574: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2575: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2576: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 409 B/s wr, 6 op/s
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2577: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:47:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/432132538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:47:43 compute-0 ceph-mon[75021]: pgmap v2578: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:47:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4006889541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:47:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 09:47:44 compute-0 ceph-mon[75021]: pgmap v2579: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 09:47:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 09:47:46 compute-0 nova_compute[253661]: 2025-11-22 09:47:46.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:46 compute-0 nova_compute[253661]: 2025-11-22 09:47:46.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:47:46 compute-0 nova_compute[253661]: 2025-11-22 09:47:46.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:47:46 compute-0 nova_compute[253661]: 2025-11-22 09:47:46.250 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:47:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:47 compute-0 nova_compute[253661]: 2025-11-22 09:47:47.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:47 compute-0 nova_compute[253661]: 2025-11-22 09:47:47.225 253665 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.95 sec
Nov 22 09:47:47 compute-0 ceph-mon[75021]: pgmap v2580: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 09:47:47 compute-0 nova_compute[253661]: 2025-11-22 09:47:47.805 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 09:47:48 compute-0 ceph-mon[75021]: pgmap v2581: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.448 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.449 253665 DEBUG nova.objects.instance [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'resources' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.468 253665 DEBUG nova.virt.libvirt.vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.469 253665 DEBUG nova.network.os_vif_util [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.470 253665 DEBUG nova.network.os_vif_util [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.470 253665 DEBUG os_vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.472 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d574be-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.480 253665 INFO os_vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.681 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating config drive at /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.686 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4p_yv5nx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.728 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.729 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.730 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.730 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.733 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.734 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.769 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.838 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4p_yv5nx" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.864 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:47:49 compute-0 nova_compute[253661]: 2025-11-22 09:47:49.869 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:50 compute-0 nova_compute[253661]: 2025-11-22 09:47:50.002 253665 DEBUG nova.compute.manager [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:50 compute-0 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG nova.compute.manager [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:47:50 compute-0 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:47:50 compute-0 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:47:50 compute-0 nova_compute[253661]: 2025-11-22 09:47:50.004 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:47:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 09:47:50 compute-0 ceph-mon[75021]: pgmap v2582: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 09:47:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:51.528 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:47:51 compute-0 nova_compute[253661]: 2025-11-22 09:47:51.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:51 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:51.529 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:47:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:52 compute-0 nova_compute[253661]: 2025-11-22 09:47:52.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 8.7 KiB/s wr, 2 op/s
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:47:52
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images']
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:47:52 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:52.531 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:47:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:47:52 compute-0 ceph-mon[75021]: pgmap v2583: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 8.7 KiB/s wr, 2 op/s
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.074 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.075 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deleting local config drive /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config because it was imported into RBD.
Nov 22 09:47:53 compute-0 kernel: tap1a443391-10: entered promiscuous mode
Nov 22 09:47:53 compute-0 NetworkManager[48920]: <info>  [1763804873.1284] manager: (tap1a443391-10): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:53 compute-0 ovn_controller[152872]: 2025-11-22T09:47:53Z|01555|binding|INFO|Claiming lport 1a443391-105a-4568-ba24-7748b702e21d for this chassis.
Nov 22 09:47:53 compute-0 ovn_controller[152872]: 2025-11-22T09:47:53Z|01556|binding|INFO|1a443391-105a-4568-ba24-7748b702e21d: Claiming fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53
Nov 22 09:47:53 compute-0 ovn_controller[152872]: 2025-11-22T09:47:53Z|01557|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d ovn-installed in OVS
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:53 compute-0 systemd-udevd[400885]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:53 compute-0 systemd-machined[215941]: New machine qemu-175-instance-00000090.
Nov 22 09:47:53 compute-0 NetworkManager[48920]: <info>  [1763804873.1682] device (tap1a443391-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:47:53 compute-0 NetworkManager[48920]: <info>  [1763804873.1692] device (tap1a443391-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:47:53 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-00000090.
Nov 22 09:47:53 compute-0 ovn_controller[152872]: 2025-11-22T09:47:53Z|01558|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d up in Southbound
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.350 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], port_security=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fe1b:f53/64 2001:db8::f816:3eff:fe1b:f53/64', 'neutron:device_id': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a443391-105a-4568-ba24-7748b702e21d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.351 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a443391-105a-4568-ba24-7748b702e21d in datapath b6b9221a-729b-4988-afa8-72f95360d9ea bound to our chassis
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.353 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.367 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33197980-f24e-4253-8bc3-c434a8b2d363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a57cf19e-c4eb-4c85-b480-5425c0635876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.401 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd314488-f8b3-46a2-8267-a84d246d391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.449 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4591b1ed-29bd-4da7-87e8-ad68935a17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5964b7a-3027-4d72-a132-4b773a89fe84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400900, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[05d7db40-cd0c-4a2c-b827-ca5d370a7504]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764994, 'tstamp': 764994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400901, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764997, 'tstamp': 764997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400901, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:47:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.943 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:47:53 compute-0 nova_compute[253661]: 2025-11-22 09:47:53.944 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": null, "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap88d574be-cb", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:54 compute-0 nova_compute[253661]: 2025-11-22 09:47:54.216 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.9 KiB/s wr, 13 op/s
Nov 22 09:47:54 compute-0 nova_compute[253661]: 2025-11-22 09:47:54.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:55 compute-0 ceph-mon[75021]: pgmap v2584: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.9 KiB/s wr, 13 op/s
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.501 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804875.5006216, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.501 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Started (Lifecycle Event)
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.537 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804875.5008843, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.538 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Paused (Lifecycle Event)
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.560 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.563 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:47:55 compute-0 nova_compute[253661]: 2025-11-22 09:47:55.579 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:47:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:47:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:47:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:47:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:47:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.8 KiB/s wr, 12 op/s
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:47:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:47:56 compute-0 ceph-mon[75021]: pgmap v2585: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.8 KiB/s wr, 12 op/s
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.021 253665 DEBUG nova.compute.manager [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.023 253665 DEBUG nova.compute.manager [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Processing event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.023 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.027 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804877.0274258, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.028 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Resumed (Lifecycle Event)
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.030 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.034 253665 INFO nova.virt.libvirt.driver [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance spawned successfully.
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.034 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.050 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.056 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.059 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.061 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.061 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.092 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.351 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.376 253665 INFO nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 38.78 seconds to spawn the instance on the hypervisor.
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.376 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.377 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.377 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.418 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.418 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.639 253665 INFO nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 40.09 seconds to build instance.
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.694 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 40.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:47:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697904501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.869 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.963 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.964 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.969 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.970 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.974 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:57 compute-0 nova_compute[253661]: 2025-11-22 09:47:57.974 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.176 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.177 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3347MB free_disk=59.87614822387695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.177 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.178 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 21 KiB/s wr, 14 op/s
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.243 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 63134c6f-fc14-4157-9874-e7c6227f8d0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:47:58 compute-0 nova_compute[253661]: 2025-11-22 09:47:58.340 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:47:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/697904501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:47:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:47:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990508250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.040 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.047 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.063 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.134 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.174 253665 DEBUG nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.176 253665 WARNING nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received unexpected event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d for instance with vm_state active and task_state None.
Nov 22 09:47:59 compute-0 nova_compute[253661]: 2025-11-22 09:47:59.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:47:59 compute-0 ceph-mon[75021]: pgmap v2586: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 21 KiB/s wr, 14 op/s
Nov 22 09:47:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1990508250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 13 KiB/s wr, 49 op/s
Nov 22 09:48:00 compute-0 nova_compute[253661]: 2025-11-22 09:48:00.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:01 compute-0 nova_compute[253661]: 2025-11-22 09:48:01.129 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:01 compute-0 nova_compute[253661]: 2025-11-22 09:48:01.130 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:01 compute-0 ceph-mon[75021]: pgmap v2587: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 13 KiB/s wr, 49 op/s
Nov 22 09:48:01 compute-0 nova_compute[253661]: 2025-11-22 09:48:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:02 compute-0 nova_compute[253661]: 2025-11-22 09:48:02.172 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 92 op/s
Nov 22 09:48:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:02 compute-0 ceph-mon[75021]: pgmap v2588: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 92 op/s
Nov 22 09:48:02 compute-0 nova_compute[253661]: 2025-11-22 09:48:02.767 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting instance files /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del
Nov 22 09:48:02 compute-0 nova_compute[253661]: 2025-11-22 09:48:02.769 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deletion of /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del complete
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011106117120856939 of space, bias 1.0, pg target 0.33318351362570814 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:48:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:48:03 compute-0 nova_compute[253661]: 2025-11-22 09:48:03.314 253665 INFO nova.scheduler.client.report [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Deleted allocations for instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f
Nov 22 09:48:03 compute-0 nova_compute[253661]: 2025-11-22 09:48:03.912 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:03 compute-0 nova_compute[253661]: 2025-11-22 09:48:03.913 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.008 253665 DEBUG oslo_concurrency.processutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Nov 22 09:48:04 compute-0 podman[401010]: 2025-11-22 09:48:04.399348636 +0000 UTC m=+0.078834357 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:48:04 compute-0 podman[401011]: 2025-11-22 09:48:04.410741588 +0000 UTC m=+0.088767443 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002951629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.522 253665 DEBUG oslo_concurrency.processutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.530 253665 DEBUG nova.compute.provider_tree [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.567 253665 DEBUG nova.scheduler.client.report [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.647 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:04 compute-0 ceph-mon[75021]: pgmap v2589: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Nov 22 09:48:04 compute-0 nova_compute[253661]: 2025-11-22 09:48:04.759 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 50.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:05 compute-0 sudo[401047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:05 compute-0 sudo[401047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:05 compute-0 sudo[401047]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2002951629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:05 compute-0 sudo[401072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:48:05 compute-0 sudo[401072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:05 compute-0 sudo[401072]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:05 compute-0 sudo[401097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:05 compute-0 sudo[401097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:05 compute-0 sudo[401097]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:06 compute-0 sudo[401122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:48:06 compute-0 sudo[401122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 09:48:06 compute-0 sudo[401122]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b25f7685-ade7-43a4-95d9-3ecf1d48878f does not exist
Nov 22 09:48:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1ea8fd3f-ecc5-4fd2-a00d-eb40aaa3b22c does not exist
Nov 22 09:48:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5dd8e59b-5e79-4f98-a50a-caf64e08a137 does not exist
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:48:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:48:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:48:06 compute-0 sudo[401177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:06 compute-0 sudo[401177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:06 compute-0 sudo[401177]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:06 compute-0 sudo[401202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:48:06 compute-0 sudo[401202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:06 compute-0 sudo[401202]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:06 compute-0 sudo[401227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:06 compute-0 sudo[401227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:06 compute-0 sudo[401227]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:07 compute-0 sudo[401252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:48:07 compute-0 sudo[401252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:07 compute-0 ceph-mon[75021]: pgmap v2590: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:48:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.164 253665 DEBUG nova.compute.manager [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.167 253665 DEBUG nova.compute.manager [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.168 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.168 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.169 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:07 compute-0 nova_compute[253661]: 2025-11-22 09:48:07.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.438554644 +0000 UTC m=+0.026754422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.537956558 +0000 UTC m=+0.126156316 container create 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:48:07 compute-0 systemd[1]: Started libpod-conmon-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope.
Nov 22 09:48:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.694044442 +0000 UTC m=+0.282244200 container init 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.705463245 +0000 UTC m=+0.293663003 container start 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:48:07 compute-0 compassionate_mestorf[401330]: 167 167
Nov 22 09:48:07 compute-0 systemd[1]: libpod-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope: Deactivated successfully.
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.786776252 +0000 UTC m=+0.374976040 container attach 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:48:07 compute-0 podman[401314]: 2025-11-22 09:48:07.787834858 +0000 UTC m=+0.376034616 container died 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-503f5ffa285e7be60851db40ccbb20339112461cbb7d125faf80d128dabac895-merged.mount: Deactivated successfully.
Nov 22 09:48:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 09:48:08 compute-0 ceph-mon[75021]: pgmap v2591: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 09:48:09 compute-0 podman[401314]: 2025-11-22 09:48:09.144105909 +0000 UTC m=+1.732305677 container remove 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:48:09 compute-0 systemd[1]: libpod-conmon-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope: Deactivated successfully.
Nov 22 09:48:09 compute-0 podman[401354]: 2025-11-22 09:48:09.384146125 +0000 UTC m=+0.045104734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:09 compute-0 nova_compute[253661]: 2025-11-22 09:48:09.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:09 compute-0 podman[401354]: 2025-11-22 09:48:09.720949103 +0000 UTC m=+0.381907702 container create b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:48:10 compute-0 systemd[1]: Started libpod-conmon-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope.
Nov 22 09:48:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 88 op/s
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.273 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.274 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.274 253665 INFO nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Unshelving
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.351 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.352 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.356 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.356 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.363 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.374 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.382 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.383 253665 INFO nova.compute.claims [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.446 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:10 compute-0 nova_compute[253661]: 2025-11-22 09:48:10.549 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:11 compute-0 podman[401354]: 2025-11-22 09:48:11.044236868 +0000 UTC m=+1.705195467 container init b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:48:11 compute-0 podman[401354]: 2025-11-22 09:48:11.070168019 +0000 UTC m=+1.731126578 container start b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:48:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008884121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.142 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.150 253665 DEBUG nova.compute.provider_tree [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.162 253665 DEBUG nova.scheduler.client.report [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.183 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:11 compute-0 nova_compute[253661]: 2025-11-22 09:48:11.312 253665 INFO nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating port 88d574be-cb53-4693-a025-34a039ee625c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 22 09:48:11 compute-0 podman[401354]: 2025-11-22 09:48:11.988983097 +0000 UTC m=+2.649941766 container attach b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:11 compute-0 ceph-mon[75021]: pgmap v2592: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 88 op/s
Nov 22 09:48:12 compute-0 podman[401397]: 2025-11-22 09:48:12.200151282 +0000 UTC m=+0.878728060 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 09:48:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 293 KiB/s wr, 58 op/s
Nov 22 09:48:12 compute-0 nova_compute[253661]: 2025-11-22 09:48:12.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:48:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:48:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:48:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:48:12 compute-0 nova_compute[253661]: 2025-11-22 09:48:12.918 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:12 compute-0 nova_compute[253661]: 2025-11-22 09:48:12.919 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:12 compute-0 nova_compute[253661]: 2025-11-22 09:48:12.919 253665 DEBUG nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:48:13 compute-0 nova_compute[253661]: 2025-11-22 09:48:13.054 253665 DEBUG nova.compute.manager [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:13 compute-0 nova_compute[253661]: 2025-11-22 09:48:13.054 253665 DEBUG nova.compute.manager [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:13 compute-0 nova_compute[253661]: 2025-11-22 09:48:13.055 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:13 compute-0 magical_wozniak[401370]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:48:13 compute-0 magical_wozniak[401370]: --> relative data size: 1.0
Nov 22 09:48:13 compute-0 magical_wozniak[401370]: --> All data devices are unavailable
Nov 22 09:48:13 compute-0 systemd[1]: libpod-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Deactivated successfully.
Nov 22 09:48:13 compute-0 systemd[1]: libpod-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Consumed 1.126s CPU time.
Nov 22 09:48:13 compute-0 podman[401447]: 2025-11-22 09:48:13.393124889 +0000 UTC m=+0.028252888 container died b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:48:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.380 253665 DEBUG nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.397 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.399 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.400 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating image(s)
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.428 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.434 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.437 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.437 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.488 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2008884121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:14 compute-0 ceph-mon[75021]: pgmap v2593: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 293 KiB/s wr, 58 op/s
Nov 22 09:48:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:48:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.681 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.687 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "137b68f7209b57adc5bfe7c053ca10718182857d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.688 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "137b68f7209b57adc5bfe7c053ca10718182857d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d-merged.mount: Deactivated successfully.
Nov 22 09:48:14 compute-0 nova_compute[253661]: 2025-11-22 09:48:14.955 253665 DEBUG nova.virt.libvirt.imagebackend [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 09:48:15 compute-0 nova_compute[253661]: 2025-11-22 09:48:15.176 253665 DEBUG nova.virt.libvirt.imagebackend [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 09:48:15 compute-0 nova_compute[253661]: 2025-11-22 09:48:15.176 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] cloning images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d@snap to None/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:48:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 17 op/s
Nov 22 09:48:16 compute-0 nova_compute[253661]: 2025-11-22 09:48:16.236 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:16 compute-0 nova_compute[253661]: 2025-11-22 09:48:16.238 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:16 compute-0 nova_compute[253661]: 2025-11-22 09:48:16.250 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:16 compute-0 nova_compute[253661]: 2025-11-22 09:48:16.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:16 compute-0 ceph-mon[75021]: pgmap v2594: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 22 09:48:16 compute-0 podman[401447]: 2025-11-22 09:48:16.450626408 +0000 UTC m=+3.085754387 container remove b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:48:16 compute-0 systemd[1]: libpod-conmon-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Deactivated successfully.
Nov 22 09:48:16 compute-0 sudo[401252]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:16 compute-0 sudo[401582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:16 compute-0 sudo[401582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:16 compute-0 sudo[401582]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:16 compute-0 sudo[401607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:48:16 compute-0 sudo[401607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:16 compute-0 sudo[401607]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:16 compute-0 sudo[401632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:16 compute-0 sudo[401632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:16 compute-0 sudo[401632]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:16 compute-0 sudo[401657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:48:16 compute-0 sudo[401657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:17 compute-0 nova_compute[253661]: 2025-11-22 09:48:17.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:17 compute-0 podman[401723]: 2025-11-22 09:48:17.178878042 +0000 UTC m=+0.027043530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:18 compute-0 podman[401723]: 2025-11-22 09:48:18.197385981 +0000 UTC m=+1.045551469 container create f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 09:48:18 compute-0 ceph-mon[75021]: pgmap v2595: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 17 op/s
Nov 22 09:48:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 22 op/s
Nov 22 09:48:18 compute-0 systemd[1]: Started libpod-conmon-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope.
Nov 22 09:48:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:18 compute-0 podman[401723]: 2025-11-22 09:48:18.758253171 +0000 UTC m=+1.606418739 container init f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:48:18 compute-0 podman[401723]: 2025-11-22 09:48:18.769712074 +0000 UTC m=+1.617877562 container start f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:48:18 compute-0 gallant_easley[401743]: 167 167
Nov 22 09:48:18 compute-0 systemd[1]: libpod-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope: Deactivated successfully.
Nov 22 09:48:19 compute-0 podman[401723]: 2025-11-22 09:48:19.120401883 +0000 UTC m=+1.968567451 container attach f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:48:19 compute-0 podman[401723]: 2025-11-22 09:48:19.122146946 +0000 UTC m=+1.970312434 container died f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:19 compute-0 nova_compute[253661]: 2025-11-22 09:48:19.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:19 compute-0 ceph-mon[75021]: pgmap v2596: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 22 op/s
Nov 22 09:48:19 compute-0 nova_compute[253661]: 2025-11-22 09:48:19.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 264 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 22 09:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e76301868f54794deab97962d82984139fb90daca31db140d6d60a2ec6afc9-merged.mount: Deactivated successfully.
Nov 22 09:48:20 compute-0 nova_compute[253661]: 2025-11-22 09:48:20.715 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "137b68f7209b57adc5bfe7c053ca10718182857d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 6.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:20 compute-0 nova_compute[253661]: 2025-11-22 09:48:20.881 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'migration_context' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:20 compute-0 nova_compute[253661]: 2025-11-22 09:48:20.951 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] flattening vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:48:21 compute-0 podman[401723]: 2025-11-22 09:48:21.175113981 +0000 UTC m=+4.023279459 container remove f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:21 compute-0 systemd[1]: libpod-conmon-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope: Deactivated successfully.
Nov 22 09:48:21 compute-0 ceph-mon[75021]: pgmap v2597: 305 pgs: 305 active+clean; 264 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 22 09:48:21 compute-0 podman[401858]: 2025-11-22 09:48:21.394024396 +0000 UTC m=+0.026188318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:21 compute-0 podman[401858]: 2025-11-22 09:48:21.541693023 +0000 UTC m=+0.173856925 container create e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:21 compute-0 systemd[1]: Started libpod-conmon-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope.
Nov 22 09:48:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:22 compute-0 nova_compute[253661]: 2025-11-22 09:48:22.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 267 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 47 op/s
Nov 22 09:48:22 compute-0 podman[401858]: 2025-11-22 09:48:22.293696592 +0000 UTC m=+0.925860514 container init e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:22 compute-0 podman[401858]: 2025-11-22 09:48:22.301392312 +0000 UTC m=+0.933556204 container start e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:48:22 compute-0 podman[401858]: 2025-11-22 09:48:22.560851648 +0000 UTC m=+1.193015550 container attach e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:48:22 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:23 compute-0 ceph-mon[75021]: pgmap v2598: 305 pgs: 305 active+clean; 267 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 47 op/s
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]: {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     "0": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "devices": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "/dev/loop3"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             ],
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_name": "ceph_lv0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_size": "21470642176",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "name": "ceph_lv0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "tags": {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_name": "ceph",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.crush_device_class": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.encrypted": "0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_id": "0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.vdo": "0"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             },
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "vg_name": "ceph_vg0"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         }
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     ],
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     "1": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "devices": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "/dev/loop4"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             ],
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_name": "ceph_lv1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_size": "21470642176",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "name": "ceph_lv1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "tags": {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_name": "ceph",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.crush_device_class": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.encrypted": "0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_id": "1",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.vdo": "0"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             },
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "vg_name": "ceph_vg1"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         }
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     ],
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     "2": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "devices": [
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "/dev/loop5"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             ],
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_name": "ceph_lv2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_size": "21470642176",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "name": "ceph_lv2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "tags": {
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.cluster_name": "ceph",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.crush_device_class": "",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.encrypted": "0",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osd_id": "2",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:                 "ceph.vdo": "0"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             },
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "type": "block",
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:             "vg_name": "ceph_vg2"
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:         }
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]:     ]
Nov 22 09:48:23 compute-0 admiring_matsumoto[401875]: }
Nov 22 09:48:23 compute-0 systemd[1]: libpod-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope: Deactivated successfully.
Nov 22 09:48:23 compute-0 podman[401858]: 2025-11-22 09:48:23.511021241 +0000 UTC m=+2.143185143 container died e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:48:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 09:48:24 compute-0 nova_compute[253661]: 2025-11-22 09:48:24.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b-merged.mount: Deactivated successfully.
Nov 22 09:48:25 compute-0 ceph-mon[75021]: pgmap v2599: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 09:48:25 compute-0 podman[401858]: 2025-11-22 09:48:25.702883365 +0000 UTC m=+4.335047267 container remove e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:48:25 compute-0 sudo[401657]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:25 compute-0 systemd[1]: libpod-conmon-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope: Deactivated successfully.
Nov 22 09:48:25 compute-0 sudo[401896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:25 compute-0 sudo[401896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:25 compute-0 sudo[401896]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:25 compute-0 sudo[401921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:48:25 compute-0 sudo[401921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:25 compute-0 sudo[401921]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:25 compute-0 sudo[401946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:25 compute-0 sudo[401946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:25 compute-0 sudo[401946]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:26 compute-0 sudo[401971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:48:26 compute-0 sudo[401971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 71 op/s
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.372616343 +0000 UTC m=+0.026737141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.515783657 +0000 UTC m=+0.169904425 container create ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.579 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Image rbd:vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.582 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.583 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ensure instance console log exists: /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.586 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start _get_guest_xml network_info=[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:47:14Z,direct_url=<?>,disk_format='raw',id=7427dc9c-0c7d-45bc-9904-89241d5b4e4d,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-140973884-shelved',owner='2a86e5c3f3c34f2285b7958147f6bbd3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:47:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.592 253665 WARNING nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:48:26 compute-0 systemd[1]: Started libpod-conmon-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope.
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.603 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.604 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.607 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:47:14Z,direct_url=<?>,disk_format='raw',id=7427dc9c-0c7d-45bc-9904-89241d5b4e4d,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-140973884-shelved',owner='2a86e5c3f3c34f2285b7958147f6bbd3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:47:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:26 compute-0 nova_compute[253661]: 2025-11-22 09:48:26.633 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.890436029 +0000 UTC m=+0.544556877 container init ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.900716503 +0000 UTC m=+0.554837291 container start ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:48:26 compute-0 nifty_nash[402053]: 167 167
Nov 22 09:48:26 compute-0 systemd[1]: libpod-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope: Deactivated successfully.
Nov 22 09:48:26 compute-0 conmon[402053]: conmon ed85f4140273c1be404b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope/container/memory.events
Nov 22 09:48:26 compute-0 ceph-mon[75021]: pgmap v2600: 305 pgs: 305 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 71 op/s
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.957648989 +0000 UTC m=+0.611769787 container attach ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:48:26 compute-0 podman[402037]: 2025-11-22 09:48:26.958465918 +0000 UTC m=+0.612586686 container died ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:48:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:48:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2346501455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.151 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.179 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.184 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4334e4ed212c0f16f05b9b8692a0d0d63380db2af75dc02e67829d1c996b9892-merged.mount: Deactivated successfully.
Nov 22 09:48:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:48:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776956792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.690 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.692 253665 DEBUG nova.virt.libvirt.vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='7427dc9c-0c7d-45bc-9904-89241d5b4e4d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw
_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:10Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.693 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.694 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.696 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.710 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <uuid>91cfde9c-3aa6-4946-92d6-471c8f63eb2f</uuid>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <name>instance-0000008f</name>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:name>tempest-TestShelveInstance-server-140973884</nova:name>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:48:26</nova:creationTime>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:user uuid="15f54ba9d7eb4efd9b760da5c85ec22e">tempest-TestShelveInstance-463882348-project-member</nova:user>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:project uuid="2a86e5c3f3c34f2285b7958147f6bbd3">tempest-TestShelveInstance-463882348</nova:project>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="7427dc9c-0c7d-45bc-9904-89241d5b4e4d"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <nova:port uuid="88d574be-cb53-4693-a025-34a039ee625c">
Nov 22 09:48:27 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <system>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="serial">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="uuid">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </system>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <os>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </os>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <features>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </features>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk">
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config">
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </source>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:48:27 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:88:cb:74"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <target dev="tap88d574be-cb"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log" append="off"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <video>
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </video>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <input type="keyboard" bus="usb"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:48:27 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:48:27 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:48:27 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:48:27 compute-0 nova_compute[253661]: </domain>
Nov 22 09:48:27 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.711 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Preparing to wait for external event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.712 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.712 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.713 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.713 253665 DEBUG nova.virt.libvirt.vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='7427dc9c-0c7d-45bc-9904-89241d5b4e4d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:10Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.714 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.714 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.715 253665 DEBUG os_vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.716 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.716 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.719 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d574be-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88d574be-cb, col_values=(('external_ids', {'iface-id': '88d574be-cb53-4693-a025-34a039ee625c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:cb:74', 'vm-uuid': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:27 compute-0 NetworkManager[48920]: <info>  [1763804907.7231] manager: (tap88d574be-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.733 253665 INFO os_vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')
Nov 22 09:48:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:27 compute-0 ovn_controller[152872]: 2025-11-22T09:48:27Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:0f:53 10.100.0.8
Nov 22 09:48:27 compute-0 ovn_controller[152872]: 2025-11-22T09:48:27Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:0f:53 10.100.0.8
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.901 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.902 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.902 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No VIF found with MAC fa:16:3e:88:cb:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.903 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Using config drive
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.926 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:27 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 22 09:48:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:27.935899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:48:27 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 22 09:48:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804907935941, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1416, "num_deletes": 253, "total_data_size": 2146440, "memory_usage": 2187328, "flush_reason": "Manual Compaction"}
Nov 22 09:48:27 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 22 09:48:27 compute-0 nova_compute[253661]: 2025-11-22 09:48:27.952 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.991 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.005 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'keypairs' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908122750, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1285403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52426, "largest_seqno": 53841, "table_properties": {"data_size": 1280301, "index_size": 2370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13594, "raw_average_key_size": 21, "raw_value_size": 1269039, "raw_average_value_size": 1961, "num_data_blocks": 107, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804759, "oldest_key_time": 1763804759, "file_creation_time": 1763804907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 186897 microseconds, and 4576 cpu microseconds.
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:48:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 334 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 93 op/s
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.289 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating config drive at /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.294 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ervky0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2346501455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3776956792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:28 compute-0 podman[402037]: 2025-11-22 09:48:28.391675459 +0000 UTC m=+2.045796227 container remove ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.437 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ervky0t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:28 compute-0 systemd[1]: libpod-conmon-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope: Deactivated successfully.
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.122793) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1285403 bytes OK
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.122815) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615412) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615462) EVENT_LOG_v1 {"time_micros": 1763804908615451, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615489) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2140115, prev total WAL file size 2142652, number of live WAL files 2.
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.616878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323537' seq:0, type:0; will stop at (end)
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1255KB)], [122(10107KB)]
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908616955, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11635617, "oldest_snapshot_seqno": -1}
Nov 22 09:48:28 compute-0 podman[402169]: 2025-11-22 09:48:28.57919863 +0000 UTC m=+0.025419249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7476 keys, 9135298 bytes, temperature: kUnknown
Nov 22 09:48:28 compute-0 podman[402169]: 2025-11-22 09:48:28.76387914 +0000 UTC m=+0.210099759 container create 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908763638, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9135298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9087305, "index_size": 28169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 194807, "raw_average_key_size": 26, "raw_value_size": 8955844, "raw_average_value_size": 1197, "num_data_blocks": 1096, "num_entries": 7476, "num_filter_entries": 7476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.789 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:28 compute-0 nova_compute[253661]: 2025-11-22 09:48:28.797 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.764026) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9135298 bytes
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.839502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.3 rd, 62.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(16.2) write-amplify(7.1) OK, records in: 7935, records dropped: 459 output_compression: NoCompression
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.839545) EVENT_LOG_v1 {"time_micros": 1763804908839528, "job": 74, "event": "compaction_finished", "compaction_time_micros": 146784, "compaction_time_cpu_micros": 25687, "output_level": 6, "num_output_files": 1, "total_output_size": 9135298, "num_input_records": 7935, "num_output_records": 7476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908840044, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908842301, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.616430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:48:28 compute-0 systemd[1]: Started libpod-conmon-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope.
Nov 22 09:48:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:29 compute-0 podman[402169]: 2025-11-22 09:48:29.214835875 +0000 UTC m=+0.661056494 container init 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:48:29 compute-0 podman[402169]: 2025-11-22 09:48:29.223888758 +0000 UTC m=+0.670109397 container start 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:48:29 compute-0 podman[402169]: 2025-11-22 09:48:29.478301211 +0000 UTC m=+0.924521840 container attach 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:48:29 compute-0 ceph-mon[75021]: pgmap v2601: 305 pgs: 305 active+clean; 334 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 93 op/s
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.602 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.603 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.617 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.733 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.733 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.745 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.745 253665 INFO nova.compute.claims [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:48:29 compute-0 nova_compute[253661]: 2025-11-22 09:48:29.914 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 345 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 100 op/s
Nov 22 09:48:30 compute-0 eloquent_ride[402213]: {
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_id": 1,
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "type": "bluestore"
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     },
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_id": 0,
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "type": "bluestore"
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     },
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_id": 2,
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:         "type": "bluestore"
Nov 22 09:48:30 compute-0 eloquent_ride[402213]:     }
Nov 22 09:48:30 compute-0 eloquent_ride[402213]: }
Nov 22 09:48:30 compute-0 systemd[1]: libpod-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Deactivated successfully.
Nov 22 09:48:30 compute-0 systemd[1]: libpod-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Consumed 1.054s CPU time.
Nov 22 09:48:30 compute-0 podman[402169]: 2025-11-22 09:48:30.299629732 +0000 UTC m=+1.745850331 container died 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 09:48:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588911086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.359 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.366 253665 DEBUG nova.compute.provider_tree [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.379 253665 DEBUG nova.scheduler.client.report [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.402 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.403 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.443 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.444 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.473 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.495 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928-merged.mount: Deactivated successfully.
Nov 22 09:48:30 compute-0 ceph-mon[75021]: pgmap v2602: 305 pgs: 305 active+clean; 345 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 100 op/s
Nov 22 09:48:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/588911086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.603 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.605 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:48:30 compute-0 nova_compute[253661]: 2025-11-22 09:48:30.606 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating image(s)
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.170 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.198 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.225 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.230 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.271 253665 DEBUG nova.policy [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aff683c22adc499393a2037bae323af6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.308 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.309 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.309 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.310 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.331 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.335 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 73f1da2d-d075-455d-94dd-f10146df7d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:31 compute-0 podman[402169]: 2025-11-22 09:48:31.349589959 +0000 UTC m=+2.795810558 container remove 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:48:31 compute-0 systemd[1]: libpod-conmon-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Deactivated successfully.
Nov 22 09:48:31 compute-0 sudo[401971]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:48:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:48:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.492 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.493 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting local config drive /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config because it was imported into RBD.
Nov 22 09:48:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 497f58a1-1d12-4d32-ac68-d00a404ee57a does not exist
Nov 22 09:48:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 35dfff9a-c14e-4e0b-b2cb-8d0414c47bed does not exist
Nov 22 09:48:31 compute-0 sudo[402376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:48:31 compute-0 sudo[402376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:31 compute-0 sudo[402376]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:31 compute-0 kernel: tap88d574be-cb: entered promiscuous mode
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.5784] manager: (tap88d574be-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Nov 22 09:48:31 compute-0 ovn_controller[152872]: 2025-11-22T09:48:31Z|01559|binding|INFO|Claiming lport 88d574be-cb53-4693-a025-34a039ee625c for this chassis.
Nov 22 09:48:31 compute-0 ovn_controller[152872]: 2025-11-22T09:48:31Z|01560|binding|INFO|88d574be-cb53-4693-a025-34a039ee625c: Claiming fa:16:3e:88:cb:74 10.100.0.14
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.591 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.593 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 bound to our chassis
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.595 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:31 compute-0 ovn_controller[152872]: 2025-11-22T09:48:31Z|01561|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c ovn-installed in OVS
Nov 22 09:48:31 compute-0 ovn_controller[152872]: 2025-11-22T09:48:31Z|01562|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c up in Southbound
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cc995e-11c6-41ff-ae56-ed2d1568274d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.609 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap449be411-41 in ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.612 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap449be411-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.612 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[edcba10a-7ea2-4513-b59f-9cba98817aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.614 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba23774-6705-4d6d-85be-8a1d98eb3b7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.627 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7919bd15-ca38-4266-8a8c-8d830ccc4cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 systemd-udevd[402440]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:48:31 compute-0 systemd-machined[215941]: New machine qemu-176-instance-0000008f.
Nov 22 09:48:31 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Nov 22 09:48:31 compute-0 sudo[402414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:48:31 compute-0 sudo[402414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.655 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b65cf79f-40a4-4b4e-907b-433431ec83aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 sudo[402414]: pam_unix(sudo:session): session closed for user root
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.6617] device (tap88d574be-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.6628] device (tap88d574be-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.690 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb4af6e-fdf8-4589-900d-142a3e7d949e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.697 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc7c1a3-505c-484b-ad7b-beb795cb0ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.6990] manager: (tap449be411-40): new Veth device (/org/freedesktop/NetworkManager/Devices/639)
Nov 22 09:48:31 compute-0 systemd-udevd[402449]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.731 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61b46d8e-cc58-4507-aca1-d8fe9613a972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.735 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14d5fd3c-59d1-4b0b-ab59-bb121760b41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.7632] device (tap449be411-40): carrier: link connected
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.773 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3306891a-8766-4442-88dc-bf9122fe85a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6bee20c3-8f31-4f75-bd6b-063c6f7f4a65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776250, 'reachable_time': 36535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402476, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c49eb6e-c087-413c-b489-5f48952960ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:5a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776250, 'tstamp': 776250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402477, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.835 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 73f1da2d-d075-455d-94dd-f10146df7d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.846 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3afd4060-c728-4dd4-806a-076316b6080d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776250, 'reachable_time': 36535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402478, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d147129-9418-464b-9bfb-f1f1bc171b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.918 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] resizing rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.960 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0ad9dd-a07b-41a3-965b-4ae45be61ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.964 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.964 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.966 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap449be411-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:31 compute-0 NetworkManager[48920]: <info>  [1763804911.9694] manager: (tap449be411-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 22 09:48:31 compute-0 kernel: tap449be411-40: entered promiscuous mode
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.971 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap449be411-40, col_values=(('external_ids', {'iface-id': '02bcb711-03d1-4bf4-b274-247c09a1af89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:31 compute-0 ovn_controller[152872]: 2025-11-22T09:48:31Z|01563|binding|INFO|Releasing lport 02bcb711-03d1-4bf4-b274-247c09a1af89 from this chassis (sb_readonly=0)
Nov 22 09:48:31 compute-0 nova_compute[253661]: 2025-11-22 09:48:31.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.993 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97df544c-c1b0-4d1f-a4d4-1b55aa051fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.996 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 449be411-464c-4d69-be15-6372ecacd778
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:48:31 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.997 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'env', 'PROCESS_TAG=haproxy-449be411-464c-4d69-be15-6372ecacd778', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/449be411-464c-4d69-be15-6372ecacd778.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.030 253665 DEBUG nova.objects.instance [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'migration_context' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.069 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.070 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Ensure instance console log exists: /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.070 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.071 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.071 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 113 op/s
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.289 253665 DEBUG nova.compute.manager [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.291 253665 DEBUG nova.compute.manager [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Processing event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:48:32 compute-0 podman[402582]: 2025-11-22 09:48:32.43213937 +0000 UTC m=+0.064420692 container create b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:48:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:48:32 compute-0 ceph-mon[75021]: pgmap v2603: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 113 op/s
Nov 22 09:48:32 compute-0 systemd[1]: Started libpod-conmon-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope.
Nov 22 09:48:32 compute-0 podman[402582]: 2025-11-22 09:48:32.399577455 +0000 UTC m=+0.031858797 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:48:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf74c07a4a6889725f3f85eec25642fe7fcac08ba09ce686a69d19763d2ff09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:32 compute-0 podman[402582]: 2025-11-22 09:48:32.546448342 +0000 UTC m=+0.178729684 container init b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:48:32 compute-0 podman[402582]: 2025-11-22 09:48:32.553290311 +0000 UTC m=+0.185571633 container start b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 09:48:32 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : New worker (402644) forked
Nov 22 09:48:32 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : Loading success.
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.652 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.654 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.6522875, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.654 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Started (Lifecycle Event)
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.658 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.662 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance spawned successfully.
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.673 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.678 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.653604, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Paused (Lifecycle Event)
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.711 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.6583905, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.714 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Resumed (Lifecycle Event)
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.729 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.732 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.761 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:48:32 compute-0 nova_compute[253661]: 2025-11-22 09:48:32.811 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Successfully created port: 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:48:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 22 09:48:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 22 09:48:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.144 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.230 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 23.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 124 op/s
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.352 253665 DEBUG nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.352 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:34 compute-0 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 WARNING nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state None.
Nov 22 09:48:34 compute-0 ceph-mon[75021]: osdmap e314: 3 total, 3 up, 3 in
Nov 22 09:48:34 compute-0 ceph-mon[75021]: pgmap v2605: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 124 op/s
Nov 22 09:48:35 compute-0 nova_compute[253661]: 2025-11-22 09:48:35.016 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Successfully updated port: 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:48:35 compute-0 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:35 compute-0 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:35 compute-0 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:48:35 compute-0 nova_compute[253661]: 2025-11-22 09:48:35.185 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:48:35 compute-0 podman[402655]: 2025-11-22 09:48:35.391597787 +0000 UTC m=+0.068042460 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:48:35 compute-0 podman[402654]: 2025-11-22 09:48:35.41151163 +0000 UTC m=+0.090738102 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 09:48:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 196 op/s
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.339 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.366 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.367 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance network_info: |[{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.369 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start _get_guest_xml network_info=[{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.373 253665 WARNING nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.380 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.380 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.390 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.446 253665 DEBUG nova.compute.manager [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.446 253665 DEBUG nova.compute.manager [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing instance network info cache due to event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:48:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604616291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.853 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.879 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:36 compute-0 nova_compute[253661]: 2025-11-22 09:48:36.885 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:37 compute-0 ceph-mon[75021]: pgmap v2606: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 196 op/s
Nov 22 09:48:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/604616291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:48:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060518713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.354 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.356 253665 DEBUG nova.virt.libvirt.vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.357 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.358 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.359 253665 DEBUG nova.objects.instance [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'pci_devices' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <uuid>73f1da2d-d075-455d-94dd-f10146df7d30</uuid>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <name>instance-00000091</name>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:name>tempest-TestServerBasicOps-server-762381395</nova:name>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:48:36</nova:creationTime>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:user uuid="aff683c22adc499393a2037bae323af6">tempest-TestServerBasicOps-1909013265-project-member</nova:user>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:project uuid="ce09df5a051f4f24bbb216fbe5785dcb">tempest-TestServerBasicOps-1909013265</nova:project>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <nova:port uuid="8d1f2012-aa57-4dfc-a744-a852d1353ad2">
Nov 22 09:48:37 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <system>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="serial">73f1da2d-d075-455d-94dd-f10146df7d30</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="uuid">73f1da2d-d075-455d-94dd-f10146df7d30</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </system>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <os>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </os>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <features>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </features>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/73f1da2d-d075-455d-94dd-f10146df7d30_disk">
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/73f1da2d-d075-455d-94dd-f10146df7d30_disk.config">
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </source>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:48:37 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:43:7c:5c"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <target dev="tap8d1f2012-aa"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/console.log" append="off"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <video>
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </video>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:48:37 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:48:37 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:48:37 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:48:37 compute-0 nova_compute[253661]: </domain>
Nov 22 09:48:37 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Preparing to wait for external event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.377 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.378 253665 DEBUG nova.virt.libvirt.vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.378 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.379 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.379 253665 DEBUG os_vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.380 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d1f2012-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.390 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d1f2012-aa, col_values=(('external_ids', {'iface-id': '8d1f2012-aa57-4dfc-a744-a852d1353ad2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:7c:5c', 'vm-uuid': '73f1da2d-d075-455d-94dd-f10146df7d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:37 compute-0 NetworkManager[48920]: <info>  [1763804917.3927] manager: (tap8d1f2012-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.400 253665 INFO os_vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa')
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.458 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.459 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.460 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No VIF found with MAC fa:16:3e:43:7c:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.460 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Using config drive
Nov 22 09:48:37 compute-0 nova_compute[253661]: 2025-11-22 09:48:37.484 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.081 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating config drive at /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.087 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp50wf67b2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.139 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updated VIF entry in instance network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.140 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.155 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.247 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp50wf67b2" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.274 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.279 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1060518713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.464 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.466 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deleting local config drive /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config because it was imported into RBD.
Nov 22 09:48:38 compute-0 kernel: tap8d1f2012-aa: entered promiscuous mode
Nov 22 09:48:38 compute-0 NetworkManager[48920]: <info>  [1763804918.5516] manager: (tap8d1f2012-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Nov 22 09:48:38 compute-0 ovn_controller[152872]: 2025-11-22T09:48:38Z|01564|binding|INFO|Claiming lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 for this chassis.
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:38 compute-0 ovn_controller[152872]: 2025-11-22T09:48:38Z|01565|binding|INFO|8d1f2012-aa57-4dfc-a744-a852d1353ad2: Claiming fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.572 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:7c:5c 10.100.0.9'], port_security=['fa:16:3e:43:7c:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73f1da2d-d075-455d-94dd-f10146df7d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473e817e-09da-452b-aec0-d46546489b36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '134111d9-6c1c-466d-8bad-cdc68aa178a5 f24a4b05-cdda-49a7-af44-458e15bd9a13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf052f1b-89e1-46ee-8169-e44075a76fcb, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d1f2012-aa57-4dfc-a744-a852d1353ad2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 in datapath 473e817e-09da-452b-aec0-d46546489b36 bound to our chassis
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.577 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473e817e-09da-452b-aec0-d46546489b36
Nov 22 09:48:38 compute-0 ovn_controller[152872]: 2025-11-22T09:48:38Z|01566|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 ovn-installed in OVS
Nov 22 09:48:38 compute-0 ovn_controller[152872]: 2025-11-22T09:48:38Z|01567|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 up in Southbound
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07dee0ab-6220-4cef-b22b-d1dc661c857a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.593 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473e817e-01 in ovnmeta-473e817e-09da-452b-aec0-d46546489b36 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.596 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473e817e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29b9d308-55a1-414a-ae88-e50af46d117d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac026fae-3bb6-445a-9ae5-703d0264376a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 systemd-udevd[402825]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:48:38 compute-0 systemd-machined[215941]: New machine qemu-177-instance-00000091.
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.617 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a49010-2a7a-47d7-8b71-f7df8519bbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 NetworkManager[48920]: <info>  [1763804918.6242] device (tap8d1f2012-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:48:38 compute-0 NetworkManager[48920]: <info>  [1763804918.6252] device (tap8d1f2012-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:48:38 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000091.
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.648 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ab296e-d8c9-47f5-b0d0-b796e7809440]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.702 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4cba8b-52b1-4a62-b58d-a9669da4c4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 systemd-udevd[402831]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:48:38 compute-0 NetworkManager[48920]: <info>  [1763804918.7092] manager: (tap473e817e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a855f93b-1bc7-48e3-ac14-9489db07e916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.759 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce974b5f-8d24-4629-a172-43bba36b99d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.763 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0b92f34a-48eb-42f5-a4af-5ecbc6a69777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 NetworkManager[48920]: <info>  [1763804918.8003] device (tap473e817e-00): carrier: link connected
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.812 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4afd9e3-2376-4510-933a-be1366354bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a505fcf3-448f-41c6-b722-687904348cdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473e817e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:74:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776953, 'reachable_time': 43715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402860, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.848 253665 DEBUG nova.compute.manager [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.849 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.849 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.850 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:38 compute-0 nova_compute[253661]: 2025-11-22 09:48:38.850 253665 DEBUG nova.compute.manager [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Processing event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[108b0b4b-edcc-46df-af7a-311038647983]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:74ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776953, 'tstamp': 776953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402861, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.887 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[813521c5-8ad8-4216-8f05-41c05c17fd53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473e817e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:74:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776953, 'reachable_time': 43715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402862, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:38 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.933 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03053e16-b21f-45d7-bd13-2554b48c8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04c39bc0-f803-4c05-8320-15a2bc7a452c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473e817e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473e817e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:39 compute-0 kernel: tap473e817e-00: entered promiscuous mode
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:39 compute-0 NetworkManager[48920]: <info>  [1763804919.0205] manager: (tap473e817e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.026 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473e817e-00, col_values=(('external_ids', {'iface-id': 'fb48fac2-f19f-4ef4-a7bc-e07e49098585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:39 compute-0 ovn_controller[152872]: 2025-11-22T09:48:39Z|01568|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.052 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bc166c-4901-4721-9a8f-01ab034d8169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.053 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 473e817e-09da-452b-aec0-d46546489b36
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:48:39 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.054 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'env', 'PROCESS_TAG=haproxy-473e817e-09da-452b-aec0-d46546489b36', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473e817e-09da-452b-aec0-d46546489b36.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.140 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.139901, 73f1da2d-d075-455d-94dd-f10146df7d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.141 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Started (Lifecycle Event)
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.143 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.148 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.152 253665 INFO nova.virt.libvirt.driver [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance spawned successfully.
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.152 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.180 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.195 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.201 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.202 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.202 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.203 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.203 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.204 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.1400502, 73f1da2d-d075-455d-94dd-f10146df7d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Paused (Lifecycle Event)
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.263 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.267 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.1471126, 73f1da2d-d075-455d-94dd-f10146df7d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Resumed (Lifecycle Event)
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.273 253665 INFO nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 8.67 seconds to spawn the instance on the hypervisor.
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.273 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.282 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.286 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.334 253665 INFO nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 9.63 seconds to build instance.
Nov 22 09:48:39 compute-0 ceph-mon[75021]: pgmap v2607: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Nov 22 09:48:39 compute-0 nova_compute[253661]: 2025-11-22 09:48:39.348 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:39 compute-0 podman[402936]: 2025-11-22 09:48:39.476857694 +0000 UTC m=+0.069120357 container create e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:48:39 compute-0 systemd[1]: Started libpod-conmon-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope.
Nov 22 09:48:39 compute-0 podman[402936]: 2025-11-22 09:48:39.444043105 +0000 UTC m=+0.036305888 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:48:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b593cb210fa24aaae4778a6a8c263ac8ff16d9e064d9f8dea041fd87cb8a31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:48:39 compute-0 podman[402936]: 2025-11-22 09:48:39.574893685 +0000 UTC m=+0.167156378 container init e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:48:39 compute-0 podman[402936]: 2025-11-22 09:48:39.588091982 +0000 UTC m=+0.180354645 container start e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:48:39 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : New worker (402957) forked
Nov 22 09:48:39 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : Loading success.
Nov 22 09:48:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.940 253665 DEBUG nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:40 compute-0 nova_compute[253661]: 2025-11-22 09:48:40.942 253665 WARNING nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received unexpected event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with vm_state active and task_state None.
Nov 22 09:48:41 compute-0 ceph-mon[75021]: pgmap v2608: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.147 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.147 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.149 253665 INFO nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Terminating instance
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.150 253665 DEBUG nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:48:42 compute-0 kernel: tap1a443391-10 (unregistering): left promiscuous mode
Nov 22 09:48:42 compute-0 NetworkManager[48920]: <info>  [1763804922.2271] device (tap1a443391-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 ovn_controller[152872]: 2025-11-22T09:48:42Z|01569|binding|INFO|Releasing lport 1a443391-105a-4568-ba24-7748b702e21d from this chassis (sb_readonly=0)
Nov 22 09:48:42 compute-0 ovn_controller[152872]: 2025-11-22T09:48:42Z|01570|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d down in Southbound
Nov 22 09:48:42 compute-0 ovn_controller[152872]: 2025-11-22T09:48:42Z|01571|binding|INFO|Removing iface tap1a443391-10 ovn-installed in OVS
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 207 op/s
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.260 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], port_security=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fe1b:f53/64 2001:db8::f816:3eff:fe1b:f53/64', 'neutron:device_id': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a443391-105a-4568-ba24-7748b702e21d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.262 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a443391-105a-4568-ba24-7748b702e21d in datapath b6b9221a-729b-4988-afa8-72f95360d9ea unbound from our chassis
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.266 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 22 09:48:42 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Consumed 15.808s CPU time.
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[746f0df0-966c-4fe5-bdd9-436eec434158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 systemd-machined[215941]: Machine qemu-175-instance-00000090 terminated.
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.324 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e404c0f-5986-43e2-acd0-1b44409969f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.330 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a671407-1592-4dd0-a019-015bb6be0337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 podman[402966]: 2025-11-22 09:48:42.354147303 +0000 UTC m=+0.102430380 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.364 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[baaa86e0-96ea-4f55-9c75-ad58788e9597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.387 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f03b1737-ade6-4ac7-9d87-7d4214df7d6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403004, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.396 253665 INFO nova.virt.libvirt.driver [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance destroyed successfully.
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.396 253665 DEBUG nova.objects.instance [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.406 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47aaaa71-2d29-4620-8e8d-db7d0fb10bb8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764994, 'tstamp': 764994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403009, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764997, 'tstamp': 764997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403009, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.408 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.411 253665 DEBUG nova.virt.libvirt.vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:47:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:47:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.412 253665 DEBUG nova.network.os_vif_util [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.412 253665 DEBUG nova.network.os_vif_util [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.413 253665 DEBUG os_vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.414 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a443391-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.418 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.418 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.419 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.420 253665 INFO os_vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10')
Nov 22 09:48:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.420 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.818 253665 INFO nova.virt.libvirt.driver [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deleting instance files /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_del
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.819 253665 INFO nova.virt.libvirt.driver [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deletion of /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_del complete
Nov 22 09:48:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.871 253665 INFO nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG oslo.service.loopingcall [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:48:42 compute-0 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG nova.network.neutron [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:48:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 22 09:48:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 22 09:48:43 compute-0 nova_compute[253661]: 2025-11-22 09:48:43.033 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:43 compute-0 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:43 compute-0 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:43 compute-0 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:43 compute-0 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:43 compute-0 ceph-mon[75021]: pgmap v2609: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 207 op/s
Nov 22 09:48:43 compute-0 ceph-mon[75021]: osdmap e315: 3 total, 3 up, 3 in
Nov 22 09:48:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 299 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 192 op/s
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.649 253665 DEBUG nova.network.neutron [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.665 253665 INFO nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 1.79 seconds to deallocate network for instance.
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.712 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.713 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.739 253665 DEBUG nova.compute.manager [req-5884cfa9-152d-4f16-a08a-1c0a68f360a4 req-6a027da9-78d8-44bf-bc82-5d971da3c95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-deleted-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:44 compute-0 nova_compute[253661]: 2025-11-22 09:48:44.806 253665 DEBUG oslo_concurrency.processutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.114 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.115 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.115 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 WARNING nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received unexpected event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d for instance with vm_state deleted and task_state None.
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing instance network info cache due to event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876950918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.331 253665 DEBUG oslo_concurrency.processutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.336 253665 DEBUG nova.compute.provider_tree [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.350 253665 DEBUG nova.scheduler.client.report [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:45 compute-0 ceph-mon[75021]: pgmap v2611: 305 pgs: 305 active+clean; 299 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 192 op/s
Nov 22 09:48:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2876950918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.378 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.408 253665 INFO nova.scheduler.client.report [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 6973b14c-b2af-4012-9d0c-1e86b6eb3a28
Nov 22 09:48:45 compute-0 nova_compute[253661]: 2025-11-22 09:48:45.500 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.176 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.176 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:48:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 282 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 972 KiB/s wr, 155 op/s
Nov 22 09:48:46 compute-0 ceph-mon[75021]: pgmap v2612: 305 pgs: 305 active+clean; 282 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 972 KiB/s wr, 155 op/s
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.451 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.452 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.452 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.453 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.453 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.454 253665 INFO nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Terminating instance
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.456 253665 DEBUG nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:48:46 compute-0 ovn_controller[152872]: 2025-11-22T09:48:46Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:cb:74 10.100.0.14
Nov 22 09:48:46 compute-0 kernel: tapbe2ad403-fc (unregistering): left promiscuous mode
Nov 22 09:48:46 compute-0 NetworkManager[48920]: <info>  [1763804926.5330] device (tapbe2ad403-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:48:46 compute-0 ovn_controller[152872]: 2025-11-22T09:48:46Z|01572|binding|INFO|Releasing lport be2ad403-fc37-4e1b-a9b8-f0e116595caf from this chassis (sb_readonly=0)
Nov 22 09:48:46 compute-0 ovn_controller[152872]: 2025-11-22T09:48:46Z|01573|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf down in Southbound
Nov 22 09:48:46 compute-0 ovn_controller[152872]: 2025-11-22T09:48:46Z|01574|binding|INFO|Removing iface tapbe2ad403-fc ovn-installed in OVS
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.566 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], port_security=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feca:893/64 2001:db8::f816:3eff:feca:893/64', 'neutron:device_id': '63134c6f-fc14-4157-9874-e7c6227f8d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=be2ad403-fc37-4e1b-a9b8-f0e116595caf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.567 162862 INFO neutron.agent.ovn.metadata.agent [-] Port be2ad403-fc37-4e1b-a9b8-f0e116595caf in datapath b6b9221a-729b-4988-afa8-72f95360d9ea unbound from our chassis
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.574 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.575 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa17a8f-d1f6-4a37-a4bd-a6dd7e6ba554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.577 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea namespace which is not needed anymore
Nov 22 09:48:46 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 22 09:48:46 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Consumed 18.967s CPU time.
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 systemd-machined[215941]: Machine qemu-173-instance-0000008e terminated.
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.697 253665 INFO nova.virt.libvirt.driver [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance destroyed successfully.
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.698 253665 DEBUG nova.objects.instance [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.709 253665 DEBUG nova.virt.libvirt.vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:46:39Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.712 253665 DEBUG nova.network.os_vif_util [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.714 253665 DEBUG nova.network.os_vif_util [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.714 253665 DEBUG os_vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe2ad403-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.724 253665 INFO os_vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc')
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : haproxy version is 2.8.14-c23fe91
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : path to executable is /usr/sbin/haproxy
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : Exiting Master process...
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : Exiting Master process...
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [ALERT]    (398665) : Current worker (398667) exited with code 143 (Terminated)
Nov 22 09:48:46 compute-0 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : All workers exited. Exiting... (0)
Nov 22 09:48:46 compute-0 systemd[1]: libpod-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope: Deactivated successfully.
Nov 22 09:48:46 compute-0 podman[403079]: 2025-11-22 09:48:46.752406549 +0000 UTC m=+0.052688622 container died effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecd7c84ba3a988d7548558565c563363ed9d49bd874d1c96a511c8c8772c831a-merged.mount: Deactivated successfully.
Nov 22 09:48:46 compute-0 podman[403079]: 2025-11-22 09:48:46.797817451 +0000 UTC m=+0.098099524 container cleanup effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:48:46 compute-0 systemd[1]: libpod-conmon-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope: Deactivated successfully.
Nov 22 09:48:46 compute-0 podman[403133]: 2025-11-22 09:48:46.870666559 +0000 UTC m=+0.044604372 container remove effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[683e3d25-5780-472c-9489-4f3468679325]: (4, ('Sat Nov 22 09:48:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea (effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e)\neffa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e\nSat Nov 22 09:48:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea (effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e)\neffa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd1d444-4f50-4441-852d-cec1a069a847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 kernel: tapb6b9221a-70: left promiscuous mode
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaf7f8a-4dc4-4392-93d0-ee07f42aaced]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 nova_compute[253661]: 2025-11-22 09:48:46.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.912 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efac061d-d5cb-4214-bbcb-7e79067ed562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.914 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8698e341-5303-468f-bc68-b0335a59ae73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a2b2a3-d29f-4262-939b-ab8a8c6a5d97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764977, 'reachable_time': 21527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403148, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:46 compute-0 systemd[1]: run-netns-ovnmeta\x2db6b9221a\x2d729b\x2d4988\x2dafa8\x2d72f95360d9ea.mount: Deactivated successfully.
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.940 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:48:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.940 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e68af062-2da9-4c12-a4f7-2bd5b45e5c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.175 253665 INFO nova.virt.libvirt.driver [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deleting instance files /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a_del
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.176 253665 INFO nova.virt.libvirt.driver [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deletion of /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a_del complete
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.184 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updated VIF entry in instance network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.185 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.199 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.201 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.219 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.235 253665 INFO nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG oslo.service.loopingcall [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG nova.network.neutron [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.389 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.390 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:47 compute-0 nova_compute[253661]: 2025-11-22 09:48:47.390 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:48:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 171 op/s
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.403 253665 DEBUG nova.network.neutron [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.423 253665 INFO nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 1.19 seconds to deallocate network for instance.
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.461 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.461 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.537 253665 DEBUG oslo_concurrency.processutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.907 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.928 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:48 compute-0 nova_compute[253661]: 2025-11-22 09:48:48.929 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:48:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:49 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305776052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.049 253665 DEBUG oslo_concurrency.processutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.054 253665 DEBUG nova.compute.provider_tree [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.072 253665 DEBUG nova.scheduler.client.report [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.121 253665 INFO nova.scheduler.client.report [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 63134c6f-fc14-4157-9874-e7c6227f8d0a
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.181 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.297 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 WARNING nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received unexpected event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with vm_state deleted and task_state None.
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-deleted-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:49 compute-0 ceph-mon[75021]: pgmap v2613: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 171 op/s
Nov 22 09:48:49 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1305776052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.477 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.478 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.496 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.498 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:49 compute-0 nova_compute[253661]: 2025-11-22 09:48:49.498 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:48:50 compute-0 nova_compute[253661]: 2025-11-22 09:48:50.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 52 KiB/s wr, 186 op/s
Nov 22 09:48:51 compute-0 ceph-mon[75021]: pgmap v2614: 305 pgs: 305 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 52 KiB/s wr, 186 op/s
Nov 22 09:48:51 compute-0 nova_compute[253661]: 2025-11-22 09:48:51.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:52 compute-0 nova_compute[253661]: 2025-11-22 09:48:52.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:52 compute-0 nova_compute[253661]: 2025-11-22 09:48:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 149 op/s
Nov 22 09:48:52 compute-0 nova_compute[253661]: 2025-11-22 09:48:52.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:48:52
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:48:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:48:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:53 compute-0 nova_compute[253661]: 2025-11-22 09:48:53.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:53 compute-0 ceph-mon[75021]: pgmap v2615: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 149 op/s
Nov 22 09:48:53 compute-0 ovn_controller[152872]: 2025-11-22T09:48:53Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 09:48:53 compute-0 ovn_controller[152872]: 2025-11-22T09:48:53Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 09:48:54 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 573 KiB/s wr, 137 op/s
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.281 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.282 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.470 253665 DEBUG nova.compute.manager [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.471 253665 DEBUG nova.compute.manager [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.472 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.472 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.473 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.540 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.541 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.541 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.542 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.542 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.543 253665 INFO nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Terminating instance
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.544 253665 DEBUG nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:48:54 compute-0 kernel: tap88d574be-cb (unregistering): left promiscuous mode
Nov 22 09:48:54 compute-0 NetworkManager[48920]: <info>  [1763804934.5960] device (tap88d574be-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:48:54 compute-0 ovn_controller[152872]: 2025-11-22T09:48:54Z|01575|binding|INFO|Releasing lport 88d574be-cb53-4693-a025-34a039ee625c from this chassis (sb_readonly=0)
Nov 22 09:48:54 compute-0 ovn_controller[152872]: 2025-11-22T09:48:54Z|01576|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c down in Southbound
Nov 22 09:48:54 compute-0 ovn_controller[152872]: 2025-11-22T09:48:54Z|01577|binding|INFO|Removing iface tap88d574be-cb ovn-installed in OVS
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.604 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.605 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 unbound from our chassis
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.607 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 449be411-464c-4d69-be15-6372ecacd778, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.611 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0729055-77da-4d47-a14a-4e7b95d0a9b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.612 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace which is not needed anymore
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 22 09:48:54 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 14.700s CPU time.
Nov 22 09:48:54 compute-0 systemd-machined[215941]: Machine qemu-176-instance-0000008f terminated.
Nov 22 09:48:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : haproxy version is 2.8.14-c23fe91
Nov 22 09:48:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : path to executable is /usr/sbin/haproxy
Nov 22 09:48:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [ALERT]    (402638) : Current worker (402644) exited with code 143 (Terminated)
Nov 22 09:48:54 compute-0 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [WARNING]  (402638) : All workers exited. Exiting... (0)
Nov 22 09:48:54 compute-0 systemd[1]: libpod-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope: Deactivated successfully.
Nov 22 09:48:54 compute-0 podman[403214]: 2025-11-22 09:48:54.753563771 +0000 UTC m=+0.044917230 container died b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:48:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:54 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513861184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.783 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.784 253665 DEBUG nova.objects.instance [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'resources' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd-userdata-shm.mount: Deactivated successfully.
Nov 22 09:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf74c07a4a6889725f3f85eec25642fe7fcac08ba09ce686a69d19763d2ff09-merged.mount: Deactivated successfully.
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.799 253665 DEBUG nova.virt.libvirt.vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:48:34Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.800 253665 DEBUG nova.network.os_vif_util [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:48:54 compute-0 podman[403214]: 2025-11-22 09:48:54.801141066 +0000 UTC m=+0.092494515 container cleanup b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.801 253665 DEBUG nova.network.os_vif_util [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.802 253665 DEBUG os_vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.805 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d574be-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.809 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.811 253665 INFO os_vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')
Nov 22 09:48:54 compute-0 systemd[1]: libpod-conmon-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope: Deactivated successfully.
Nov 22 09:48:54 compute-0 podman[403254]: 2025-11-22 09:48:54.873256706 +0000 UTC m=+0.048501698 container remove b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cabffca-da77-4bef-abd8-ce09b52889d5]: (4, ('Sat Nov 22 09:48:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd)\nb300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd\nSat Nov 22 09:48:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd)\nb300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3615d060-226d-45bc-b8d0-97133dc9725b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:48:54 compute-0 kernel: tap449be411-40: left promiscuous mode
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8399c637-09d8-43d1-97ff-d041a2ea6b25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.906 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1185e2fa-8397-4521-9157-d514a15915ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20918a84-5242-4f69-8a37-97c6d3c91621]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c51cefd8-bb38-4f5e-a5d0-10924aa3c5bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776242, 'reachable_time': 38129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403289, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d449be411\x2d464c\x2d4d69\x2dbe15\x2d6372ecacd778.mount: Deactivated successfully.
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.929 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-449be411-464c-4d69-be15-6372ecacd778 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:48:54 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.929 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d05ff1ae-4081-4459-bd4e-9ef3a645489d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.931 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.931 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.937 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:48:54 compute-0 nova_compute[253661]: 2025-11-22 09:48:54.938 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.121 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.122 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3320MB free_disk=59.91588592529297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.122 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.123 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.224 253665 INFO nova.virt.libvirt.driver [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting instance files /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.225 253665 INFO nova.virt.libvirt.driver [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deletion of /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del complete
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.276 253665 INFO nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG oslo.service.loopingcall [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG nova.network.neutron [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.303 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 73f1da2d-d075-455d-94dd-f10146df7d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:48:55 compute-0 ceph-mon[75021]: pgmap v2616: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 573 KiB/s wr, 137 op/s
Nov 22 09:48:55 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2513861184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.511 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:48:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:48:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:48:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:48:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:48:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042091176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:55 compute-0 nova_compute[253661]: 2025-11-22 09:48:55.994 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.000 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.015 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.047 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.048 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 138 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 1.0 MiB/s wr, 142 op/s
Nov 22 09:48:56 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4042091176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:48:56 compute-0 nova_compute[253661]: 2025-11-22 09:48:56.611 253665 WARNING nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state deleting.
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:48:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.048 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.132 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.132 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.150 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:48:57 compute-0 ceph-mon[75021]: pgmap v2617: 305 pgs: 305 active+clean; 138 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 1.0 MiB/s wr, 142 op/s
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.378 253665 DEBUG nova.network.neutron [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.388 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804922.387406, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.388 253665 INFO nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Stopped (Lifecycle Event)
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.401 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 2.12 seconds to deallocate network for instance.
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.411 253665 DEBUG nova.compute.manager [None req-36ae3343-eb16-4e99-809f-9adc9d680151 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.453 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.453 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.467 253665 DEBUG nova.compute.manager [req-ebb9a721-03f9-4088-af7b-c4f0a9dcfe2b req-f8abd34f-ba66-4fc6-8d09-42314795a12d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-deleted-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:48:57 compute-0 nova_compute[253661]: 2025-11-22 09:48:57.519 253665 DEBUG oslo_concurrency.processutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:48:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:48:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:48:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879184415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.033 253665 DEBUG oslo_concurrency.processutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.038 253665 DEBUG nova.compute.provider_tree [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.053 253665 DEBUG nova.scheduler.client.report [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.072 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.106 253665 INFO nova.scheduler.client.report [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Deleted allocations for instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f
Nov 22 09:48:58 compute-0 nova_compute[253661]: 2025-11-22 09:48:58.159 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:48:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 140 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 847 KiB/s rd, 2.1 MiB/s wr, 151 op/s
Nov 22 09:48:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1879184415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:48:59 compute-0 nova_compute[253661]: 2025-11-22 09:48:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:48:59 compute-0 ceph-mon[75021]: pgmap v2618: 305 pgs: 305 active+clean; 140 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 847 KiB/s rd, 2.1 MiB/s wr, 151 op/s
Nov 22 09:48:59 compute-0 nova_compute[253661]: 2025-11-22 09:48:59.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 09:49:00 compute-0 ceph-mon[75021]: pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 09:49:01 compute-0 nova_compute[253661]: 2025-11-22 09:49:01.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:01 compute-0 ovn_controller[152872]: 2025-11-22T09:49:01Z|01578|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 09:49:01 compute-0 nova_compute[253661]: 2025-11-22 09:49:01.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:01 compute-0 nova_compute[253661]: 2025-11-22 09:49:01.692 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804926.6905787, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:01 compute-0 nova_compute[253661]: 2025-11-22 09:49:01.692 253665 INFO nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Stopped (Lifecycle Event)
Nov 22 09:49:01 compute-0 nova_compute[253661]: 2025-11-22 09:49:01.720 253665 DEBUG nova.compute.manager [None req-d77a7d9c-635c-4a9d-800c-57df3ab9eae1 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 22 09:49:02 compute-0 nova_compute[253661]: 2025-11-22 09:49:02.276 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:02.284 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:02 compute-0 ovn_controller[152872]: 2025-11-22T09:49:02Z|01579|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 09:49:02 compute-0 nova_compute[253661]: 2025-11-22 09:49:02.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007573653301426059 of space, bias 1.0, pg target 0.2272095990427818 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:49:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:49:03 compute-0 ceph-mon[75021]: pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 22 09:49:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 22 09:49:04 compute-0 nova_compute[253661]: 2025-11-22 09:49:04.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:05 compute-0 ceph-mon[75021]: pgmap v2621: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 22 09:49:06 compute-0 ovn_controller[152872]: 2025-11-22T09:49:06Z|01580|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 09:49:06 compute-0 nova_compute[253661]: 2025-11-22 09:49:06.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 22 09:49:06 compute-0 podman[403336]: 2025-11-22 09:49:06.401131432 +0000 UTC m=+0.080116829 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:49:06 compute-0 podman[403335]: 2025-11-22 09:49:06.418011639 +0000 UTC m=+0.093964571 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:49:07 compute-0 nova_compute[253661]: 2025-11-22 09:49:07.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:07 compute-0 ceph-mon[75021]: pgmap v2622: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 22 09:49:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:08 compute-0 nova_compute[253661]: 2025-11-22 09:49:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:08 compute-0 nova_compute[253661]: 2025-11-22 09:49:08.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:49:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Nov 22 09:49:08 compute-0 ceph-mon[75021]: pgmap v2623: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Nov 22 09:49:09 compute-0 nova_compute[253661]: 2025-11-22 09:49:09.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:09 compute-0 nova_compute[253661]: 2025-11-22 09:49:09.780 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804934.7778037, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:09 compute-0 nova_compute[253661]: 2025-11-22 09:49:09.780 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Stopped (Lifecycle Event)
Nov 22 09:49:09 compute-0 nova_compute[253661]: 2025-11-22 09:49:09.795 253665 DEBUG nova.compute.manager [None req-4279b867-7e63-4f4b-ba16-ab1b0ff66a86 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:09 compute-0 nova_compute[253661]: 2025-11-22 09:49:09.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 15 KiB/s wr, 5 op/s
Nov 22 09:49:11 compute-0 ceph-mon[75021]: pgmap v2624: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 15 KiB/s wr, 5 op/s
Nov 22 09:49:11 compute-0 nova_compute[253661]: 2025-11-22 09:49:11.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 09:49:12 compute-0 nova_compute[253661]: 2025-11-22 09:49:12.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:49:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:49:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:49:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:49:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:13 compute-0 nova_compute[253661]: 2025-11-22 09:49:13.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:13 compute-0 ceph-mon[75021]: pgmap v2625: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 09:49:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:49:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:49:13 compute-0 podman[403373]: 2025-11-22 09:49:13.405765498 +0000 UTC m=+0.090113907 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 09:49:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.697 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8::f816:3eff:fe02:d32f'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da001788-faa3-412b-9b6a-82fe1a808a87) old=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.698 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port da001788-faa3-412b-9b6a-82fe1a808a87 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce updated
Nov 22 09:49:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.700 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:49:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.701 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a52eb677-5b7d-4761-a674-aff35ff58e9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 09:49:14 compute-0 nova_compute[253661]: 2025-11-22 09:49:14.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:15 compute-0 ceph-mon[75021]: pgmap v2626: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 09:49:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 22 09:49:17 compute-0 nova_compute[253661]: 2025-11-22 09:49:17.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:17 compute-0 ceph-mon[75021]: pgmap v2627: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 22 09:49:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:19.192 162970 DEBUG eventlet.wsgi.server [-] (162970) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:19.193 162970 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: Accept: */*
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: Connection: close
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: Content-Type: text/plain
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: Host: 169.254.169.254
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: User-Agent: curl/7.84.0
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: X-Forwarded-For: 10.100.0.9
Nov 22 09:49:19 compute-0 ovn_metadata_agent[162856]: X-Ovn-Network-Id: 473e817e-09da-452b-aec0-d46546489b36 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 22 09:49:19 compute-0 ceph-mon[75021]: pgmap v2628: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:19 compute-0 nova_compute[253661]: 2025-11-22 09:49:19.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:21 compute-0 ceph-mon[75021]: pgmap v2629: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.023 162970 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.024 162970 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.8306718
Nov 22 09:49:22 compute-0 haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36[402957]: 10.100.0.9:50110 [22/Nov/2025:09:49:19.190] listener listener/metadata 0/0/0/2833/2833 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.099 162970 DEBUG eventlet.wsgi.server [-] (162970) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.100 162970 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: Accept: */*
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: Connection: close
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: Content-Length: 100
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: Content-Type: application/x-www-form-urlencoded
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: Host: 169.254.169.254
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: User-Agent: curl/7.84.0
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: X-Forwarded-For: 10.100.0.9
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: X-Ovn-Network-Id: 473e817e-09da-452b-aec0-d46546489b36
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:22 compute-0 nova_compute[253661]: 2025-11-22 09:49:22.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.380 162970 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 22 09:49:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.380 162970 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2798340
Nov 22 09:49:22 compute-0 haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36[402957]: 10.100.0.9:50116 [22/Nov/2025:09:49:22.098] listener listener/metadata 0/0/0/282/282 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:23 compute-0 nova_compute[253661]: 2025-11-22 09:49:23.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:23 compute-0 ceph-mon[75021]: pgmap v2630: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 09:49:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.446 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.448 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.449 253665 INFO nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Terminating instance
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.450 253665 DEBUG nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:49:24 compute-0 kernel: tap8d1f2012-aa (unregistering): left promiscuous mode
Nov 22 09:49:24 compute-0 NetworkManager[48920]: <info>  [1763804964.5035] device (tap8d1f2012-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:49:24 compute-0 ovn_controller[152872]: 2025-11-22T09:49:24Z|01581|binding|INFO|Releasing lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 from this chassis (sb_readonly=0)
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 ovn_controller[152872]: 2025-11-22T09:49:24Z|01582|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 down in Southbound
Nov 22 09:49:24 compute-0 ovn_controller[152872]: 2025-11-22T09:49:24Z|01583|binding|INFO|Removing iface tap8d1f2012-aa ovn-installed in OVS
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.516 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:7c:5c 10.100.0.9'], port_security=['fa:16:3e:43:7c:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73f1da2d-d075-455d-94dd-f10146df7d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473e817e-09da-452b-aec0-d46546489b36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '134111d9-6c1c-466d-8bad-cdc68aa178a5 f24a4b05-cdda-49a7-af44-458e15bd9a13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf052f1b-89e1-46ee-8169-e44075a76fcb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d1f2012-aa57-4dfc-a744-a852d1353ad2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.517 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 in datapath 473e817e-09da-452b-aec0-d46546489b36 unbound from our chassis
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.518 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473e817e-09da-452b-aec0-d46546489b36, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.519 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74707c23-a43c-4ce4-876c-62b9a55912eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.520 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473e817e-09da-452b-aec0-d46546489b36 namespace which is not needed anymore
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 22 09:49:24 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Consumed 16.803s CPU time.
Nov 22 09:49:24 compute-0 systemd-machined[215941]: Machine qemu-177-instance-00000091 terminated.
Nov 22 09:49:24 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : haproxy version is 2.8.14-c23fe91
Nov 22 09:49:24 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : path to executable is /usr/sbin/haproxy
Nov 22 09:49:24 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [WARNING]  (402955) : Exiting Master process...
Nov 22 09:49:24 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [ALERT]    (402955) : Current worker (402957) exited with code 143 (Terminated)
Nov 22 09:49:24 compute-0 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [WARNING]  (402955) : All workers exited. Exiting... (0)
Nov 22 09:49:24 compute-0 systemd[1]: libpod-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope: Deactivated successfully.
Nov 22 09:49:24 compute-0 podman[403425]: 2025-11-22 09:49:24.653745704 +0000 UTC m=+0.045700809 container died e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf-userdata-shm.mount: Deactivated successfully.
Nov 22 09:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3b593cb210fa24aaae4778a6a8c263ac8ff16d9e064d9f8dea041fd87cb8a31-merged.mount: Deactivated successfully.
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.690 253665 INFO nova.virt.libvirt.driver [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance destroyed successfully.
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.690 253665 DEBUG nova.objects.instance [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'resources' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:24 compute-0 podman[403425]: 2025-11-22 09:49:24.700244952 +0000 UTC m=+0.092200057 container cleanup e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.701 253665 DEBUG nova.virt.libvirt.vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:48:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:49:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": 
"fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.701 253665 DEBUG nova.network.os_vif_util [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.702 253665 DEBUG nova.network.os_vif_util [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.703 253665 DEBUG os_vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.705 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d1f2012-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:24 compute-0 systemd[1]: libpod-conmon-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope: Deactivated successfully.
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.711 253665 INFO os_vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa')
Nov 22 09:49:24 compute-0 podman[403461]: 2025-11-22 09:49:24.768875257 +0000 UTC m=+0.045888003 container remove e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.771 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.773 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7d1a30-6565-4d0f-85d7-12de2c468124]: (4, ('Sat Nov 22 09:49:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36 (e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf)\ne938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf\nSat Nov 22 09:49:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36 (e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf)\ne938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e425842-98c4-4589-9f12-27b8b7cbb008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.779 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473e817e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 kernel: tap473e817e-00: left promiscuous mode
Nov 22 09:49:24 compute-0 nova_compute[253661]: 2025-11-22 09:49:24.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8e98f0-ef9e-46c2-9806-cf2207f46db4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99c63cab-5ff9-4bda-83fd-89d1ce81db56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a00750e0-631b-4a66-be28-1fe086eeae98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1fe880-c431-4811-9d96-3d5aee3508bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776942, 'reachable_time': 16846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403494, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.842 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473e817e-09da-452b-aec0-d46546489b36 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:49:24 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.842 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b39b9569-8aaf-45e4-8371-61a4de106b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d473e817e\x2d09da\x2d452b\x2daec0\x2dd46546489b36.mount: Deactivated successfully.
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.076 253665 INFO nova.virt.libvirt.driver [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deleting instance files /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30_del
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.077 253665 INFO nova.virt.libvirt.driver [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deletion of /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30_del complete
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 INFO nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 DEBUG oslo.service.loopingcall [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 DEBUG nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:49:25 compute-0 nova_compute[253661]: 2025-11-22 09:49:25.138 253665 DEBUG nova.network.neutron [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:49:25 compute-0 ceph-mon[75021]: pgmap v2631: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Nov 22 09:49:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.7 KiB/s wr, 0 op/s
Nov 22 09:49:26 compute-0 ceph-mon[75021]: pgmap v2632: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.7 KiB/s wr, 0 op/s
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.485 253665 DEBUG nova.network.neutron [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.498 253665 INFO nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 1.36 seconds to deallocate network for instance.
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.538 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.539 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.601 253665 DEBUG oslo_concurrency.processutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.853 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.853 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 WARNING nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received unexpected event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with vm_state deleted and task_state None.
Nov 22 09:49:26 compute-0 nova_compute[253661]: 2025-11-22 09:49:26.855 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-deleted-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:49:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043213952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.034 253665 DEBUG oslo_concurrency.processutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.041 253665 DEBUG nova.compute.provider_tree [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.055 253665 DEBUG nova.scheduler.client.report [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.078 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8:0:1:f816:3eff:fe02:d32f 2001:db8::f816:3eff:fe02:d32f'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe02:d32f/64 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da001788-faa3-412b-9b6a-82fe1a808a87) old=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8::f816:3eff:fe02:d32f'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port da001788-faa3-412b-9b6a-82fe1a808a87 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce updated
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.081 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.081 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f36db4d-6bbf-437d-a908-092dcc0fc514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.128 253665 INFO nova.scheduler.client.report [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Deleted allocations for instance 73f1da2d-d075-455d-94dd-f10146df7d30
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.189 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:27 compute-0 nova_compute[253661]: 2025-11-22 09:49:27.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3043213952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.993 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:28 compute-0 ceph-mon[75021]: pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:29 compute-0 nova_compute[253661]: 2025-11-22 09:49:29.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:31 compute-0 nova_compute[253661]: 2025-11-22 09:49:31.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:31 compute-0 ceph-mon[75021]: pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:31 compute-0 sudo[403518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:31 compute-0 sudo[403518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:31 compute-0 sudo[403518]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:31 compute-0 sudo[403543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:49:31 compute-0 sudo[403543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:31 compute-0 sudo[403543]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:31 compute-0 sudo[403568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:31 compute-0 sudo[403568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:31 compute-0 sudo[403568]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:31 compute-0 sudo[403593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:49:31 compute-0 sudo[403593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:32 compute-0 nova_compute[253661]: 2025-11-22 09:49:32.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:32 compute-0 podman[403690]: 2025-11-22 09:49:32.416951472 +0000 UTC m=+0.125644454 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:49:32 compute-0 ceph-mon[75021]: pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:32 compute-0 podman[403690]: 2025-11-22 09:49:32.527180903 +0000 UTC m=+0.235873885 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 09:49:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:33 compute-0 sudo[403593]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:49:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:49:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.293594) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973293713, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 252, "total_data_size": 1075306, "memory_usage": 1109504, "flush_reason": "Manual Compaction"}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973305261, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1054047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53842, "largest_seqno": 54660, "table_properties": {"data_size": 1049873, "index_size": 1890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9388, "raw_average_key_size": 19, "raw_value_size": 1041427, "raw_average_value_size": 2183, "num_data_blocks": 84, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804908, "oldest_key_time": 1763804908, "file_creation_time": 1763804973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 11702 microseconds, and 3827 cpu microseconds.
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.305336) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1054047 bytes OK
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.305362) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307243) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307261) EVENT_LOG_v1 {"time_micros": 1763804973307256, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1071217, prev total WAL file size 1071217, number of live WAL files 2.
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1029KB)], [125(8921KB)]
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973307963, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10189345, "oldest_snapshot_seqno": -1}
Nov 22 09:49:33 compute-0 sudo[403849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:33 compute-0 sudo[403849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:33 compute-0 sudo[403849]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7434 keys, 8526463 bytes, temperature: kUnknown
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973358248, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8526463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8479290, "index_size": 27476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 194617, "raw_average_key_size": 26, "raw_value_size": 8349066, "raw_average_value_size": 1123, "num_data_blocks": 1061, "num_entries": 7434, "num_filter_entries": 7434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.358964) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8526463 bytes
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.363046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.8 rd, 168.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.8) write-amplify(8.1) OK, records in: 7953, records dropped: 519 output_compression: NoCompression
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.363067) EVENT_LOG_v1 {"time_micros": 1763804973363058, "job": 76, "event": "compaction_finished", "compaction_time_micros": 50755, "compaction_time_cpu_micros": 21639, "output_level": 6, "num_output_files": 1, "total_output_size": 8526463, "num_input_records": 7953, "num_output_records": 7434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973363370, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973365140, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:49:33 compute-0 sudo[403874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:49:33 compute-0 sudo[403874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:33 compute-0 sudo[403874]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:33 compute-0 sudo[403899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:33 compute-0 sudo[403899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:33 compute-0 sudo[403899]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:33 compute-0 sudo[403924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:49:33 compute-0 sudo[403924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:34 compute-0 sudo[403924]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c8ae9e43-4599-4d30-92c2-cfabe3708307 does not exist
Nov 22 09:49:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4505e737-c35b-4316-a5da-310b3c92c44d does not exist
Nov 22 09:49:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 64387ff9-72cf-4ca4-bf10-b3dd0aea340d does not exist
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:49:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:49:34 compute-0 sudo[403980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:34 compute-0 sudo[403980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:34 compute-0 sudo[403980]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:34 compute-0 sudo[404005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:49:34 compute-0 sudo[404005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:34 compute-0 sudo[404005]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:34 compute-0 sudo[404030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:34 compute-0 sudo[404030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:34 compute-0 sudo[404030]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:49:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:49:34 compute-0 sudo[404055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:49:34 compute-0 sudo[404055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.529 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.531 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.547 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.626229756 +0000 UTC m=+0.043863245 container create bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.632 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.634 253665 INFO nova.compute.claims [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:49:34 compute-0 systemd[1]: Started libpod-conmon-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope.
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.605328039 +0000 UTC m=+0.022961548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.734 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.772129928 +0000 UTC m=+0.189763447 container init bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:49:34 compute-0 nova_compute[253661]: 2025-11-22 09:49:34.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.782837682 +0000 UTC m=+0.200471171 container start bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:49:34 compute-0 bold_darwin[404135]: 167 167
Nov 22 09:49:34 compute-0 systemd[1]: libpod-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope: Deactivated successfully.
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.789517047 +0000 UTC m=+0.207150536 container attach bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.790393819 +0000 UTC m=+0.208027318 container died bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec631dcf51993cea591b54591683a6e4057396d3cff769b7ed0a1f5707515e2d-merged.mount: Deactivated successfully.
Nov 22 09:49:34 compute-0 podman[404118]: 2025-11-22 09:49:34.830901239 +0000 UTC m=+0.248534718 container remove bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 09:49:34 compute-0 systemd[1]: libpod-conmon-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope: Deactivated successfully.
Nov 22 09:49:35 compute-0 podman[404180]: 2025-11-22 09:49:35.01523535 +0000 UTC m=+0.060764281 container create 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 09:49:35 compute-0 systemd[1]: Started libpod-conmon-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope.
Nov 22 09:49:35 compute-0 podman[404180]: 2025-11-22 09:49:34.981286022 +0000 UTC m=+0.026814963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:35 compute-0 podman[404180]: 2025-11-22 09:49:35.186191342 +0000 UTC m=+0.231720273 container init 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:49:35 compute-0 podman[404180]: 2025-11-22 09:49:35.194739264 +0000 UTC m=+0.240268175 container start 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 09:49:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:49:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177278564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.227 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.236 253665 DEBUG nova.compute.provider_tree [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:49:35 compute-0 podman[404180]: 2025-11-22 09:49:35.24278769 +0000 UTC m=+0.288316631 container attach 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.254 253665 DEBUG nova.scheduler.client.report [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.281 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.282 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:49:35 compute-0 ceph-mon[75021]: pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 09:49:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/177278564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.369 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.370 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.394 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.419 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.526 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.527 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.528 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating image(s)
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.569 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.596 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.617 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.620 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.703 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.705 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.706 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.706 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.737 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.741 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:35 compute-0 nova_compute[253661]: 2025-11-22 09:49:35.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.055 253665 DEBUG nova.policy [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.113 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.171 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.270 253665 DEBUG nova.objects.instance [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.293 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.294 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Ensure instance console log exists: /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:36 compute-0 condescending_taussig[404197]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:49:36 compute-0 condescending_taussig[404197]: --> relative data size: 1.0
Nov 22 09:49:36 compute-0 condescending_taussig[404197]: --> All data devices are unavailable
Nov 22 09:49:36 compute-0 systemd[1]: libpod-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Deactivated successfully.
Nov 22 09:49:36 compute-0 systemd[1]: libpod-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Consumed 1.148s CPU time.
Nov 22 09:49:36 compute-0 podman[404180]: 2025-11-22 09:49:36.438135687 +0000 UTC m=+1.483664598 container died 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 09:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a-merged.mount: Deactivated successfully.
Nov 22 09:49:36 compute-0 podman[404180]: 2025-11-22 09:49:36.519901305 +0000 UTC m=+1.565430216 container remove 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:49:36 compute-0 systemd[1]: libpod-conmon-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Deactivated successfully.
Nov 22 09:49:36 compute-0 podman[404396]: 2025-11-22 09:49:36.557109295 +0000 UTC m=+0.074327557 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:49:36 compute-0 sudo[404055]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:36 compute-0 podman[404404]: 2025-11-22 09:49:36.561117853 +0000 UTC m=+0.078109879 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 22 09:49:36 compute-0 sudo[404442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:36 compute-0 sudo[404442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:36 compute-0 sudo[404442]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:36 compute-0 sudo[404467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:49:36 compute-0 sudo[404467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:36 compute-0 sudo[404467]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:36 compute-0 sudo[404492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:36 compute-0 sudo[404492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:36 compute-0 sudo[404492]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:36 compute-0 sudo[404517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:49:36 compute-0 sudo[404517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:36 compute-0 nova_compute[253661]: 2025-11-22 09:49:36.861 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Successfully created port: 9c015dd3-d340-40c6-bcc6-efef0a914d39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.142310175 +0000 UTC m=+0.043159747 container create 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:49:37 compute-0 systemd[1]: Started libpod-conmon-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope.
Nov 22 09:49:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.124873954 +0000 UTC m=+0.025723546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.228791 +0000 UTC m=+0.129640572 container init 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.235190148 +0000 UTC m=+0.136039720 container start 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.238342166 +0000 UTC m=+0.139191768 container attach 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:49:37 compute-0 agitated_galois[404597]: 167 167
Nov 22 09:49:37 compute-0 systemd[1]: libpod-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope: Deactivated successfully.
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.241832682 +0000 UTC m=+0.142682254 container died 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.247 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b307f6bfca4bfc560150cabdc9bee60b715ed53a7ca32b8050f8ec307b59116-merged.mount: Deactivated successfully.
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.271 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:49:37 compute-0 podman[404582]: 2025-11-22 09:49:37.281100712 +0000 UTC m=+0.181950284 container remove 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:49:37 compute-0 systemd[1]: libpod-conmon-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope: Deactivated successfully.
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:37 compute-0 ceph-mon[75021]: pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 22 09:49:37 compute-0 podman[404621]: 2025-11-22 09:49:37.444579218 +0000 UTC m=+0.040610854 container create dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:49:37 compute-0 systemd[1]: Started libpod-conmon-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope.
Nov 22 09:49:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:37 compute-0 podman[404621]: 2025-11-22 09:49:37.428599574 +0000 UTC m=+0.024631240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:37 compute-0 podman[404621]: 2025-11-22 09:49:37.530401778 +0000 UTC m=+0.126433434 container init dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 09:49:37 compute-0 podman[404621]: 2025-11-22 09:49:37.53897873 +0000 UTC m=+0.135010386 container start dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:49:37 compute-0 podman[404621]: 2025-11-22 09:49:37.542822585 +0000 UTC m=+0.138854311 container attach dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.584 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Successfully updated port: 9c015dd3-d340-40c6-bcc6-efef0a914d39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG nova.compute.manager [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG nova.compute.manager [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:37 compute-0 nova_compute[253661]: 2025-11-22 09:49:37.928 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:49:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 22 09:49:38 compute-0 silly_rubin[404637]: {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     "0": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "devices": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "/dev/loop3"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             ],
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_name": "ceph_lv0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_size": "21470642176",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "name": "ceph_lv0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "tags": {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_name": "ceph",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.crush_device_class": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.encrypted": "0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_id": "0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.vdo": "0"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             },
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "vg_name": "ceph_vg0"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         }
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     ],
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     "1": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "devices": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "/dev/loop4"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             ],
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_name": "ceph_lv1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_size": "21470642176",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "name": "ceph_lv1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "tags": {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_name": "ceph",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.crush_device_class": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.encrypted": "0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_id": "1",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.vdo": "0"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             },
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "vg_name": "ceph_vg1"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         }
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     ],
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     "2": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "devices": [
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "/dev/loop5"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             ],
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_name": "ceph_lv2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_size": "21470642176",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "name": "ceph_lv2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "tags": {
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.cluster_name": "ceph",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.crush_device_class": "",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.encrypted": "0",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osd_id": "2",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:                 "ceph.vdo": "0"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             },
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "type": "block",
Nov 22 09:49:38 compute-0 silly_rubin[404637]:             "vg_name": "ceph_vg2"
Nov 22 09:49:38 compute-0 silly_rubin[404637]:         }
Nov 22 09:49:38 compute-0 silly_rubin[404637]:     ]
Nov 22 09:49:38 compute-0 silly_rubin[404637]: }
Nov 22 09:49:38 compute-0 systemd[1]: libpod-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope: Deactivated successfully.
Nov 22 09:49:38 compute-0 podman[404621]: 2025-11-22 09:49:38.404305567 +0000 UTC m=+1.000337203 container died dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8-merged.mount: Deactivated successfully.
Nov 22 09:49:38 compute-0 podman[404621]: 2025-11-22 09:49:38.461486549 +0000 UTC m=+1.057518185 container remove dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:49:38 compute-0 systemd[1]: libpod-conmon-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope: Deactivated successfully.
Nov 22 09:49:38 compute-0 sudo[404517]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:38 compute-0 sudo[404658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:38 compute-0 sudo[404658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:38 compute-0 sudo[404658]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:38 compute-0 sudo[404683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:49:38 compute-0 sudo[404683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:38 compute-0 sudo[404683]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:38 compute-0 sudo[404708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:38 compute-0 sudo[404708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:38 compute-0 sudo[404708]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:38 compute-0 sudo[404733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:49:38 compute-0 sudo[404733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.070585009 +0000 UTC m=+0.039033234 container create 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:49:39 compute-0 systemd[1]: Started libpod-conmon-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope.
Nov 22 09:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.146450123 +0000 UTC m=+0.114898378 container init 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.054967954 +0000 UTC m=+0.023416199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.153136248 +0000 UTC m=+0.121584473 container start 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.157596078 +0000 UTC m=+0.126044323 container attach 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:49:39 compute-0 hardcore_shockley[404816]: 167 167
Nov 22 09:49:39 compute-0 systemd[1]: libpod-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope: Deactivated successfully.
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.159973687 +0000 UTC m=+0.128421912 container died 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6655698ea932c3440df08305a6fc870c70e91825db24f91a61ad8f61fe57a253-merged.mount: Deactivated successfully.
Nov 22 09:49:39 compute-0 podman[404799]: 2025-11-22 09:49:39.20101948 +0000 UTC m=+0.169467705 container remove 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:49:39 compute-0 systemd[1]: libpod-conmon-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope: Deactivated successfully.
Nov 22 09:49:39 compute-0 ceph-mon[75021]: pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 22 09:49:39 compute-0 podman[404840]: 2025-11-22 09:49:39.354957271 +0000 UTC m=+0.040667115 container create acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:49:39 compute-0 systemd[1]: Started libpod-conmon-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope.
Nov 22 09:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:39 compute-0 podman[404840]: 2025-11-22 09:49:39.423204157 +0000 UTC m=+0.108914021 container init acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:49:39 compute-0 podman[404840]: 2025-11-22 09:49:39.430390254 +0000 UTC m=+0.116100098 container start acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 09:49:39 compute-0 podman[404840]: 2025-11-22 09:49:39.43348271 +0000 UTC m=+0.119192574 container attach acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:49:39 compute-0 podman[404840]: 2025-11-22 09:49:39.33908645 +0000 UTC m=+0.024796314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.504 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.525 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.526 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance network_info: |[{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.527 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.527 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.532 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start _get_guest_xml network_info=[{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.537 253665 WARNING nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.545 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.546 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.556 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.564 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.687 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804964.6856081, 73f1da2d-d075-455d-94dd-f10146df7d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.692 253665 INFO nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Stopped (Lifecycle Event)
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.721 253665 DEBUG nova.compute.manager [None req-51a9e37d-eead-4fc9-8a66-69ad6257eb38 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:39 compute-0 nova_compute[253661]: 2025-11-22 09:49:39.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:49:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871077078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.021 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.046 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.051 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:49:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3871077078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:40 compute-0 elastic_faraday[404856]: {
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_id": 1,
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "type": "bluestore"
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     },
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_id": 0,
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "type": "bluestore"
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     },
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_id": 2,
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:         "type": "bluestore"
Nov 22 09:49:40 compute-0 elastic_faraday[404856]:     }
Nov 22 09:49:40 compute-0 elastic_faraday[404856]: }
Nov 22 09:49:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:49:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469356678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:40 compute-0 systemd[1]: libpod-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Deactivated successfully.
Nov 22 09:49:40 compute-0 systemd[1]: libpod-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Consumed 1.081s CPU time.
Nov 22 09:49:40 compute-0 podman[404840]: 2025-11-22 09:49:40.515343575 +0000 UTC m=+1.201053449 container died acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.534 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.538 253665 DEBUG nova.virt.libvirt.vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:35Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.538 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.539 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.541 253665 DEBUG nova.objects.instance [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.558 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <uuid>3a65f84a-3072-4b94-b08a-0ba7b1529a07</uuid>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <name>instance-00000092</name>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1502632540</nova:name>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:49:39</nova:creationTime>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <nova:port uuid="9c015dd3-d340-40c6-bcc6-efef0a914d39">
Nov 22 09:49:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7b:4756" ipVersion="6"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7b:4756" ipVersion="6"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <system>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="serial">3a65f84a-3072-4b94-b08a-0ba7b1529a07</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="uuid">3a65f84a-3072-4b94-b08a-0ba7b1529a07</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </system>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <os>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </os>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <features>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </features>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk">
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config">
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </source>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:49:40 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:7b:47:56"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <target dev="tap9c015dd3-d3"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/console.log" append="off"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <video>
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </video>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:49:40 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:49:40 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:49:40 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:49:40 compute-0 nova_compute[253661]: </domain>
Nov 22 09:49:40 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Preparing to wait for external event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.560 253665 DEBUG nova.virt.libvirt.vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:35Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.560 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.561 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.561 253665 DEBUG os_vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.562 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.563 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.569 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c015dd3-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.570 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c015dd3-d3, col_values=(('external_ids', {'iface-id': '9c015dd3-d340-40c6-bcc6-efef0a914d39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:47:56', 'vm-uuid': '3a65f84a-3072-4b94-b08a-0ba7b1529a07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:40 compute-0 NetworkManager[48920]: <info>  [1763804980.5725] manager: (tap9c015dd3-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Nov 22 09:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794-merged.mount: Deactivated successfully.
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.582 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.585 253665 INFO os_vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3')
Nov 22 09:49:40 compute-0 podman[404840]: 2025-11-22 09:49:40.608143626 +0000 UTC m=+1.293853470 container remove acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:49:40 compute-0 systemd[1]: libpod-conmon-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Deactivated successfully.
Nov 22 09:49:40 compute-0 sudo[404733]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.666 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.667 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.667 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:7b:47:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.668 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Using config drive
Nov 22 09:49:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:49:40 compute-0 nova_compute[253661]: 2025-11-22 09:49:40.694 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2a044c15-b5b6-4a99-b839-54272f15b4e9 does not exist
Nov 22 09:49:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6b65dd4a-9be5-472a-9e51-02175682cd7e does not exist
Nov 22 09:49:40 compute-0 sudo[404987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:49:40 compute-0 sudo[404987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:40 compute-0 sudo[404987]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:40 compute-0 sudo[405012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:49:40 compute-0 sudo[405012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:49:40 compute-0 sudo[405012]: pam_unix(sudo:session): session closed for user root
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.154 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating config drive at /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.159 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtra0f3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.307 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtra0f3x" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.339 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.342 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:41 compute-0 ceph-mon[75021]: pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:49:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1469356678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:41 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.722 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.723 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deleting local config drive /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config because it was imported into RBD.
Nov 22 09:49:41 compute-0 kernel: tap9c015dd3-d3: entered promiscuous mode
Nov 22 09:49:41 compute-0 NetworkManager[48920]: <info>  [1763804981.7675] manager: (tap9c015dd3-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:41 compute-0 ovn_controller[152872]: 2025-11-22T09:49:41Z|01584|binding|INFO|Claiming lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 for this chassis.
Nov 22 09:49:41 compute-0 ovn_controller[152872]: 2025-11-22T09:49:41Z|01585|binding|INFO|9c015dd3-d340-40c6-bcc6-efef0a914d39: Claiming fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.783 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], port_security=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe7b:4756/64 2001:db8::f816:3eff:fe7b:4756/64', 'neutron:device_id': '3a65f84a-3072-4b94-b08a-0ba7b1529a07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9c015dd3-d340-40c6-bcc6-efef0a914d39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.784 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9c015dd3-d340-40c6-bcc6-efef0a914d39 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce bound to our chassis
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.786 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 09:49:41 compute-0 systemd-udevd[405089]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf091125-1ee7-46e5-bcc5-4dad3ff3eee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.799 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b64819a-21 in ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.800 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b64819a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db69ebf5-276c-4a21-adfd-ea9bb7978a59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18769b0c-9c6b-4bbf-8a03-af27f228634d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 systemd-machined[215941]: New machine qemu-178-instance-00000092.
Nov 22 09:49:41 compute-0 NetworkManager[48920]: <info>  [1763804981.8107] device (tap9c015dd3-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:49:41 compute-0 NetworkManager[48920]: <info>  [1763804981.8121] device (tap9c015dd3-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.814 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5da6c25c-aea2-4c3e-8f36-fd16aeea96d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000092.
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a24a2d0b-0215-4c85-bba8-d15769408fcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 ovn_controller[152872]: 2025-11-22T09:49:41Z|01586|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 ovn-installed in OVS
Nov 22 09:49:41 compute-0 ovn_controller[152872]: 2025-11-22T09:49:41Z|01587|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 up in Southbound
Nov 22 09:49:41 compute-0 nova_compute[253661]: 2025-11-22 09:49:41.845 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.875 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c9baa8-c369-4cc4-a902-4deff76131c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 systemd-udevd[405094]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[803087e8-f125-41e9-b7c4-9dc8f6d40210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 NetworkManager[48920]: <info>  [1763804981.8829] manager: (tap9b64819a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/647)
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.923 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad99793-3d94-4f75-98f6-717e07b2ff75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.926 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9a66ba16-5551-4e7d-9147-4de88d1a6666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 NetworkManager[48920]: <info>  [1763804981.9602] device (tap9b64819a-20): carrier: link connected
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.965 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f80acab-d80e-4249-a0d0-1d0556a3972b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:41 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[281da5a1-1f6f-4e21-b0e0-de3daf66a601]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 15193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405123, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fa1afe-bcc2-46b2-9dbb-ab2e7ebedfd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:d32f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783269, 'tstamp': 783269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405124, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.017 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.017 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.025 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb495800-0541-44f1-890c-0ddf621a38ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 15193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405125, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.035 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.070 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e84d19cf-739b-4d1c-a8f7-f84b0041a6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe9ac65-3673-4d40-b964-df78745f9ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:42 compute-0 kernel: tap9b64819a-20: entered promiscuous mode
Nov 22 09:49:42 compute-0 NetworkManager[48920]: <info>  [1763804982.1494] manager: (tap9b64819a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:42 compute-0 ovn_controller[152872]: 2025-11-22T09:49:42Z|01588|binding|INFO|Releasing lport da001788-faa3-412b-9b6a-82fe1a808a87 from this chassis (sb_readonly=0)
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.171 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb850b4d-b64e-430e-9b42-2a3e08cbf1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.173 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:49:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.174 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'env', 'PROCESS_TAG=haproxy-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.217 253665 DEBUG nova.compute.manager [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG nova.compute.manager [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Processing event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.231 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2310464, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.232 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Started (Lifecycle Event)
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.234 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.237 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.242 253665 INFO nova.virt.libvirt.driver [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance spawned successfully.
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.242 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.246 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.264 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.265 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2312224, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.265 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Paused (Lifecycle Event)
Nov 22 09:49:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.288 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.292 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2370152, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Resumed (Lifecycle Event)
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.316 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.319 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.341 253665 INFO nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 6.82 seconds to spawn the instance on the hypervisor.
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.342 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.343 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.424 253665 INFO nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 7.82 seconds to build instance.
Nov 22 09:49:42 compute-0 nova_compute[253661]: 2025-11-22 09:49:42.441 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:42 compute-0 ceph-mon[75021]: pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:49:42 compute-0 podman[405199]: 2025-11-22 09:49:42.517094843 +0000 UTC m=+0.023657055 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:49:42 compute-0 podman[405199]: 2025-11-22 09:49:42.651356619 +0000 UTC m=+0.157918811 container create e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:49:42 compute-0 systemd[1]: Started libpod-conmon-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope.
Nov 22 09:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d720b8b444377e6f53dfc930e4e7925a5c19a33403959e8730a0a92df22382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:42 compute-0 podman[405199]: 2025-11-22 09:49:42.790604848 +0000 UTC m=+0.297167040 container init e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:49:42 compute-0 podman[405199]: 2025-11-22 09:49:42.797653742 +0000 UTC m=+0.304215934 container start e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:49:42 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : New worker (405218) forked
Nov 22 09:49:42 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : Loading success.
Nov 22 09:49:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.039 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.040 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.061 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.137 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.137 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.144 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.144 253665 INFO nova.compute.claims [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.265 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:49:43 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012163552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.702 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.708 253665 DEBUG nova.compute.provider_tree [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.722 253665 DEBUG nova.scheduler.client.report [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:49:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3012163552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.747 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.748 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.807 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.807 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.845 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.864 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.975 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.976 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:49:43 compute-0 nova_compute[253661]: 2025-11-22 09:49:43.976 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating image(s)
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.002 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.023 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.044 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.047 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.128 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.129 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.130 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.130 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.152 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.157 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 027bdffc-9e8e-4a33-9b06-844890912dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.288 253665 DEBUG nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 WARNING nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received unexpected event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with vm_state active and task_state None.
Nov 22 09:49:44 compute-0 podman[405340]: 2025-11-22 09:49:44.393838346 +0000 UTC m=+0.083598385 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 09:49:44 compute-0 nova_compute[253661]: 2025-11-22 09:49:44.957 253665 DEBUG nova.policy [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1edb692a8ff443038839784febd964b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ffacc46512445d8b5c24899a0053196', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:49:44 compute-0 ceph-mon[75021]: pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.264 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 027bdffc-9e8e-4a33-9b06-844890912dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.331 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] resizing rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.619 253665 DEBUG nova.objects.instance [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'migration_context' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.635 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.635 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Ensure instance console log exists: /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:45 compute-0 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 09:49:46 compute-0 nova_compute[253661]: 2025-11-22 09:49:46.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:46 compute-0 NetworkManager[48920]: <info>  [1763804986.8400] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Nov 22 09:49:46 compute-0 NetworkManager[48920]: <info>  [1763804986.8411] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Nov 22 09:49:46 compute-0 nova_compute[253661]: 2025-11-22 09:49:46.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:46 compute-0 ovn_controller[152872]: 2025-11-22T09:49:46Z|01589|binding|INFO|Releasing lport da001788-faa3-412b-9b6a-82fe1a808a87 from this chassis (sb_readonly=0)
Nov 22 09:49:46 compute-0 nova_compute[253661]: 2025-11-22 09:49:46.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.049 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Successfully created port: 62358b95-9f4a-404c-8165-dc98c7e3b042 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.121 253665 DEBUG nova.compute.manager [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.122 253665 DEBUG nova.compute.manager [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.122 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.123 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.123 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.269 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:47 compute-0 ceph-mon[75021]: pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 09:49:47 compute-0 nova_compute[253661]: 2025-11-22 09:49:47.520 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.681 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Successfully updated port: 62358b95-9f4a-404c-8165-dc98c7e3b042 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.695 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.696 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.696 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.766 253665 DEBUG nova.compute.manager [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.767 253665 DEBUG nova.compute.manager [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.768 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.828 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.979 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.980 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.997 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.998 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.998 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:49:48 compute-0 nova_compute[253661]: 2025-11-22 09:49:48.999 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:49 compute-0 ceph-mon[75021]: pgmap v2643: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 22 09:49:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.937 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.955 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.956 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance network_info: |[{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.957 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.958 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.964 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start _get_guest_xml network_info=[{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.971 253665 WARNING nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.981 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.982 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.987 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.988 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.988 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.989 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.990 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.991 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.991 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.992 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.992 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.993 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.994 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.994 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.995 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:49:50 compute-0 nova_compute[253661]: 2025-11-22 09:49:50.995 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.000 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:51 compute-0 ceph-mon[75021]: pgmap v2644: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:49:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782051890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.491 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.515 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.519 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:49:51 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246561523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.973 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.976 253665 DEBUG nova.virt.libvirt.vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:43Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.977 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.978 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.980 253665 DEBUG nova.objects.instance [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'pci_devices' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:49:51 compute-0 nova_compute[253661]: 2025-11-22 09:49:51.994 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <uuid>027bdffc-9e8e-4a33-9b06-844890912dc9</uuid>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <name>instance-00000093</name>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:name>tempest-TestSnapshotPattern-server-1978624834</nova:name>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:49:50</nova:creationTime>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:user uuid="1edb692a8ff443038839784febd964b1">tempest-TestSnapshotPattern-98475773-project-member</nova:user>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:project uuid="6ffacc46512445d8b5c24899a0053196">tempest-TestSnapshotPattern-98475773</nova:project>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <nova:port uuid="62358b95-9f4a-404c-8165-dc98c7e3b042">
Nov 22 09:49:51 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <system>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="serial">027bdffc-9e8e-4a33-9b06-844890912dc9</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="uuid">027bdffc-9e8e-4a33-9b06-844890912dc9</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </system>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <os>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </os>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <features>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </features>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk">
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config">
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </source>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:49:51 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:bc:a0:65"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <target dev="tap62358b95-9f"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/console.log" append="off"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <video>
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </video>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:49:51 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:49:51 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:49:51 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:49:51 compute-0 nova_compute[253661]: </domain>
Nov 22 09:49:51 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.002 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Preparing to wait for external event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.003 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.004 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.004 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.005 253665 DEBUG nova.virt.libvirt.vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:43Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.006 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.007 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.007 253665 DEBUG os_vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.009 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.009 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.012 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62358b95-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.013 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62358b95-9f, col_values=(('external_ids', {'iface-id': '62358b95-9f4a-404c-8165-dc98c7e3b042', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a0:65', 'vm-uuid': '027bdffc-9e8e-4a33-9b06-844890912dc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:52 compute-0 NetworkManager[48920]: <info>  [1763804992.0158] manager: (tap62358b95-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.025 253665 INFO os_vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f')
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.089 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.090 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.091 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No VIF found with MAC fa:16:3e:bc:a0:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.092 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Using config drive
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.117 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:49:52
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2782051890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:52 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2246561523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:49:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:49:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.951 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating config drive at /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config
Nov 22 09:49:52 compute-0 nova_compute[253661]: 2025-11-22 09:49:52.958 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7z6g3r12 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.013 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.032 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.033 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.034 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.122 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7z6g3r12" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.150 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.155 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.187 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.188 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.204 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.308 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.309 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deleting local config drive /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config because it was imported into RBD.
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.3676] manager: (tap62358b95-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Nov 22 09:49:53 compute-0 kernel: tap62358b95-9f: entered promiscuous mode
Nov 22 09:49:53 compute-0 ovn_controller[152872]: 2025-11-22T09:49:53Z|01590|binding|INFO|Claiming lport 62358b95-9f4a-404c-8165-dc98c7e3b042 for this chassis.
Nov 22 09:49:53 compute-0 ovn_controller[152872]: 2025-11-22T09:49:53Z|01591|binding|INFO|62358b95-9f4a-404c-8165-dc98c7e3b042: Claiming fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.381 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a0:65 10.100.0.3'], port_security=['fa:16:3e:bc:a0:65 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '027bdffc-9e8e-4a33-9b06-844890912dc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-768d62d5-f993-4383-9edf-3d68f19e409c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ffacc46512445d8b5c24899a0053196', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aef6f84b-f5db-4e86-b5ce-afacad080f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eea3b39d-a626-45c2-a32c-ad267efc3243, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=62358b95-9f4a-404c-8165-dc98c7e3b042) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:53 compute-0 ceph-mon[75021]: pgmap v2645: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.384 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 62358b95-9f4a-404c-8165-dc98c7e3b042 in datapath 768d62d5-f993-4383-9edf-3d68f19e409c bound to our chassis
Nov 22 09:49:53 compute-0 ovn_controller[152872]: 2025-11-22T09:49:53Z|01592|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 ovn-installed in OVS
Nov 22 09:49:53 compute-0 ovn_controller[152872]: 2025-11-22T09:49:53Z|01593|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 up in Southbound
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.387 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 768d62d5-f993-4383-9edf-3d68f19e409c
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.401 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a321c9c-45f1-40ce-bed2-cddb695f51ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.402 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap768d62d5-f1 in ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.405 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap768d62d5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.405 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e421b477-4082-442f-8be0-4b910b09f835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.406 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8444ad-198c-44e8-99fc-7ae3b30afbb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 systemd-udevd[405582]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:49:53 compute-0 systemd-machined[215941]: New machine qemu-179-instance-00000093.
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.418 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ee90c2e0-81d8-474a-9189-790519d23742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000093.
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.4293] device (tap62358b95-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.4305] device (tap62358b95-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.438 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9de9c7c-9243-457e-a7dd-34c2639f528b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.473 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[68ce526f-26ce-485b-a670-f10e45f098c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.4820] manager: (tap768d62d5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/653)
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.481 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63182330-9c10-44cc-ab31-a1765c6a2d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6578b734-2bff-47bf-b675-17566d952546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.522 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9caf2b8b-801e-448d-9e4d-f8cb5cd84eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.5470] device (tap768d62d5-f0): carrier: link connected
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.551 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[630e83f7-4dc4-4f69-8706-55b2434d3c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f158187-47dc-4df5-bb5e-98b0b741f8dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap768d62d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784428, 'reachable_time': 28560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405614, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.589 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[387e6895-071c-4de9-90dc-64b505b51efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:b99d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784428, 'tstamp': 784428}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405615, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a12441-8fab-4f3f-9531-8ceb842df7cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap768d62d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784428, 'reachable_time': 28560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405616, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.645 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e08ab24b-b145-4a93-8ef7-e6c8fc4e5c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.741 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7676d62c-af9f-4cfa-8d37-ad13933e861f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap768d62d5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.743 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap768d62d5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:53 compute-0 kernel: tap768d62d5-f0: entered promiscuous mode
Nov 22 09:49:53 compute-0 NetworkManager[48920]: <info>  [1763804993.7454] manager: (tap768d62d5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.747 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap768d62d5-f0, col_values=(('external_ids', {'iface-id': 'e20358df-1297-4b78-9482-59841121a4d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:49:53 compute-0 ovn_controller[152872]: 2025-11-22T09:49:53Z|01594|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.749 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.750 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c84b2508-32f4-4481-87a8-11738260d556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.751 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-768d62d5-f993-4383-9edf-3d68f19e409c
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 768d62d5-f993-4383-9edf-3d68f19e409c
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:49:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.751 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'env', 'PROCESS_TAG=haproxy-768d62d5-f993-4383-9edf-3d68f19e409c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/768d62d5-f993-4383-9edf-3d68f19e409c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:49:53 compute-0 nova_compute[253661]: 2025-11-22 09:49:53.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.084 253665 DEBUG nova.compute.manager [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.086 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG nova.compute.manager [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Processing event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:49:54 compute-0 podman[405648]: 2025-11-22 09:49:54.16762839 +0000 UTC m=+0.089457111 container create 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 09:49:54 compute-0 podman[405648]: 2025-11-22 09:49:54.10971349 +0000 UTC m=+0.031542211 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:49:54 compute-0 nova_compute[253661]: 2025-11-22 09:49:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:54 compute-0 systemd[1]: Started libpod-conmon-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope.
Nov 22 09:49:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3a35e44d987f37dea2ee9bb4f5402e567d895145b73865842e34bbbf02d3f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:49:54 compute-0 podman[405648]: 2025-11-22 09:49:54.322578166 +0000 UTC m=+0.244406917 container init 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:49:54 compute-0 podman[405648]: 2025-11-22 09:49:54.32964739 +0000 UTC m=+0.251476111 container start 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:49:54 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : New worker (405668) forked
Nov 22 09:49:54 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : Loading success.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.067 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0670545, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.068 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Started (Lifecycle Event)
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.070 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.074 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.077 253665 INFO nova.virt.libvirt.driver [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance spawned successfully.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.078 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.098 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.104 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.109 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.110 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.111 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.111 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.112 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.112 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.143 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.144 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0672367, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.144 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Paused (Lifecycle Event)
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.172 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.175 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0732617, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Resumed (Lifecycle Event)
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.192 253665 INFO nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 11.22 seconds to spawn the instance on the hypervisor.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.193 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.196 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.203 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.253 253665 INFO nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 12.15 seconds to build instance.
Nov 22 09:49:55 compute-0 nova_compute[253661]: 2025-11-22 09:49:55.268 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:55 compute-0 ceph-mon[75021]: pgmap v2646: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:49:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:49:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:49:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:49:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:49:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:49:55 compute-0 ovn_controller[152872]: 2025-11-22T09:49:55Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:47:56 10.100.0.9
Nov 22 09:49:55 compute-0 ovn_controller[152872]: 2025-11-22T09:49:55Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:47:56 10.100.0.9
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.151 253665 DEBUG nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.153 253665 DEBUG nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.153 253665 WARNING nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received unexpected event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with vm_state active and task_state None.
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 22 09:49:56 compute-0 ceph-mon[75021]: pgmap v2647: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:49:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:49:56 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:49:56 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811333471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.720 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.788 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.789 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.967 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3192MB free_disk=59.94662857055664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.969 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:49:56 compute-0 nova_compute[253661]: 2025-11-22 09:49:56.969 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.040 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.040 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.116 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:49:57 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2811333471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:49:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307993980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.565 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.571 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.584 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.610 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:49:57 compute-0 nova_compute[253661]: 2025-11-22 09:49:57.611 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:49:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:49:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Nov 22 09:49:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/307993980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:49:58 compute-0 ceph-mon[75021]: pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Nov 22 09:49:58 compute-0 nova_compute[253661]: 2025-11-22 09:49:58.611 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:58 compute-0 nova_compute[253661]: 2025-11-22 09:49:58.637 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:58 compute-0 nova_compute[253661]: 2025-11-22 09:49:58.638 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:49:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:59.013 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:49:59 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:49:59.014 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:49:59 compute-0 nova_compute[253661]: 2025-11-22 09:49:59.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 09:50:01 compute-0 nova_compute[253661]: 2025-11-22 09:50:01.150 253665 DEBUG nova.compute.manager [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:01 compute-0 nova_compute[253661]: 2025-11-22 09:50:01.151 253665 DEBUG nova.compute.manager [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:50:01 compute-0 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:01 compute-0 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:01 compute-0 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:50:01 compute-0 ceph-mon[75021]: pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 09:50:02 compute-0 nova_compute[253661]: 2025-11-22 09:50:02.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:02 compute-0 nova_compute[253661]: 2025-11-22 09:50:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:02 compute-0 nova_compute[253661]: 2025-11-22 09:50:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 09:50:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011051429112131277 of space, bias 1.0, pg target 0.3315428733639383 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:50:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:50:03 compute-0 nova_compute[253661]: 2025-11-22 09:50:03.128 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:50:03 compute-0 nova_compute[253661]: 2025-11-22 09:50:03.129 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:03 compute-0 nova_compute[253661]: 2025-11-22 09:50:03.144 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:03 compute-0 ceph-mon[75021]: pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 09:50:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 09:50:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:05.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:05 compute-0 ceph-mon[75021]: pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.638 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.639 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.657 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.717 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.717 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.725 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.725 253665 INFO nova.compute.claims [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:50:05 compute-0 nova_compute[253661]: 2025-11-22 09:50:05.875 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 09:50:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:50:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469611548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.372 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.378 253665 DEBUG nova.compute.provider_tree [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:50:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2469611548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.393 253665 DEBUG nova.scheduler.client.report [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.421 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.422 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.471 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.472 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.498 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.514 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.599 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.601 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.602 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating image(s)
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.623 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.649 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.672 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.676 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.793 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.794 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.796 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.797 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.825 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.830 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 34b8226a-40bd-46d4-99ee-1be44f56e142_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:06 compute-0 nova_compute[253661]: 2025-11-22 09:50:06.980 253665 DEBUG nova.policy [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.127 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 34b8226a-40bd-46d4-99ee-1be44f56e142_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.201 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:07 compute-0 podman[405934]: 2025-11-22 09:50:07.386549294 +0000 UTC m=+0.060368172 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:50:07 compute-0 podman[405935]: 2025-11-22 09:50:07.408568848 +0000 UTC m=+0.090175038 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:50:07 compute-0 ceph-mon[75021]: pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.485 253665 DEBUG nova.objects.instance [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.500 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Ensure instance console log exists: /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:07 compute-0 nova_compute[253661]: 2025-11-22 09:50:07.502 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.045 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Successfully created port: fbec9736-25e9-44be-80ed-974c1de2bf0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:50:08 compute-0 ovn_controller[152872]: 2025-11-22T09:50:08Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 09:50:08 compute-0 ovn_controller[152872]: 2025-11-22T09:50:08Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 09:50:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Nov 22 09:50:08 compute-0 ceph-mon[75021]: pgmap v2653: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.821 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Successfully updated port: fbec9736-25e9-44be-80ed-974c1de2bf0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.837 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.838 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.838 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.907 253665 DEBUG nova.compute.manager [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.908 253665 DEBUG nova.compute.manager [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.908 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:08 compute-0 nova_compute[253661]: 2025-11-22 09:50:08.985 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:50:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.471 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.771 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.772 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance network_info: |[{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.772 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.773 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.776 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start _get_guest_xml network_info=[{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.780 253665 WARNING nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.786 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.788 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.796 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:50:10 compute-0 nova_compute[253661]: 2025-11-22 09:50:10.802 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:50:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444718572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.261 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.294 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.299 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:11 compute-0 ceph-mon[75021]: pgmap v2654: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Nov 22 09:50:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3444718572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:50:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:50:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968768954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.764 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.765 253665 DEBUG nova.virt.libvirt.vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:50:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.766 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.767 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.768 253665 DEBUG nova.objects.instance [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.785 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <uuid>34b8226a-40bd-46d4-99ee-1be44f56e142</uuid>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <name>instance-00000094</name>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-1818854274</nova:name>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:50:10</nova:creationTime>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <nova:port uuid="fbec9736-25e9-44be-80ed-974c1de2bf0d">
Nov 22 09:50:11 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:febc:980b" ipVersion="6"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:febc:980b" ipVersion="6"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <system>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="serial">34b8226a-40bd-46d4-99ee-1be44f56e142</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="uuid">34b8226a-40bd-46d4-99ee-1be44f56e142</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </system>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <os>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </os>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <features>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </features>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/34b8226a-40bd-46d4-99ee-1be44f56e142_disk">
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config">
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </source>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:50:11 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:bc:98:0b"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <target dev="tapfbec9736-25"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/console.log" append="off"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <video>
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </video>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:50:11 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:50:11 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:50:11 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:50:11 compute-0 nova_compute[253661]: </domain>
Nov 22 09:50:11 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Preparing to wait for external event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.787 253665 DEBUG nova.virt.libvirt.vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:50:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.788 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.788 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.789 253665 DEBUG os_vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbec9736-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbec9736-25, col_values=(('external_ids', {'iface-id': 'fbec9736-25e9-44be-80ed-974c1de2bf0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:98:0b', 'vm-uuid': '34b8226a-40bd-46d4-99ee-1be44f56e142'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:11 compute-0 NetworkManager[48920]: <info>  [1763805011.7959] manager: (tapfbec9736-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.802 253665 INFO os_vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25')
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:bc:98:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.841 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Using config drive
Nov 22 09:50:11 compute-0 nova_compute[253661]: 2025-11-22 09:50:11.859 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 238 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 75 op/s
Nov 22 09:50:12 compute-0 nova_compute[253661]: 2025-11-22 09:50:12.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/968768954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:50:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:50:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:50:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:50:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:50:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:13 compute-0 ceph-mon[75021]: pgmap v2655: 305 pgs: 305 active+clean; 238 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 75 op/s
Nov 22 09:50:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:50:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:50:13 compute-0 nova_compute[253661]: 2025-11-22 09:50:13.939 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating config drive at /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config
Nov 22 09:50:13 compute-0 nova_compute[253661]: 2025-11-22 09:50:13.947 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvu_tq_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.097 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvu_tq_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.125 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.128 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.325 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.326 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deleting local config drive /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config because it was imported into RBD.
Nov 22 09:50:14 compute-0 kernel: tapfbec9736-25: entered promiscuous mode
Nov 22 09:50:14 compute-0 NetworkManager[48920]: <info>  [1763805014.3722] manager: (tapfbec9736-25): new Tun device (/org/freedesktop/NetworkManager/Devices/656)
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:14 compute-0 ovn_controller[152872]: 2025-11-22T09:50:14Z|01595|binding|INFO|Claiming lport fbec9736-25e9-44be-80ed-974c1de2bf0d for this chassis.
Nov 22 09:50:14 compute-0 ovn_controller[152872]: 2025-11-22T09:50:14Z|01596|binding|INFO|fbec9736-25e9-44be-80ed-974c1de2bf0d: Claiming fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:14 compute-0 ovn_controller[152872]: 2025-11-22T09:50:14Z|01597|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d ovn-installed in OVS
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:14 compute-0 ovn_controller[152872]: 2025-11-22T09:50:14Z|01598|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d up in Southbound
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.415 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], port_security=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:febc:980b/64 2001:db8::f816:3eff:febc:980b/64', 'neutron:device_id': '34b8226a-40bd-46d4-99ee-1be44f56e142', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fbec9736-25e9-44be-80ed-974c1de2bf0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.416 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fbec9736-25e9-44be-80ed-974c1de2bf0d in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce bound to our chassis
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.418 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 09:50:14 compute-0 systemd-machined[215941]: New machine qemu-180-instance-00000094.
Nov 22 09:50:14 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000094.
Nov 22 09:50:14 compute-0 systemd-udevd[406129]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f541e6c3-6ffa-46c5-a706-e6e341c72537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 NetworkManager[48920]: <info>  [1763805014.4505] device (tapfbec9736-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:50:14 compute-0 NetworkManager[48920]: <info>  [1763805014.4527] device (tapfbec9736-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.474 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[55cdf009-6b10-49f1-be5b-511f7ca2aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.480 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[13f05718-effb-40f5-9487-8f8d2abb13ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c87faf24-482f-485f-ae62-cc3a56deb421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 podman[406127]: 2025-11-22 09:50:14.526256075 +0000 UTC m=+0.090996628 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1c5125-7178-4bbb-af22-3ad0d17c922b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 17873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c10a7487-fc0e-4e58-aebe-13a7f4c49963]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783283, 'tstamp': 783283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406168, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783287, 'tstamp': 783287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406168, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.547 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:14 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.761 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805014.761172, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.762 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Started (Lifecycle Event)
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.782 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.787 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805014.761292, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.787 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Paused (Lifecycle Event)
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.806 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.809 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:50:14 compute-0 nova_compute[253661]: 2025-11-22 09:50:14.831 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.178 253665 DEBUG nova.compute.manager [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.178 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG nova.compute.manager [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Processing event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.180 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.183 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805015.1828246, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.183 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Resumed (Lifecycle Event)
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.185 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.188 253665 INFO nova.virt.libvirt.driver [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance spawned successfully.
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.189 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.206 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.211 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.211 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.212 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.212 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.213 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.213 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.254 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.309 253665 INFO nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 8.71 seconds to spawn the instance on the hypervisor.
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.310 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:50:15 compute-0 ceph-mon[75021]: pgmap v2656: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.404 253665 INFO nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 9.71 seconds to build instance.
Nov 22 09:50:15 compute-0 nova_compute[253661]: 2025-11-22 09:50:15.429 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 09:50:16 compute-0 ceph-mon[75021]: pgmap v2657: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 09:50:16 compute-0 nova_compute[253661]: 2025-11-22 09:50:16.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.340 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.586 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.587 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.608 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.669 253665 DEBUG nova.compute.manager [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.676 253665 DEBUG nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.676 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 WARNING nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received unexpected event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with vm_state active and task_state None.
Nov 22 09:50:17 compute-0 nova_compute[253661]: 2025-11-22 09:50:17.716 253665 INFO nova.compute.manager [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] instance snapshotting
Nov 22 09:50:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:18 compute-0 nova_compute[253661]: 2025-11-22 09:50:18.125 253665 INFO nova.virt.libvirt.driver [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Beginning live snapshot process
Nov 22 09:50:18 compute-0 nova_compute[253661]: 2025-11-22 09:50:18.263 253665 DEBUG nova.virt.libvirt.imagebackend [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 09:50:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Nov 22 09:50:18 compute-0 nova_compute[253661]: 2025-11-22 09:50:18.545 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] creating snapshot(1c148e2a9d904db5a74c4a842a0649a6) on rbd image(027bdffc-9e8e-4a33-9b06-844890912dc9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 09:50:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 22 09:50:19 compute-0 ceph-mon[75021]: pgmap v2658: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Nov 22 09:50:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 22 09:50:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 22 09:50:19 compute-0 nova_compute[253661]: 2025-11-22 09:50:19.456 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] cloning vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk@1c148e2a9d904db5a74c4a842a0649a6 to images/39d89ba1-0559-4b27-814a-561c7d3add70 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 09:50:19 compute-0 nova_compute[253661]: 2025-11-22 09:50:19.733 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] flattening images/39d89ba1-0559-4b27-814a-561c7d3add70 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 09:50:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Nov 22 09:50:20 compute-0 ceph-mon[75021]: osdmap e316: 3 total, 3 up, 3 in
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.050 253665 DEBUG nova.compute.manager [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.052 253665 DEBUG nova.compute.manager [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.052 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.053 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.053 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:50:21 compute-0 nova_compute[253661]: 2025-11-22 09:50:21.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 101 KiB/s wr, 105 op/s
Nov 22 09:50:22 compute-0 nova_compute[253661]: 2025-11-22 09:50:22.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 09:50:25 compute-0 nova_compute[253661]: 2025-11-22 09:50:25.961 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:50:25 compute-0 nova_compute[253661]: 2025-11-22 09:50:25.962 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:26 compute-0 nova_compute[253661]: 2025-11-22 09:50:26.000 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 09:50:26 compute-0 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 09:50:26 compute-0 nova_compute[253661]: 2025-11-22 09:50:26.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:27 compute-0 nova_compute[253661]: 2025-11-22 09:50:27.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.993 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 23 op/s
Nov 22 09:50:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Nov 22 09:50:30 compute-0 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 09:50:31 compute-0 nova_compute[253661]: 2025-11-22 09:50:31.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 19 op/s
Nov 22 09:50:32 compute-0 nova_compute[253661]: 2025-11-22 09:50:32.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 14.7733 seconds
Nov 22 09:50:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:33 compute-0 ceph-mon[75021]: pgmap v2660: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Nov 22 09:50:33 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 11.245452881s
Nov 22 09:50:33 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 11.245452881s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 11.123932838s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 11.124113083s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.257675171s, txc = 0x56138410c900
Nov 22 09:50:33 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.877428532s
Nov 22 09:50:33 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.877429485s
Nov 22 09:50:33 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.877703667s, txc = 0x55a6cbc8a900
Nov 22 09:50:33 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.878297806s, txc = 0x55a6cc8b7500
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.123982430s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.123887062s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.716888428s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.715475082s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.732536316s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.717301369s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.715861320s
Nov 22 09:50:33 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.729718208s
Nov 22 09:50:33 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.245686531s, txc = 0x56207faafb00
Nov 22 09:50:34 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.962179661s, txc = 0x55a6cbd4db00
Nov 22 09:50:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 55 op/s
Nov 22 09:50:34 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.335518837s, txc = 0x561383c42600
Nov 22 09:50:35 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.676224709s, txc = 0x561383c42f00
Nov 22 09:50:35 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.269107819s, txc = 0x56138410d200
Nov 22 09:50:35 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.269145966s, txc = 0x561383331800
Nov 22 09:50:35 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.284407616s, txc = 0x561385106000
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2661: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 101 KiB/s wr, 105 op/s
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2662: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2663: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2664: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 23 op/s
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2665: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Nov 22 09:50:35 compute-0 ceph-mon[75021]: pgmap v2666: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 19 op/s
Nov 22 09:50:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 36 op/s
Nov 22 09:50:36 compute-0 nova_compute[253661]: 2025-11-22 09:50:36.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:37 compute-0 nova_compute[253661]: 2025-11-22 09:50:37.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:38 compute-0 ceph-mon[75021]: pgmap v2667: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 55 op/s
Nov 22 09:50:38 compute-0 ceph-mon[75021]: pgmap v2668: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 36 op/s
Nov 22 09:50:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:38 compute-0 podman[406318]: 2025-11-22 09:50:38.376352126 +0000 UTC m=+0.059799118 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 09:50:38 compute-0 podman[406319]: 2025-11-22 09:50:38.39677345 +0000 UTC m=+0.076908000 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:50:39 compute-0 ceph-mon[75021]: pgmap v2669: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:40 compute-0 sudo[406360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:40 compute-0 sudo[406360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:40 compute-0 sudo[406360]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:40 compute-0 ceph-mon[75021]: pgmap v2670: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:40 compute-0 sudo[406385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:50:41 compute-0 sudo[406385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:41 compute-0 sudo[406385]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:41 compute-0 sudo[406410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:41 compute-0 sudo[406410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:41 compute-0 sudo[406410]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:41 compute-0 sudo[406435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:50:41 compute-0 sudo[406435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:41 compute-0 sudo[406435]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 060260d5-3384-41d6-af25-daa9ad2f23e5 does not exist
Nov 22 09:50:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a42d49b3-5f86-4642-8511-eba7972b10ad does not exist
Nov 22 09:50:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 633a37a8-374f-4538-baf6-33cd85138f08 does not exist
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:50:41 compute-0 nova_compute[253661]: 2025-11-22 09:50:41.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:50:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:50:41 compute-0 sudo[406491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:41 compute-0 sudo[406491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:41 compute-0 sudo[406491]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:42 compute-0 sudo[406516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:50:42 compute-0 sudo[406516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:42 compute-0 sudo[406516]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:42 compute-0 sudo[406541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:42 compute-0 sudo[406541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:42 compute-0 sudo[406541]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:42 compute-0 sudo[406566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:50:42 compute-0 sudo[406566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:50:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:50:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:42 compute-0 nova_compute[253661]: 2025-11-22 09:50:42.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:42 compute-0 podman[406631]: 2025-11-22 09:50:42.462118067 +0000 UTC m=+0.026974207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:42 compute-0 podman[406631]: 2025-11-22 09:50:42.776619932 +0000 UTC m=+0.341476052 container create cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:50:43 compute-0 systemd[1]: Started libpod-conmon-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope.
Nov 22 09:50:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:43 compute-0 ceph-mon[75021]: pgmap v2671: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 09:50:43 compute-0 podman[406631]: 2025-11-22 09:50:43.487467416 +0000 UTC m=+1.052323556 container init cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:50:43 compute-0 podman[406631]: 2025-11-22 09:50:43.49413062 +0000 UTC m=+1.058986740 container start cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:50:43 compute-0 nifty_ganguly[406647]: 167 167
Nov 22 09:50:43 compute-0 systemd[1]: libpod-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope: Deactivated successfully.
Nov 22 09:50:43 compute-0 podman[406631]: 2025-11-22 09:50:43.766939237 +0000 UTC m=+1.331795357 container attach cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:50:43 compute-0 podman[406631]: 2025-11-22 09:50:43.767504171 +0000 UTC m=+1.332360291 container died cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e3c9ea16e0c0b64db721f23e7e0e1e6e2a9e0dc8edbf89b766d59b1a0b7615-merged.mount: Deactivated successfully.
Nov 22 09:50:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 22 09:50:44 compute-0 podman[406631]: 2025-11-22 09:50:44.537391401 +0000 UTC m=+2.102247521 container remove cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:50:44 compute-0 systemd[1]: libpod-conmon-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope: Deactivated successfully.
Nov 22 09:50:44 compute-0 podman[406666]: 2025-11-22 09:50:44.716305159 +0000 UTC m=+0.110420088 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:50:44 compute-0 podman[406692]: 2025-11-22 09:50:44.704025536 +0000 UTC m=+0.027376087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:44 compute-0 podman[406692]: 2025-11-22 09:50:44.803969593 +0000 UTC m=+0.127320124 container create 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 09:50:44 compute-0 systemd[1]: Started libpod-conmon-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope.
Nov 22 09:50:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:45 compute-0 ceph-mon[75021]: pgmap v2672: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 22 09:50:45 compute-0 podman[406692]: 2025-11-22 09:50:45.048540713 +0000 UTC m=+0.371891264 container init 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:50:45 compute-0 podman[406692]: 2025-11-22 09:50:45.056288135 +0000 UTC m=+0.379638666 container start 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:50:45 compute-0 podman[406692]: 2025-11-22 09:50:45.147024934 +0000 UTC m=+0.470375465 container attach 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:50:46 compute-0 stupefied_lovelace[406713]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:50:46 compute-0 stupefied_lovelace[406713]: --> relative data size: 1.0
Nov 22 09:50:46 compute-0 stupefied_lovelace[406713]: --> All data devices are unavailable
Nov 22 09:50:46 compute-0 systemd[1]: libpod-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Deactivated successfully.
Nov 22 09:50:46 compute-0 systemd[1]: libpod-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Consumed 1.027s CPU time.
Nov 22 09:50:46 compute-0 podman[406692]: 2025-11-22 09:50:46.197343151 +0000 UTC m=+1.520693682 container died 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:50:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 MiB/s wr, 31 op/s
Nov 22 09:50:46 compute-0 nova_compute[253661]: 2025-11-22 09:50:46.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa-merged.mount: Deactivated successfully.
Nov 22 09:50:47 compute-0 ovn_controller[152872]: 2025-11-22T09:50:47Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:98:0b 10.100.0.8
Nov 22 09:50:47 compute-0 ovn_controller[152872]: 2025-11-22T09:50:47Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:98:0b 10.100.0.8
Nov 22 09:50:47 compute-0 ceph-mon[75021]: pgmap v2673: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 MiB/s wr, 31 op/s
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:47 compute-0 podman[406692]: 2025-11-22 09:50:47.44380425 +0000 UTC m=+2.767154781 container remove 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:50:47 compute-0 systemd[1]: libpod-conmon-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Deactivated successfully.
Nov 22 09:50:47 compute-0 sudo[406566]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.509 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.509 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.510 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:50:47 compute-0 nova_compute[253661]: 2025-11-22 09:50:47.510 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:50:47 compute-0 sudo[406756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:47 compute-0 sudo[406756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:47 compute-0 sudo[406756]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:47 compute-0 sudo[406781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:50:47 compute-0 sudo[406781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:47 compute-0 sudo[406781]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:47 compute-0 sudo[406806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:47 compute-0 sudo[406806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:47 compute-0 sudo[406806]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:47 compute-0 sudo[406831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:50:47 compute-0 sudo[406831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.055258489 +0000 UTC m=+0.040875661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.238110993 +0000 UTC m=+0.223728055 container create e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 09:50:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 22 09:50:48 compute-0 systemd[1]: Started libpod-conmon-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope.
Nov 22 09:50:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.606529951 +0000 UTC m=+0.592147043 container init e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.613418501 +0000 UTC m=+0.599035573 container start e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:50:48 compute-0 crazy_chatelet[406912]: 167 167
Nov 22 09:50:48 compute-0 systemd[1]: libpod-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope: Deactivated successfully.
Nov 22 09:50:48 compute-0 ceph-mon[75021]: pgmap v2674: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.780358303 +0000 UTC m=+0.765975365 container attach e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:50:48 compute-0 podman[406896]: 2025-11-22 09:50:48.781522102 +0000 UTC m=+0.767139194 container died e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-690d0fda1d9dd49e18050356e6f5dbfe306ee9257e60e8e8ad0637115c7cbb26-merged.mount: Deactivated successfully.
Nov 22 09:50:49 compute-0 nova_compute[253661]: 2025-11-22 09:50:49.312 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:49 compute-0 nova_compute[253661]: 2025-11-22 09:50:49.325 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:49 compute-0 nova_compute[253661]: 2025-11-22 09:50:49.326 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:50:49 compute-0 podman[406896]: 2025-11-22 09:50:49.355488975 +0000 UTC m=+1.341106037 container remove e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:50:49 compute-0 systemd[1]: libpod-conmon-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope: Deactivated successfully.
Nov 22 09:50:49 compute-0 podman[406936]: 2025-11-22 09:50:49.540233737 +0000 UTC m=+0.027177322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:49 compute-0 podman[406936]: 2025-11-22 09:50:49.686640582 +0000 UTC m=+0.173584147 container create 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:50:49 compute-0 systemd[1]: Started libpod-conmon-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope.
Nov 22 09:50:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:50 compute-0 podman[406936]: 2025-11-22 09:50:50.086197208 +0000 UTC m=+0.573140803 container init 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:50:50 compute-0 podman[406936]: 2025-11-22 09:50:50.092375771 +0000 UTC m=+0.579319346 container start 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:50:50 compute-0 ovn_controller[152872]: 2025-11-22T09:50:50Z|01599|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 09:50:50 compute-0 nova_compute[253661]: 2025-11-22 09:50:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:50 compute-0 podman[406936]: 2025-11-22 09:50:50.299983768 +0000 UTC m=+0.786932803 container attach 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 09:50:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 09:50:50 compute-0 ceph-mon[75021]: pgmap v2675: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 09:50:50 compute-0 keen_keldysh[406953]: {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     "0": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "devices": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "/dev/loop3"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             ],
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_name": "ceph_lv0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_size": "21470642176",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "name": "ceph_lv0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "tags": {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_name": "ceph",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.crush_device_class": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.encrypted": "0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_id": "0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.vdo": "0"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             },
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "vg_name": "ceph_vg0"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         }
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     ],
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     "1": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "devices": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "/dev/loop4"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             ],
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_name": "ceph_lv1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_size": "21470642176",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "name": "ceph_lv1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "tags": {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_name": "ceph",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.crush_device_class": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.encrypted": "0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_id": "1",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.vdo": "0"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             },
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "vg_name": "ceph_vg1"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         }
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     ],
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     "2": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "devices": [
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "/dev/loop5"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             ],
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_name": "ceph_lv2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_size": "21470642176",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "name": "ceph_lv2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "tags": {
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.cluster_name": "ceph",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.crush_device_class": "",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.encrypted": "0",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osd_id": "2",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:                 "ceph.vdo": "0"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             },
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "type": "block",
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:             "vg_name": "ceph_vg2"
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:         }
Nov 22 09:50:50 compute-0 keen_keldysh[406953]:     ]
Nov 22 09:50:50 compute-0 keen_keldysh[406953]: }
Nov 22 09:50:50 compute-0 systemd[1]: libpod-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope: Deactivated successfully.
Nov 22 09:50:50 compute-0 podman[406936]: 2025-11-22 09:50:50.942958415 +0000 UTC m=+1.429901970 container died 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:50:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:51.185+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d-merged.mount: Deactivated successfully.
Nov 22 09:50:51 compute-0 podman[406936]: 2025-11-22 09:50:51.514010186 +0000 UTC m=+2.000953751 container remove 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 09:50:51 compute-0 systemd[1]: libpod-conmon-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope: Deactivated successfully.
Nov 22 09:50:51 compute-0 sudo[406831]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:51 compute-0 sudo[406976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:51 compute-0 sudo[406976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:51 compute-0 sudo[406976]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:51 compute-0 sudo[407001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:50:51 compute-0 sudo[407001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:51 compute-0 sudo[407001]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:51 compute-0 sudo[407026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:51 compute-0 sudo[407026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:51 compute-0 sudo[407026]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:51 compute-0 sudo[407051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:50:51 compute-0 sudo[407051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:51 compute-0 nova_compute[253661]: 2025-11-22 09:50:51.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.158878989 +0000 UTC m=+0.057333437 container create 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:50:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:52.181+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:52 compute-0 systemd[1]: Started libpod-conmon-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope.
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.125122755 +0000 UTC m=+0.023577223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.256851698 +0000 UTC m=+0.155306166 container init 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.263234666 +0000 UTC m=+0.161689114 container start 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.267843039 +0000 UTC m=+0.166297487 container attach 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:50:52 compute-0 brave_chaum[407132]: 167 167
Nov 22 09:50:52 compute-0 systemd[1]: libpod-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope: Deactivated successfully.
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.269115931 +0000 UTC m=+0.167570379 container died 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:50:52
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups']
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd43736adee5c64a61ab7566de62de8bc1f4674cbf6f3b34da08f26d76c3047-merged.mount: Deactivated successfully.
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 09:50:52 compute-0 nova_compute[253661]: 2025-11-22 09:50:52.394 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:52 compute-0 podman[407116]: 2025-11-22 09:50:52.39867001 +0000 UTC m=+0.297124458 container remove 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 09:50:52 compute-0 systemd[1]: libpod-conmon-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope: Deactivated successfully.
Nov 22 09:50:52 compute-0 podman[407156]: 2025-11-22 09:50:52.575742972 +0000 UTC m=+0.044091989 container create 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:50:52 compute-0 systemd[1]: Started libpod-conmon-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope.
Nov 22 09:50:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:50:52 compute-0 podman[407156]: 2025-11-22 09:50:52.556001885 +0000 UTC m=+0.024350922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:50:52 compute-0 podman[407156]: 2025-11-22 09:50:52.658991399 +0000 UTC m=+0.127340436 container init 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:50:52 compute-0 podman[407156]: 2025-11-22 09:50:52.665841037 +0000 UTC m=+0.134190054 container start 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:50:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:52 compute-0 podman[407156]: 2025-11-22 09:50:52.674397839 +0000 UTC m=+0.142746856 container attach 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:50:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:50:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:52 compute-0 ceph-mon[75021]: pgmap v2676: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 09:50:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:53.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:53 compute-0 nova_compute[253661]: 2025-11-22 09:50:53.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:53 compute-0 nova_compute[253661]: 2025-11-22 09:50:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:53 compute-0 nova_compute[253661]: 2025-11-22 09:50:53.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]: {
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_id": 1,
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "type": "bluestore"
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     },
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_id": 0,
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "type": "bluestore"
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     },
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_id": 2,
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:         "type": "bluestore"
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]:     }
Nov 22 09:50:53 compute-0 beautiful_proskuriakova[407172]: }
Nov 22 09:50:53 compute-0 systemd[1]: libpod-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope: Deactivated successfully.
Nov 22 09:50:53 compute-0 podman[407206]: 2025-11-22 09:50:53.696055486 +0000 UTC m=+0.024793122 container died 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68-merged.mount: Deactivated successfully.
Nov 22 09:50:53 compute-0 podman[407206]: 2025-11-22 09:50:53.811695192 +0000 UTC m=+0.140432828 container remove 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:50:53 compute-0 systemd[1]: libpod-conmon-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope: Deactivated successfully.
Nov 22 09:50:53 compute-0 sudo[407051]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:50:53 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:50:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:54.151+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:54 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:54 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 160e9175-67e6-4370-b321-bfe7b41c5d0e does not exist
Nov 22 09:50:54 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6addafff-1fc7-4600-b85d-ad37c31bee18 does not exist
Nov 22 09:50:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:54 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:54 compute-0 sudo[407221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:50:54 compute-0 sudo[407221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:54 compute-0 sudo[407221]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 22 09:50:54 compute-0 sudo[407246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:50:54 compute-0 sudo[407246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:50:54 compute-0 sudo[407246]: pam_unix(sudo:session): session closed for user root
Nov 22 09:50:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:55.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.226 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.226 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:55 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:50:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:50:55 compute-0 ceph-mon[75021]: pgmap v2677: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.446 253665 DEBUG nova.compute.manager [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.446 253665 DEBUG nova.compute.manager [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.447 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.448 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.448 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.515 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.517 253665 INFO nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Terminating instance
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.518 253665 DEBUG nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:50:55 compute-0 kernel: tapfbec9736-25 (unregistering): left promiscuous mode
Nov 22 09:50:55 compute-0 NetworkManager[48920]: <info>  [1763805055.6939] device (tapfbec9736-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:50:55 compute-0 ovn_controller[152872]: 2025-11-22T09:50:55Z|01600|binding|INFO|Releasing lport fbec9736-25e9-44be-80ed-974c1de2bf0d from this chassis (sb_readonly=0)
Nov 22 09:50:55 compute-0 ovn_controller[152872]: 2025-11-22T09:50:55Z|01601|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d down in Southbound
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 ovn_controller[152872]: 2025-11-22T09:50:55Z|01602|binding|INFO|Removing iface tapfbec9736-25 ovn-installed in OVS
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.725 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], port_security=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:febc:980b/64 2001:db8::f816:3eff:febc:980b/64', 'neutron:device_id': '34b8226a-40bd-46d4-99ee-1be44f56e142', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fbec9736-25e9-44be-80ed-974c1de2bf0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.727 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fbec9736-25e9-44be-80ed-974c1de2bf0d in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce unbound from our chassis
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.729 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 09:50:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:50:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:50:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.749 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf58ef30-e596-4377-b127-7490e68b0cf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:50:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:50:55 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 22 09:50:55 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Consumed 14.952s CPU time.
Nov 22 09:50:55 compute-0 systemd-machined[215941]: Machine qemu-180-instance-00000094 terminated.
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.787 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[519ae2b5-dc65-47e2-b9e1-0a0aba6425ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.791 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0969b1-164a-414b-a113-e5ed893063d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.832 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fbab38-38bd-45af-b84b-57c7d8bed5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94ff2fad-b771-4c75-9966-460509a71ca4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 17873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407282, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c320cd39-8afd-4930-b1ab-00c50462143b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783283, 'tstamp': 783283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407283, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783287, 'tstamp': 783287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407283, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.884 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:55 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.894 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.971 253665 INFO nova.virt.libvirt.driver [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance destroyed successfully.
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.972 253665 DEBUG nova.objects.instance [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.987 253665 DEBUG nova.virt.libvirt.vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:50:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:50:15Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.988 253665 DEBUG nova.network.os_vif_util [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.989 253665 DEBUG nova.network.os_vif_util [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.989 253665 DEBUG os_vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbec9736-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:55 compute-0 nova_compute[253661]: 2025-11-22 09:50:55.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.001 253665 INFO os_vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25')
Nov 22 09:50:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:56.150+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.161 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:50:56 compute-0 nova_compute[253661]: 2025-11-22 09:50:56.163 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:50:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:56 compute-0 ceph-mon[75021]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 604 KiB/s wr, 42 op/s
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:50:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:50:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:57.183+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:57 compute-0 ceph-mon[75021]: pgmap v2678: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 604 KiB/s wr, 42 op/s
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.631 253665 INFO nova.virt.libvirt.driver [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deleting instance files /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142_del
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.632 253665 INFO nova.virt.libvirt.driver [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deletion of /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142_del complete
Nov 22 09:50:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:50:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:50:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1355 writes, 6193 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s
                                           Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.2      1.46              0.23        38    0.038       0      0       0.0       0.0
                                             L6      1/0    8.13 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8     97.8     82.7      3.64              0.92        37    0.098    227K    19K       0.0       0.0
                                            Sum      1/0    8.13 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.8     69.8     71.4      5.10              1.15        75    0.068    227K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.0     74.8     75.1      0.70              0.14        10    0.070     39K   2532       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     97.8     82.7      3.64              0.92        37    0.098    227K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.3      1.46              0.23        37    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.062, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 5.1 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 40.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.0005 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2658,39.02 MB,12.8354%) FilterBlock(76,645.17 KB,0.207254%) IndexBlock(76,1.05 MB,0.345456%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.680 253665 INFO nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 2.16 seconds to destroy the instance on the hypervisor.
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.681 253665 DEBUG oslo.service.loopingcall [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.681 253665 DEBUG nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.682 253665 DEBUG nova.network.neutron [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:50:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:50:57 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194823485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.779 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.851 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.852 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:50:57 compute-0 nova_compute[253661]: 2025-11-22 09:50:57.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.060 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.061 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3150MB free_disk=59.85171127319336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.061 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.062 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.143 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 34b8226a-40bd-46d4-99ee-1be44f56e142 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:50:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:58.186+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.208 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.209 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:50:58.230 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.233 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.238 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.288 253665 DEBUG nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.289 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 WARNING nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received unexpected event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with vm_state active and task_state deleting.
Nov 22 09:50:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 605 KiB/s wr, 71 op/s
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.513 253665 DEBUG nova.network.neutron [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:50:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1194823485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:58 compute-0 ceph-mon[75021]: pgmap v2679: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 605 KiB/s wr, 71 op/s
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.532 253665 INFO nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 0.85 seconds to deallocate network for instance.
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.591 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:50:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:50:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571257852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.731 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.738 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.754 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.787 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.788 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.788 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:50:58 compute-0 nova_compute[253661]: 2025-11-22 09:50:58.868 253665 DEBUG oslo_concurrency.processutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:50:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:59.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:50:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:50:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665110708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.338 253665 DEBUG oslo_concurrency.processutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.345 253665 DEBUG nova.compute.provider_tree [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.362 253665 DEBUG nova.scheduler.client.report [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.387 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.415 253665 INFO nova.scheduler.client.report [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 34b8226a-40bd-46d4-99ee-1be44f56e142
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.482 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:50:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:50:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3571257852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:59 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3665110708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.788 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:50:59 compute-0 nova_compute[253661]: 2025-11-22 09:50:59.789 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:00.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 09:51:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:00 compute-0 ceph-mon[75021]: pgmap v2680: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 09:51:00 compute-0 nova_compute[253661]: 2025-11-22 09:51:00.861 253665 DEBUG nova.compute.manager [req-e9df2005-2592-48cd-aa30-2dc2d82cc05b req-8c7dcc9c-666a-4756-b827-2399579e9313 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-deleted-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:51:00 compute-0 nova_compute[253661]: 2025-11-22 09:51:00.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:01.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG nova.compute.manager [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG nova.compute.manager [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.477 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.477 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.551 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.551 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.553 253665 INFO nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Terminating instance
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.554 253665 DEBUG nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:51:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:01 compute-0 kernel: tap9c015dd3-d3 (unregistering): left promiscuous mode
Nov 22 09:51:01 compute-0 NetworkManager[48920]: <info>  [1763805061.6283] device (tap9c015dd3-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:51:01 compute-0 ovn_controller[152872]: 2025-11-22T09:51:01Z|01603|binding|INFO|Releasing lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 from this chassis (sb_readonly=0)
Nov 22 09:51:01 compute-0 ovn_controller[152872]: 2025-11-22T09:51:01Z|01604|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 down in Southbound
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 ovn_controller[152872]: 2025-11-22T09:51:01Z|01605|binding|INFO|Removing iface tap9c015dd3-d3 ovn-installed in OVS
Nov 22 09:51:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.648 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], port_security=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe7b:4756/64 2001:db8::f816:3eff:fe7b:4756/64', 'neutron:device_id': '3a65f84a-3072-4b94-b08a-0ba7b1529a07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9c015dd3-d340-40c6-bcc6-efef0a914d39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:51:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.649 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9c015dd3-d340-40c6-bcc6-efef0a914d39 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce unbound from our chassis
Nov 22 09:51:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.651 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:51:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.652 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a94064bf-1995-4da3-9dd6-9f197b61cfea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:01 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.652 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce namespace which is not needed anymore
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 22 09:51:01 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Consumed 16.245s CPU time.
Nov 22 09:51:01 compute-0 systemd-machined[215941]: Machine qemu-178-instance-00000092 terminated.
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.794 253665 INFO nova.virt.libvirt.driver [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance destroyed successfully.
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.795 253665 DEBUG nova.objects.instance [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.805 253665 DEBUG nova.virt.libvirt.vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:49:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:49:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.806 253665 DEBUG nova.network.os_vif_util [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.807 253665 DEBUG nova.network.os_vif_util [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.807 253665 DEBUG os_vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.810 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c015dd3-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : haproxy version is 2.8.14-c23fe91
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : path to executable is /usr/sbin/haproxy
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : Exiting Master process...
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : Exiting Master process...
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [ALERT]    (405216) : Current worker (405218) exited with code 143 (Terminated)
Nov 22 09:51:01 compute-0 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : All workers exited. Exiting... (0)
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:01 compute-0 nova_compute[253661]: 2025-11-22 09:51:01.816 253665 INFO os_vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3')
Nov 22 09:51:01 compute-0 systemd[1]: libpod-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope: Deactivated successfully.
Nov 22 09:51:01 compute-0 podman[407405]: 2025-11-22 09:51:01.824160934 +0000 UTC m=+0.054052865 container died e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e-userdata-shm.mount: Deactivated successfully.
Nov 22 09:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d720b8b444377e6f53dfc930e4e7925a5c19a33403959e8730a0a92df22382-merged.mount: Deactivated successfully.
Nov 22 09:51:01 compute-0 podman[407405]: 2025-11-22 09:51:01.91873844 +0000 UTC m=+0.148630361 container cleanup e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:51:01 compute-0 systemd[1]: libpod-conmon-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope: Deactivated successfully.
Nov 22 09:51:02 compute-0 podman[407462]: 2025-11-22 09:51:02.007227654 +0000 UTC m=+0.063723864 container remove e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[397c7cbf-f5b3-4ed5-8f56-61864c8a5937]: (4, ('Sat Nov 22 09:51:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce (e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e)\ne0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e\nSat Nov 22 09:51:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce (e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e)\ne0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.017 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13e8ac76-6547-4c18-8723-1d6e83806f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.019 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:02 compute-0 kernel: tap9b64819a-20: left promiscuous mode
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.101 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f02506b-07b6-4263-ad65-18bd7e5b0df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cd4e6b-f014-4fd2-b1ad-d752a5cec976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.118 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04efd892-4cdc-4592-8b35-ff325b9f6c91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.139 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be336b49-be83-46ef-b5dc-6f3c3498efa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783260, 'reachable_time': 20424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407475, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d9b64819a\x2d274e\x2d4eb7\x2d988b\x2dceb1ea73c9ce.mount: Deactivated successfully.
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.142 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:51:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.144 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[13ec574a-a328-48ae-8249-cce5466ad0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:02.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.391 253665 INFO nova.virt.libvirt.driver [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deleting instance files /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07_del
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.392 253665 INFO nova.virt.libvirt.driver [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deletion of /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07_del complete
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.555 253665 INFO nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG oslo.service.loopingcall [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG nova.network.neutron [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:51:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:02 compute-0 ceph-mon[75021]: pgmap v2681: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 09:51:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 36 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.925 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:51:02 compute-0 nova_compute[253661]: 2025-11-22 09:51:02.927 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001519945098326496 of space, bias 1.0, pg target 0.4559835294979488 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:51:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:51:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:03.325+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:03 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 36 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:03 compute-0 nova_compute[253661]: 2025-11-22 09:51:03.949 253665 DEBUG nova.network.neutron [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:51:03 compute-0 nova_compute[253661]: 2025-11-22 09:51:03.966 253665 INFO nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 1.41 seconds to deallocate network for instance.
Nov 22 09:51:04 compute-0 nova_compute[253661]: 2025-11-22 09:51:04.007 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:04 compute-0 nova_compute[253661]: 2025-11-22 09:51:04.008 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 66 KiB/s wr, 72 op/s
Nov 22 09:51:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:04.352+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:04 compute-0 ceph-mon[75021]: pgmap v2682: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 66 KiB/s wr, 72 op/s
Nov 22 09:51:04 compute-0 nova_compute[253661]: 2025-11-22 09:51:04.863 253665 DEBUG oslo_concurrency.processutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.007 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.008 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.008 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 WARNING nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received unexpected event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with vm_state deleted and task_state None.
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.010 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-deleted-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:51:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:05.307+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:51:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443649570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.347 253665 DEBUG oslo_concurrency.processutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.355 253665 DEBUG nova.compute.provider_tree [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.374 253665 DEBUG nova.scheduler.client.report [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.420 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.532 253665 INFO nova.scheduler.client.report [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.663 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.676 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.677 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:51:05 compute-0 nova_compute[253661]: 2025-11-22 09:51:05.715 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:51:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3443649570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:06.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 09:51:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:06 compute-0 ceph-mon[75021]: pgmap v2683: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 09:51:06 compute-0 nova_compute[253661]: 2025-11-22 09:51:06.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:07.280+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:07 compute-0 nova_compute[253661]: 2025-11-22 09:51:07.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:07 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:07 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:08.309+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 09:51:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:08 compute-0 ceph-mon[75021]: pgmap v2684: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 09:51:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:09.299+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:09 compute-0 podman[407499]: 2025-11-22 09:51:09.37671394 +0000 UTC m=+0.067166640 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:51:09 compute-0 podman[407500]: 2025-11-22 09:51:09.38322598 +0000 UTC m=+0.073061555 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Nov 22 09:51:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:10.305+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:10 compute-0 ceph-mon[75021]: pgmap v2685: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:10 compute-0 nova_compute[253661]: 2025-11-22 09:51:10.968 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805055.9676287, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:51:10 compute-0 nova_compute[253661]: 2025-11-22 09:51:10.969 253665 INFO nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Stopped (Lifecycle Event)
Nov 22 09:51:10 compute-0 nova_compute[253661]: 2025-11-22 09:51:10.989 253665 DEBUG nova.compute.manager [None req-7a8235dd-e4ad-4a76-802c-f6b005dfe445 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:51:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:11.338+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:11 compute-0 nova_compute[253661]: 2025-11-22 09:51:11.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:12.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:51:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:51:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:51:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:51:12 compute-0 nova_compute[253661]: 2025-11-22 09:51:12.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:12 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:12 compute-0 ceph-mon[75021]: pgmap v2686: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:51:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:51:12 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:13.397+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:14.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:15 compute-0 ceph-mon[75021]: pgmap v2687: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 09:51:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:15.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:15 compute-0 podman[407534]: 2025-11-22 09:51:15.399217973 +0000 UTC m=+0.092418573 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 09:51:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:16.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:16 compute-0 nova_compute[253661]: 2025-11-22 09:51:16.791 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805061.7896166, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:51:16 compute-0 nova_compute[253661]: 2025-11-22 09:51:16.792 253665 INFO nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Stopped (Lifecycle Event)
Nov 22 09:51:16 compute-0 nova_compute[253661]: 2025-11-22 09:51:16.832 253665 DEBUG nova.compute.manager [None req-29d5bc73-d091-44ae-9ca3-b6ee9e7326e9 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:51:16 compute-0 nova_compute[253661]: 2025-11-22 09:51:16.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:17 compute-0 ceph-mon[75021]: pgmap v2688: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:17.267+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:17 compute-0 ovn_controller[152872]: 2025-11-22T09:51:17Z|01606|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 09:51:17 compute-0 nova_compute[253661]: 2025-11-22 09:51:17.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:17 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:18 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:18.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:19 compute-0 ceph-mon[75021]: pgmap v2689: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:19.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:20.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:21.243+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:21 compute-0 ceph-mon[75021]: pgmap v2690: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:21 compute-0 nova_compute[253661]: 2025-11-22 09:51:21.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:22.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:22 compute-0 nova_compute[253661]: 2025-11-22 09:51:22.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:22 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:23.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:23 compute-0 ceph-mon[75021]: pgmap v2691: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:23 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:24.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:24 compute-0 ceph-mon[75021]: pgmap v2692: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:25.232+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:26.271+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:26 compute-0 ceph-mon[75021]: pgmap v2693: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:26 compute-0 nova_compute[253661]: 2025-11-22 09:51:26.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:27.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:27 compute-0 nova_compute[253661]: 2025-11-22 09:51:27.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:27 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.995 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:28.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:28 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:28 compute-0 ceph-mon[75021]: pgmap v2694: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:29.221+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:30.192+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:30 compute-0 ceph-mon[75021]: pgmap v2695: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:31.237+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:31 compute-0 nova_compute[253661]: 2025-11-22 09:51:31.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:32.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:32 compute-0 nova_compute[253661]: 2025-11-22 09:51:32.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 66 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:33 compute-0 ceph-mon[75021]: pgmap v2696: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:33 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 66 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:33.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:33 compute-0 nova_compute[253661]: 2025-11-22 09:51:33.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:34.171+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:35 compute-0 ceph-mon[75021]: pgmap v2697: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:35.218+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:36.225+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:36 compute-0 nova_compute[253661]: 2025-11-22 09:51:36.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:37.195+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:37 compute-0 ceph-mon[75021]: pgmap v2698: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:37 compute-0 nova_compute[253661]: 2025-11-22 09:51:37.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 71 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:38.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:38 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 71 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:39.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:39 compute-0 ceph-mon[75021]: pgmap v2699: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:40.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:40 compute-0 podman[407561]: 2025-11-22 09:51:40.361659379 +0000 UTC m=+0.051826811 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:51:40 compute-0 podman[407562]: 2025-11-22 09:51:40.380670039 +0000 UTC m=+0.063004097 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 09:51:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:40 compute-0 ceph-mon[75021]: pgmap v2700: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:41.207+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:41 compute-0 nova_compute[253661]: 2025-11-22 09:51:41.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:42.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:42 compute-0 nova_compute[253661]: 2025-11-22 09:51:42.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:42 compute-0 nova_compute[253661]: 2025-11-22 09:51:42.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 76 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:43.228+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:43 compute-0 ceph-mon[75021]: pgmap v2701: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:43 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 76 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:44.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:44 compute-0 ceph-mon[75021]: pgmap v2702: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:45.220+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:46.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.297 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:0e:e6 10.100.0.2 2001:db8::f816:3eff:fe94:ee6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe94:ee6/64', 'neutron:device_id': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=318882b5-a140-4600-8260-0040c058e797) old=Port_Binding(mac=['fa:16:3e:94:0e:e6 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:51:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.298 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 318882b5-a140-4600-8260-0040c058e797 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e updated
Nov 22 09:51:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.300 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58b95ca9-260c-49de-9bd2-c16568d51c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:51:46 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.301 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5562f392-e162-4353-99f5-f73070eaf8ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:51:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:46 compute-0 podman[407600]: 2025-11-22 09:51:46.426124679 +0000 UTC m=+0.122564668 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:51:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:46 compute-0 ceph-mon[75021]: pgmap v2703: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:46 compute-0 nova_compute[253661]: 2025-11-22 09:51:46.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:47.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:47 compute-0 nova_compute[253661]: 2025-11-22 09:51:47.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:47 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 81 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:47 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 81 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:48.229+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:48 compute-0 ceph-mon[75021]: pgmap v2704: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:49 compute-0 nova_compute[253661]: 2025-11-22 09:51:49.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:49 compute-0 nova_compute[253661]: 2025-11-22 09:51:49.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:51:49 compute-0 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:51:49 compute-0 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:51:49 compute-0 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:51:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:49.271+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:50.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:50 compute-0 ceph-mon[75021]: pgmap v2705: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:51 compute-0 nova_compute[253661]: 2025-11-22 09:51:51.308 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:51:51 compute-0 nova_compute[253661]: 2025-11-22 09:51:51.336 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:51:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:51.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:51 compute-0 nova_compute[253661]: 2025-11-22 09:51:51.337 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:51:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:51 compute-0 nova_compute[253661]: 2025-11-22 09:51:51.338 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:51 compute-0 nova_compute[253661]: 2025-11-22 09:51:51.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:51:52
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images']
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:51:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:52.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:52 compute-0 nova_compute[253661]: 2025-11-22 09:51:52.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 86 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:51:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:51:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:53 compute-0 ceph-mon[75021]: pgmap v2706: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:53 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 86 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:53.375+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:54 compute-0 nova_compute[253661]: 2025-11-22 09:51:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:54 compute-0 nova_compute[253661]: 2025-11-22 09:51:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:54 compute-0 nova_compute[253661]: 2025-11-22 09:51:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:51:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:54.408+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:54 compute-0 sudo[407626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:54 compute-0 sudo[407626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:54 compute-0 sudo[407626]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:54 compute-0 sudo[407651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:51:54 compute-0 sudo[407651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:54 compute-0 sudo[407651]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:54 compute-0 sudo[407676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:54 compute-0 sudo[407676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:54 compute-0 sudo[407676]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:54 compute-0 sudo[407701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:51:54 compute-0 sudo[407701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:55 compute-0 sudo[407701]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:55 compute-0 nova_compute[253661]: 2025-11-22 09:51:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 60ff2f3b-a4b1-4785-a814-2e98fe2812ac does not exist
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dcb66370-02b4-44b7-ac11-541774b5949e does not exist
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0eb21d3e-6abc-4f2e-a00b-c1589f316085 does not exist
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:51:55 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: pgmap v2707: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:51:55 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:51:55 compute-0 sudo[407757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:55 compute-0 sudo[407757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:55 compute-0 sudo[407757]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:55.427+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:55 compute-0 sudo[407782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:51:55 compute-0 sudo[407782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:55 compute-0 sudo[407782]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:55 compute-0 sudo[407807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:55 compute-0 sudo[407807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:55 compute-0 sudo[407807]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:55 compute-0 sudo[407832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:51:55 compute-0 sudo[407832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:51:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:55.922069303 +0000 UTC m=+0.031350405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.050026072 +0000 UTC m=+0.159307094 container create 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:51:56 compute-0 systemd[1]: Started libpod-conmon-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope.
Nov 22 09:51:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.211019908 +0000 UTC m=+0.320300950 container init 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.220220675 +0000 UTC m=+0.329501697 container start 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.228276604 +0000 UTC m=+0.337557626 container attach 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:51:56 compute-0 interesting_hofstadter[407914]: 167 167
Nov 22 09:51:56 compute-0 systemd[1]: libpod-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope: Deactivated successfully.
Nov 22 09:51:56 compute-0 conmon[407914]: conmon 739a10b089a7c7317d39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope/container/memory.events
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.233698848 +0000 UTC m=+0.342979860 container died 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba8d81932232959c5c25e09d9aae245c46c5f7ba500f6bd0e226376393e2bbd-merged.mount: Deactivated successfully.
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:56 compute-0 podman[407897]: 2025-11-22 09:51:56.368752703 +0000 UTC m=+0.478033745 container remove 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:51:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:56 compute-0 systemd[1]: libpod-conmon-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope: Deactivated successfully.
Nov 22 09:51:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:56.466+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:56 compute-0 podman[407940]: 2025-11-22 09:51:56.57029496 +0000 UTC m=+0.024486897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:51:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:51:56 compute-0 podman[407940]: 2025-11-22 09:51:56.830010602 +0000 UTC m=+0.284202519 container create bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:51:56 compute-0 nova_compute[253661]: 2025-11-22 09:51:56.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:56 compute-0 systemd[1]: Started libpod-conmon-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope.
Nov 22 09:51:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:51:57 compute-0 podman[407940]: 2025-11-22 09:51:57.016756384 +0000 UTC m=+0.470948331 container init bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:51:57 compute-0 podman[407940]: 2025-11-22 09:51:57.024017263 +0000 UTC m=+0.478209190 container start bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:51:57 compute-0 podman[407940]: 2025-11-22 09:51:57.065448037 +0000 UTC m=+0.519639964 container attach bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.388 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.390 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.408 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:51:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:57.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:57 compute-0 ceph-mon[75021]: pgmap v2708: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.512 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.513 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.518 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.519 253665 INFO nova.compute.claims [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.611 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.629 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.630 253665 DEBUG nova.compute.provider_tree [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.645 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.678 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:51:57 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 91 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:51:57 compute-0 nova_compute[253661]: 2025-11-22 09:51:57.729 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:58 compute-0 cranky_khayyam[407956]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:51:58 compute-0 cranky_khayyam[407956]: --> relative data size: 1.0
Nov 22 09:51:58 compute-0 cranky_khayyam[407956]: --> All data devices are unavailable
Nov 22 09:51:58 compute-0 systemd[1]: libpod-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Deactivated successfully.
Nov 22 09:51:58 compute-0 systemd[1]: libpod-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Consumed 1.056s CPU time.
Nov 22 09:51:58 compute-0 podman[407940]: 2025-11-22 09:51:58.129390643 +0000 UTC m=+1.583582560 container died bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:51:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:51:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125127614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.188 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.194 253665 DEBUG nova.compute.provider_tree [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.208 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.229 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.230 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.306 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.306 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.324 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.345 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:51:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.441 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.443 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.444 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating image(s)
Nov 22 09:51:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:58.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01-merged.mount: Deactivated successfully.
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.613 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:51:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:51:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499355171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.772 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.791 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.794 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.828 253665 DEBUG nova.policy [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.831 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.871 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:51:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:58 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 91 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:51:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3125127614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:58 compute-0 ceph-mon[75021]: pgmap v2709: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:58 compute-0 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.023 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.027 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d986b43b-ea74-42e0-903b-eef7a997e4ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.107 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.108 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.263 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.942649841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:51:59 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 22 09:51:59 compute-0 podman[407940]: 2025-11-22 09:51:59.321512714 +0000 UTC m=+2.775704631 container remove bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d986b43b-ea74-42e0-903b-eef7a997e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:51:59 compute-0 systemd[1]: libpod-conmon-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Deactivated successfully.
Nov 22 09:51:59 compute-0 sudo[407832]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.400 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:51:59 compute-0 sudo[408135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:59 compute-0 sudo[408135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:59 compute-0 sudo[408135]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:59 compute-0 sudo[408161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:51:59 compute-0 sudo[408161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:59.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:51:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:51:59 compute-0 sudo[408161]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:59 compute-0 sudo[408186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:51:59 compute-0 sudo[408186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:59 compute-0 sudo[408186]: pam_unix(sudo:session): session closed for user root
Nov 22 09:51:59 compute-0 sudo[408233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:51:59 compute-0 sudo[408233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:51:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:51:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985646846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.856 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.864 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.885 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.903 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:51:59 compute-0 nova_compute[253661]: 2025-11-22 09:51:59.903 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:51:59.917371084 +0000 UTC m=+0.020186673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:52:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:00.047 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:52:00 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:00.048 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1499355171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:52:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/985646846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.164 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Successfully created port: b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.192100872 +0000 UTC m=+0.294916441 container create bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:52:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:52:00 compute-0 systemd[1]: Started libpod-conmon-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope.
Nov 22 09:52:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.440659535 +0000 UTC m=+0.543475134 container init bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.448238357 +0000 UTC m=+0.551053926 container start bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 09:52:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:00.450+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:00 compute-0 sharp_chandrasekhar[408315]: 167 167
Nov 22 09:52:00 compute-0 systemd[1]: libpod-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope: Deactivated successfully.
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.477414366 +0000 UTC m=+0.580229955 container attach bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.478174396 +0000 UTC m=+0.580989965 container died bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.518 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d986b43b-ea74-42e0-903b-eef7a997e4ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-faad95947cc632d4432333ff8c81b735696584f6184133eb6ad4eaa9e1297ebc-merged.mount: Deactivated successfully.
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.579 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.896 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.914 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:00 compute-0 nova_compute[253661]: 2025-11-22 09:52:00.915 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:00 compute-0 podman[408297]: 2025-11-22 09:52:00.946634356 +0000 UTC m=+1.049449925 container remove bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:52:01 compute-0 systemd[1]: libpod-conmon-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope: Deactivated successfully.
Nov 22 09:52:01 compute-0 podman[408392]: 2025-11-22 09:52:01.170937894 +0000 UTC m=+0.090909307 container create c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:52:01 compute-0 podman[408392]: 2025-11-22 09:52:01.104704654 +0000 UTC m=+0.024676087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.233 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Successfully updated port: b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:52:01 compute-0 ceph-mon[75021]: pgmap v2710: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:52:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:52:01 compute-0 systemd[1]: Started libpod-conmon-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope.
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.287 253665 DEBUG nova.objects.instance [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:52:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.296 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Ensure instance console log exists: /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.310 253665 DEBUG nova.compute.manager [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.310 253665 DEBUG nova.compute.manager [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.311 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:01 compute-0 podman[408392]: 2025-11-22 09:52:01.342648168 +0000 UTC m=+0.262619601 container init c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:52:01 compute-0 podman[408392]: 2025-11-22 09:52:01.353616156 +0000 UTC m=+0.273587549 container start c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:52:01 compute-0 podman[408392]: 2025-11-22 09:52:01.383816882 +0000 UTC m=+0.303788305 container attach c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.389 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:52:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:01.445+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:01 compute-0 nova_compute[253661]: 2025-11-22 09:52:01.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:02 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:02.050 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:02 compute-0 nifty_curran[408423]: {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     "0": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "devices": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "/dev/loop3"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             ],
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_name": "ceph_lv0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_size": "21470642176",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "name": "ceph_lv0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "tags": {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_name": "ceph",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.crush_device_class": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.encrypted": "0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_id": "0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.vdo": "0"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             },
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "vg_name": "ceph_vg0"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         }
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     ],
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     "1": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "devices": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "/dev/loop4"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             ],
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_name": "ceph_lv1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_size": "21470642176",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "name": "ceph_lv1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "tags": {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_name": "ceph",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.crush_device_class": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.encrypted": "0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_id": "1",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.vdo": "0"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             },
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "vg_name": "ceph_vg1"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         }
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     ],
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     "2": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "devices": [
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "/dev/loop5"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             ],
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_name": "ceph_lv2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_size": "21470642176",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "name": "ceph_lv2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "tags": {
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.cluster_name": "ceph",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.crush_device_class": "",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.encrypted": "0",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osd_id": "2",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:                 "ceph.vdo": "0"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             },
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "type": "block",
Nov 22 09:52:02 compute-0 nifty_curran[408423]:             "vg_name": "ceph_vg2"
Nov 22 09:52:02 compute-0 nifty_curran[408423]:         }
Nov 22 09:52:02 compute-0 nifty_curran[408423]:     ]
Nov 22 09:52:02 compute-0 nifty_curran[408423]: }
Nov 22 09:52:02 compute-0 systemd[1]: libpod-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope: Deactivated successfully.
Nov 22 09:52:02 compute-0 podman[408392]: 2025-11-22 09:52:02.168945033 +0000 UTC m=+1.088916436 container died c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:52:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:52:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:02.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.599 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.630 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.631 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance network_info: |[{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.632 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.632 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.636 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start _get_guest_xml network_info=[{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.644 253665 WARNING nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.656 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.658 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.662 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.663 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.664 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.664 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.665 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:52:02 compute-0 nova_compute[253661]: 2025-11-22 09:52:02.673 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6-merged.mount: Deactivated successfully.
Nov 22 09:52:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 96 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:52:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:52:03 compute-0 podman[408392]: 2025-11-22 09:52:03.061826825 +0000 UTC m=+1.981798228 container remove c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:52:03 compute-0 systemd[1]: libpod-conmon-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope: Deactivated successfully.
Nov 22 09:52:03 compute-0 sudo[408233]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:52:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647914078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.193 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:03 compute-0 sudo[408469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:52:03 compute-0 sudo[408469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:03 compute-0 sudo[408469]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.219 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.225 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:03 compute-0 sudo[408512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:52:03 compute-0 sudo[408512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:03 compute-0 sudo[408512]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.277 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:03 compute-0 sudo[408540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:52:03 compute-0 sudo[408540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:03 compute-0 sudo[408540]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:03 compute-0 sudo[408565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:52:03 compute-0 sudo[408565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:03.455+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:03 compute-0 ceph-mon[75021]: pgmap v2711: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:52:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:03 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 96 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2647914078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:52:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226863174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.792 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.795 253665 DEBUG nova.virt.libvirt.vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:51:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.795 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.796 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.798 253665 DEBUG nova.objects.instance [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.817 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <uuid>d986b43b-ea74-42e0-903b-eef7a997e4ce</uuid>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <name>instance-00000095</name>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-355287164</nova:name>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:52:02</nova:creationTime>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <nova:port uuid="b10caa5b-0659-423b-9bcf-57a9a1ed30c0">
Nov 22 09:52:03 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe12:a97e" ipVersion="6"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <system>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="serial">d986b43b-ea74-42e0-903b-eef7a997e4ce</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="uuid">d986b43b-ea74-42e0-903b-eef7a997e4ce</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </system>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <os>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </os>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <features>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </features>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d986b43b-ea74-42e0-903b-eef7a997e4ce_disk">
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </source>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config">
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </source>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:52:03 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:12:a9:7e"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <target dev="tapb10caa5b-06"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/console.log" append="off"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <video>
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </video>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:52:03 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:52:03 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:52:03 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:52:03 compute-0 nova_compute[253661]: </domain>
Nov 22 09:52:03 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Preparing to wait for external event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.820 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.820 253665 DEBUG nova.virt.libvirt.vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:51:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.821 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.822 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.822 253665 DEBUG os_vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb10caa5b-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb10caa5b-06, col_values=(('external_ids', {'iface-id': 'b10caa5b-0659-423b-9bcf-57a9a1ed30c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:a9:7e', 'vm-uuid': 'd986b43b-ea74-42e0-903b-eef7a997e4ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:03 compute-0 NetworkManager[48920]: <info>  [1763805123.8336] manager: (tapb10caa5b-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.844 253665 INFO os_vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06')
Nov 22 09:52:03 compute-0 podman[408651]: 2025-11-22 09:52:03.814689417 +0000 UTC m=+0.032641628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:52:03 compute-0 podman[408651]: 2025-11-22 09:52:03.980882642 +0000 UTC m=+0.198834803 container create c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:12:a9:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:52:03 compute-0 nova_compute[253661]: 2025-11-22 09:52:03.994 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Using config drive
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.018 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:04 compute-0 systemd[1]: Started libpod-conmon-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope.
Nov 22 09:52:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:52:04 compute-0 podman[408651]: 2025-11-22 09:52:04.205652221 +0000 UTC m=+0.423604412 container init c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:52:04 compute-0 podman[408651]: 2025-11-22 09:52:04.218961519 +0000 UTC m=+0.436913720 container start c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:52:04 compute-0 xenodochial_jones[408688]: 167 167
Nov 22 09:52:04 compute-0 systemd[1]: libpod-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope: Deactivated successfully.
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.268 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.270 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.283 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.344 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating config drive at /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.350 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjq3z9yo_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:04 compute-0 podman[408651]: 2025-11-22 09:52:04.364684645 +0000 UTC m=+0.582636856 container attach c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:52:04 compute-0 podman[408651]: 2025-11-22 09:52:04.365113065 +0000 UTC m=+0.583065246 container died c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:52:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:04.483+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.498 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjq3z9yo_" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.524 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:04 compute-0 nova_compute[253661]: 2025-11-22 09:52:04.528 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1f2a45ed5ac16da445967a74145bcdbfca2fbe13f46ad9ecfce1d4ed05a2ee-merged.mount: Deactivated successfully.
Nov 22 09:52:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1226863174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:04 compute-0 ceph-mon[75021]: pgmap v2712: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:05 compute-0 podman[408651]: 2025-11-22 09:52:05.105631365 +0000 UTC m=+1.323583556 container remove c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:52:05 compute-0 systemd[1]: libpod-conmon-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope: Deactivated successfully.
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.252 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.255 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deleting local config drive /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config because it was imported into RBD.
Nov 22 09:52:05 compute-0 kernel: tapb10caa5b-06: entered promiscuous mode
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.3180] manager: (tapb10caa5b-06): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Nov 22 09:52:05 compute-0 ovn_controller[152872]: 2025-11-22T09:52:05Z|01607|binding|INFO|Claiming lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for this chassis.
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_controller[152872]: 2025-11-22T09:52:05Z|01608|binding|INFO|b10caa5b-0659-423b-9bcf-57a9a1ed30c0: Claiming fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.328 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], port_security=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe12:a97e/64', 'neutron:device_id': 'd986b43b-ea74-42e0-903b-eef7a997e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b10caa5b-0659-423b-9bcf-57a9a1ed30c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.329 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e bound to our chassis
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.330 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_controller[152872]: 2025-11-22T09:52:05Z|01609|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 ovn-installed in OVS
Nov 22 09:52:05 compute-0 ovn_controller[152872]: 2025-11-22T09:52:05Z|01610|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 up in Southbound
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.338 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d551d9d5-cc6c-4a62-805a-5285a6bd4f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.343 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58b95ca9-21 in ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 09:52:05 compute-0 podman[408752]: 2025-11-22 09:52:05.34638729 +0000 UTC m=+0.074553851 container create 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.346 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58b95ca9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb058665-4838-4ade-8885-9cf917d5d160]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f0619e-404d-4c5f-9257-9ecac8c09d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 systemd-udevd[408776]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.360 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ceba34db-5b60-495d-8e09-8d00eb16876e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.3682] device (tapb10caa5b-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.3698] device (tapb10caa5b-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:52:05 compute-0 systemd-machined[215941]: New machine qemu-181-instance-00000095.
Nov 22 09:52:05 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000095.
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.390 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94948bd3-25e4-40e5-b18b-28a848fb0531]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 systemd[1]: Started libpod-conmon-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope.
Nov 22 09:52:05 compute-0 podman[408752]: 2025-11-22 09:52:05.299442319 +0000 UTC m=+0.027608900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:52:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d19faf2-9d75-4b19-ab31-765de95e2a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.4368] manager: (tap58b95ca9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/659)
Nov 22 09:52:05 compute-0 systemd-udevd[408780]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.436 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1070addd-e679-48b9-8fba-621bee52c836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.473 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[40d0ba2c-fa37-4178-981b-89ebcb6b5fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 podman[408752]: 2025-11-22 09:52:05.477340011 +0000 UTC m=+0.205506572 container init 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.476 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[37e154a1-6cae-4d16-9c46-f04268bf93bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:05.484+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:05 compute-0 podman[408752]: 2025-11-22 09:52:05.487429657 +0000 UTC m=+0.215596218 container start 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:52:05 compute-0 podman[408752]: 2025-11-22 09:52:05.504055829 +0000 UTC m=+0.232222410 container attach 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.5078] device (tap58b95ca9-20): carrier: link connected
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[49d2232a-5831-4cba-b445-2db44cadfa2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.531 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38a3d71f-8b8b-4dd2-a923-2e9b8a9dc28f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408817, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG nova.compute.manager [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.546 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.546 253665 DEBUG nova.compute.manager [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Processing event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.549 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0cc1ab-30cd-4917-b5d6-68ae280ca954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:ee6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797624, 'tstamp': 797624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408818, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.565 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f38e643c-d8f7-4283-bcc6-cb8f88ad2995]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408819, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eced9b79-7379-4209-b2eb-785b356be4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.661 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7fb409-76be-42f8-9c04-e8842bed3762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.664 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.665 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 NetworkManager[48920]: <info>  [1763805125.6677] manager: (tap58b95ca9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 22 09:52:05 compute-0 kernel: tap58b95ca9-20: entered promiscuous mode
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_controller[152872]: 2025-11-22T09:52:05Z|01611|binding|INFO|Releasing lport 318882b5-a140-4600-8260-0040c058e797 from this chassis (sb_readonly=0)
Nov 22 09:52:05 compute-0 nova_compute[253661]: 2025-11-22 09:52:05.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.690 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.691 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[218a370d-dc7a-46a7-8574-399579ad77b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.692 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: global
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     log         /dev/log local0 debug
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     log-tag     haproxy-metadata-proxy-58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     user        root
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     group       root
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     maxconn     1024
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     pidfile     /var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     daemon
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: defaults
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     log global
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     mode http
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     option httplog
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     option dontlognull
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     option http-server-close
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     option forwardfor
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     retries                 3
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     timeout http-request    30s
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     timeout connect         30s
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     timeout client          32s
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     timeout server          32s
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     timeout http-keep-alive 30s
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: listen listener
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     bind 169.254.169.254:80
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:     http-request add-header X-OVN-Network-ID 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 09:52:05 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.693 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'env', 'PROCESS_TAG=haproxy-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58b95ca9-260c-49de-9bd2-c16568d51c7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 09:52:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:06 compute-0 podman[408876]: 2025-11-22 09:52:06.107986334 +0000 UTC m=+0.098811787 container create 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:52:06 compute-0 podman[408876]: 2025-11-22 09:52:06.032659354 +0000 UTC m=+0.023484837 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 09:52:06 compute-0 systemd[1]: Started libpod-conmon-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope.
Nov 22 09:52:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.167 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.166405, d986b43b-ea74-42e0-903b-eef7a997e4ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.168 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Started (Lifecycle Event)
Nov 22 09:52:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ea709103a9d5440c95c0740482f664efdf69029657dc148bca198fcea4ae2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.171 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.175 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.179 253665 INFO nova.virt.libvirt.driver [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance spawned successfully.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:52:06 compute-0 podman[408876]: 2025-11-22 09:52:06.192069256 +0000 UTC m=+0.182894739 container init 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.192 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:06 compute-0 podman[408876]: 2025-11-22 09:52:06.19890608 +0000 UTC m=+0.189731533 container start 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.201 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.204 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.205 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.205 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.206 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.206 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.207 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:06 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : New worker (408915) forked
Nov 22 09:52:06 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : Loading success.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.230 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.1667562, d986b43b-ea74-42e0-903b-eef7a997e4ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.230 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Paused (Lifecycle Event)
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.250 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.253 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.1744428, d986b43b-ea74-42e0-903b-eef7a997e4ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.253 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Resumed (Lifecycle Event)
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.270 253665 INFO nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 7.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.271 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.282 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.325 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.350 253665 INFO nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 8.86 seconds to build instance.
Nov 22 09:52:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:06 compute-0 nova_compute[253661]: 2025-11-22 09:52:06.369 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:06.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]: {
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_id": 1,
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "type": "bluestore"
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     },
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_id": 0,
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "type": "bluestore"
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     },
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_id": 2,
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:         "type": "bluestore"
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]:     }
Nov 22 09:52:06 compute-0 loving_heyrovsky[408784]: }
Nov 22 09:52:06 compute-0 systemd[1]: libpod-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Deactivated successfully.
Nov 22 09:52:06 compute-0 systemd[1]: libpod-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Consumed 1.068s CPU time.
Nov 22 09:52:06 compute-0 podman[408752]: 2025-11-22 09:52:06.569253061 +0000 UTC m=+1.297419622 container died 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90-merged.mount: Deactivated successfully.
Nov 22 09:52:06 compute-0 podman[408752]: 2025-11-22 09:52:06.688879955 +0000 UTC m=+1.417046516 container remove 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 09:52:06 compute-0 systemd[1]: libpod-conmon-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Deactivated successfully.
Nov 22 09:52:06 compute-0 sudo[408565]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:52:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:52:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:52:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:52:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 14a0f2a2-1e58-4472-8e10-cb832dfeca9a does not exist
Nov 22 09:52:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 234ce324-3e16-4b1e-bd62-9036c637b423 does not exist
Nov 22 09:52:06 compute-0 sudo[408964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:52:06 compute-0 sudo[408964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:06 compute-0 sudo[408964]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:06 compute-0 sudo[408989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:52:06 compute-0 sudo[408989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:52:06 compute-0 sudo[408989]: pam_unix(sudo:session): session closed for user root
Nov 22 09:52:07 compute-0 ceph-mon[75021]: pgmap v2713: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:52:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:52:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:07.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.633 253665 DEBUG nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:52:07 compute-0 nova_compute[253661]: 2025-11-22 09:52:07.633 253665 WARNING nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received unexpected event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with vm_state active and task_state None.
Nov 22 09:52:07 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 101 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:08 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 101 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:08.524+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:08 compute-0 nova_compute[253661]: 2025-11-22 09:52:08.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:09 compute-0 ceph-mon[75021]: pgmap v2714: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:09.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:10.498+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:11 compute-0 ceph-mon[75021]: pgmap v2715: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:11 compute-0 podman[409014]: 2025-11-22 09:52:11.35441898 +0000 UTC m=+0.051815875 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:52:11 compute-0 podman[409015]: 2025-11-22 09:52:11.366227719 +0000 UTC m=+0.061747207 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:52:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:11.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:52:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:52:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:52:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.469 253665 DEBUG nova.compute.manager [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG nova.compute.manager [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:52:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:12.523+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:12 compute-0 nova_compute[253661]: 2025-11-22 09:52:12.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:12 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 106 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:13 compute-0 ceph-mon[75021]: pgmap v2716: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:52:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:52:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:13 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 106 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:13.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:13 compute-0 nova_compute[253661]: 2025-11-22 09:52:13.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:14 compute-0 nova_compute[253661]: 2025-11-22 09:52:14.445 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:52:14 compute-0 nova_compute[253661]: 2025-11-22 09:52:14.446 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:14 compute-0 nova_compute[253661]: 2025-11-22 09:52:14.466 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:14.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:15 compute-0 ceph-mon[75021]: pgmap v2717: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:15.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:16 compute-0 ceph-mon[75021]: pgmap v2718: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:16.549+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:17 compute-0 podman[409053]: 2025-11-22 09:52:17.407729318 +0000 UTC m=+0.093820120 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:52:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:17.524+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:17 compute-0 nova_compute[253661]: 2025-11-22 09:52:17.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:17 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 111 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 22 09:52:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:18 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 111 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:18 compute-0 ceph-mon[75021]: pgmap v2719: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 22 09:52:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:18.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:18 compute-0 nova_compute[253661]: 2025-11-22 09:52:18.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:19.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:20 compute-0 ovn_controller[152872]: 2025-11-22T09:52:20Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:a9:7e 10.100.0.6
Nov 22 09:52:20 compute-0 ovn_controller[152872]: 2025-11-22T09:52:20Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:a9:7e 10.100.0.6
Nov 22 09:52:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 09:52:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:20 compute-0 ceph-mon[75021]: pgmap v2720: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 09:52:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:20.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:21.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 09:52:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:22.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:22 compute-0 nova_compute[253661]: 2025-11-22 09:52:22.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:22 compute-0 ceph-mon[75021]: pgmap v2721: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 09:52:22 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 116 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:23.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:23 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 116 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:23 compute-0 nova_compute[253661]: 2025-11-22 09:52:23.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:24.624+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:24 compute-0 ceph-mon[75021]: pgmap v2722: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:25.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:26.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:26 compute-0 ceph-mon[75021]: pgmap v2723: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:27.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:27 compute-0 nova_compute[253661]: 2025-11-22 09:52:27.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 121 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.995 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.996 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:28 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 121 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:28.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:28 compute-0 nova_compute[253661]: 2025-11-22 09:52:28.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:29 compute-0 ceph-mon[75021]: pgmap v2724: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:52:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:29.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 09:52:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:30.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:31 compute-0 ceph-mon[75021]: pgmap v2725: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 09:52:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:31.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 09:52:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:32 compute-0 nova_compute[253661]: 2025-11-22 09:52:32.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:32.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 126 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.775368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805152775441, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2008, "num_deletes": 258, "total_data_size": 2740122, "memory_usage": 2780040, "flush_reason": "Manual Compaction"}
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805152975506, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 2681387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54661, "largest_seqno": 56668, "table_properties": {"data_size": 2672672, "index_size": 5080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20786, "raw_average_key_size": 20, "raw_value_size": 2653984, "raw_average_value_size": 2651, "num_data_blocks": 224, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804974, "oldest_key_time": 1763804974, "file_creation_time": 1763805152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 200195 microseconds, and 7417 cpu microseconds.
Nov 22 09:52:32 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.975556) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 2681387 bytes OK
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.975592) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224424) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224478) EVENT_LOG_v1 {"time_micros": 1763805153224467, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2731364, prev total WAL file size 2731364, number of live WAL files 2.
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.225704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323537' seq:72057594037927935, type:22 .. '6C6F676D0032353131' seq:0, type:0; will stop at (end)
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2618KB)], [128(8326KB)]
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153225778, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11207850, "oldest_snapshot_seqno": -1}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7903 keys, 11080519 bytes, temperature: kUnknown
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153445987, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11080519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11027560, "index_size": 32110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206083, "raw_average_key_size": 26, "raw_value_size": 10886274, "raw_average_value_size": 1377, "num_data_blocks": 1257, "num_entries": 7903, "num_filter_entries": 7903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.446543) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11080519 bytes
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.590486) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.9 rd, 50.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 8435, records dropped: 532 output_compression: NoCompression
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.590526) EVENT_LOG_v1 {"time_micros": 1763805153590511, "job": 78, "event": "compaction_finished", "compaction_time_micros": 220373, "compaction_time_cpu_micros": 32527, "output_level": 6, "num_output_files": 1, "total_output_size": 11080519, "num_input_records": 8435, "num_output_records": 7903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153591270, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153593037, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.225549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:33 compute-0 ceph-mon[75021]: pgmap v2726: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 09:52:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:33 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 126 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:33.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:33 compute-0 nova_compute[253661]: 2025-11-22 09:52:33.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 745 KiB/s wr, 45 op/s
Nov 22 09:52:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:34.692+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:34 compute-0 ceph-mon[75021]: pgmap v2727: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 745 KiB/s wr, 45 op/s
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.259 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.260 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.277 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.365 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.365 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.377 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.377 253665 INFO nova.compute.claims [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:52:35 compute-0 nova_compute[253661]: 2025-11-22 09:52:35.530 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:35.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:52:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283250128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.019 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.028 253665 DEBUG nova.compute.provider_tree [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.051 253665 DEBUG nova.scheduler.client.report [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.082 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.083 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.145 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.145 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.167 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.187 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.272 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.273 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.274 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating image(s)
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.294 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.316 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.339 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.343 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.414 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.415 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.416 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.416 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.435 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:36 compute-0 nova_compute[253661]: 2025-11-22 09:52:36.439 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:36.718+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4283250128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:52:36 compute-0 ceph-mon[75021]: pgmap v2728: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Nov 22 09:52:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.005 253665 DEBUG nova.policy [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.240 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.327 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.448 253665 DEBUG nova.objects.instance [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.461 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.462 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Ensure instance console log exists: /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.463 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.463 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.464 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:37.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 131 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:37 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 131 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:37 compute-0 nova_compute[253661]: 2025-11-22 09:52:37.946 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Successfully created port: 43d41379-bbc3-4a75-be24-8943b05e7a8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:52:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:38.725+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.727 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Successfully updated port: 43d41379-bbc3-4a75-be24-8943b05e7a8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.739 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.740 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.740 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.816 253665 DEBUG nova.compute.manager [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.817 253665 DEBUG nova.compute.manager [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.817 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:38 compute-0 ceph-mon[75021]: pgmap v2729: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:38 compute-0 nova_compute[253661]: 2025-11-22 09:52:38.891 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:52:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:39.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.455 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.484 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance network_info: |[{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.488 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start _get_guest_xml network_info=[{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.492 253665 WARNING nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.498 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.498 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.506 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.507 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.507 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.513 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:40.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:40 compute-0 ceph-mon[75021]: pgmap v2730: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:52:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983528387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.971 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.991 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:40 compute-0 nova_compute[253661]: 2025-11-22 09:52:40.995 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:52:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67735884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.414 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.417 253665 DEBUG nova.virt.libvirt.vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:52:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.417 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.418 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.420 253665 DEBUG nova.objects.instance [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.432 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <uuid>566d6c71-a9a6-49f3-9f46-f9d31e71936b</uuid>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <name>instance-00000096</name>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:name>tempest-TestGettingAddress-server-262851612</nova:name>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:52:40</nova:creationTime>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <nova:port uuid="43d41379-bbc3-4a75-be24-8943b05e7a8e">
Nov 22 09:52:41 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:c99b" ipVersion="6"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <system>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="serial">566d6c71-a9a6-49f3-9f46-f9d31e71936b</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="uuid">566d6c71-a9a6-49f3-9f46-f9d31e71936b</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </system>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <os>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </os>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <features>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </features>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk">
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config">
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </source>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:52:41 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:95:c9:9b"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <target dev="tap43d41379-bb"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/console.log" append="off"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <video>
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </video>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:52:41 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:52:41 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:52:41 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:52:41 compute-0 nova_compute[253661]: </domain>
Nov 22 09:52:41 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.433 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Preparing to wait for external event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.434 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.434 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.435 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.435 253665 DEBUG nova.virt.libvirt.vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:52:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.436 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.436 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.437 253665 DEBUG os_vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.438 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.438 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.441 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43d41379-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43d41379-bb, col_values=(('external_ids', {'iface-id': '43d41379-bbc3-4a75-be24-8943b05e7a8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:c9:9b', 'vm-uuid': '566d6c71-a9a6-49f3-9f46-f9d31e71936b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:41 compute-0 NetworkManager[48920]: <info>  [1763805161.4444] manager: (tap43d41379-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.450 253665 INFO os_vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb')
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.498 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.499 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.499 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:95:c9:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.500 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Using config drive
Nov 22 09:52:41 compute-0 nova_compute[253661]: 2025-11-22 09:52:41.522 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:41 compute-0 podman[409336]: 2025-11-22 09:52:41.533504711 +0000 UTC m=+0.047639769 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:52:41 compute-0 podman[409338]: 2025-11-22 09:52:41.571290629 +0000 UTC m=+0.084017311 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 09:52:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:41.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3983528387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/67735884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:52:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.129 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating config drive at /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.133 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp57kqn_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.273 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp57kqn_8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.296 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.299 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:52:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.409 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.410 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.426 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.458 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.459 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deleting local config drive /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config because it was imported into RBD.
Nov 22 09:52:42 compute-0 kernel: tap43d41379-bb: entered promiscuous mode
Nov 22 09:52:42 compute-0 NetworkManager[48920]: <info>  [1763805162.5038] manager: (tap43d41379-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/662)
Nov 22 09:52:42 compute-0 ovn_controller[152872]: 2025-11-22T09:52:42Z|01612|binding|INFO|Claiming lport 43d41379-bbc3-4a75-be24-8943b05e7a8e for this chassis.
Nov 22 09:52:42 compute-0 ovn_controller[152872]: 2025-11-22T09:52:42Z|01613|binding|INFO|43d41379-bbc3-4a75-be24-8943b05e7a8e: Claiming fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.571 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], port_security=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:c99b/64', 'neutron:device_id': '566d6c71-a9a6-49f3-9f46-f9d31e71936b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43d41379-bbc3-4a75-be24-8943b05e7a8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.572 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43d41379-bbc3-4a75-be24-8943b05e7a8e in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e bound to our chassis
Nov 22 09:52:42 compute-0 ovn_controller[152872]: 2025-11-22T09:52:42Z|01614|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e up in Southbound
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.573 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 09:52:42 compute-0 ovn_controller[152872]: 2025-11-22T09:52:42Z|01615|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e ovn-installed in OVS
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 systemd-machined[215941]: New machine qemu-182-instance-00000096.
Nov 22 09:52:42 compute-0 systemd-udevd[409444]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[477f7fa3-6863-4a8e-b0fc-2604963e308e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 NetworkManager[48920]: <info>  [1763805162.5987] device (tap43d41379-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:52:42 compute-0 NetworkManager[48920]: <info>  [1763805162.5997] device (tap43d41379-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:52:42 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000096.
Nov 22 09:52:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:42.604+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.618 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0237e3f1-d00c-42ec-babc-6b70ac69fa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.620 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[216ebe6f-f201-437f-9362-89a0f4538066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4370038f-0aa1-4392-9e65-107e4a42453a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dcad49f0-d825-4794-b596-930edff52475]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409456, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[77382257-33f2-435e-8655-bc6c625ea204]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797636, 'tstamp': 797636}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409457, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797639, 'tstamp': 797639}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409457, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.683 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.684 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 nova_compute[253661]: 2025-11-22 09:52:42.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.686 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:52:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.688 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:52:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 136 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:43 compute-0 ceph-mon[75021]: pgmap v2731: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:52:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:43 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 136 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.061 253665 DEBUG nova.compute.manager [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.061 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG nova.compute.manager [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Processing event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.143 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.1428714, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.143 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Started (Lifecycle Event)
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.146 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.150 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.154 253665 INFO nova.virt.libvirt.driver [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance spawned successfully.
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.154 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.161 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.174 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.174 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.176 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.183 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.184 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.1430955, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.184 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Paused (Lifecycle Event)
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.203 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.14976, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.206 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Resumed (Lifecycle Event)
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.224 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.284 253665 INFO nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 7.01 seconds to spawn the instance on the hypervisor.
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.284 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.354 253665 INFO nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 8.02 seconds to build instance.
Nov 22 09:52:43 compute-0 nova_compute[253661]: 2025-11-22 09:52:43.370 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:43.630+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:52:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:44.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:45 compute-0 ceph-mon[75021]: pgmap v2732: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:52:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:45.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.702 253665 DEBUG nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:52:45 compute-0 nova_compute[253661]: 2025-11-22 09:52:45.702 253665 WARNING nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received unexpected event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with vm_state active and task_state None.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.202110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166202139, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 403, "num_deletes": 251, "total_data_size": 206532, "memory_usage": 215608, "flush_reason": "Manual Compaction"}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166219149, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 203430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56669, "largest_seqno": 57071, "table_properties": {"data_size": 201095, "index_size": 434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6179, "raw_average_key_size": 19, "raw_value_size": 196345, "raw_average_value_size": 606, "num_data_blocks": 19, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805153, "oldest_key_time": 1763805153, "file_creation_time": 1763805166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 17090 microseconds, and 1479 cpu microseconds.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.219195) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 203430 bytes OK
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.219218) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223864) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223898) EVENT_LOG_v1 {"time_micros": 1763805166223890, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223920) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 203940, prev total WAL file size 203940, number of live WAL files 2.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.224426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(198KB)], [131(10MB)]
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166224491, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11283949, "oldest_snapshot_seqno": -1}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7718 keys, 9601038 bytes, temperature: kUnknown
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166316487, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9601038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9550689, "index_size": 29948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 203043, "raw_average_key_size": 26, "raw_value_size": 9414158, "raw_average_value_size": 1219, "num_data_blocks": 1157, "num_entries": 7718, "num_filter_entries": 7718, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.316697) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9601038 bytes
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.321409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.6 rd, 104.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(102.7) write-amplify(47.2) OK, records in: 8227, records dropped: 509 output_compression: NoCompression
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.321435) EVENT_LOG_v1 {"time_micros": 1763805166321425, "job": 80, "event": "compaction_finished", "compaction_time_micros": 92059, "compaction_time_cpu_micros": 24556, "output_level": 6, "num_output_files": 1, "total_output_size": 9601038, "num_input_records": 8227, "num_output_records": 7718, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166321598, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166323235, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.224290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:52:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:52:46 compute-0 nova_compute[253661]: 2025-11-22 09:52:46.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:46.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:47 compute-0 ceph-mon[75021]: pgmap v2733: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 09:52:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:47.604+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:47 compute-0 nova_compute[253661]: 2025-11-22 09:52:47.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:47 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 141 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:48 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 141 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:48 compute-0 podman[409501]: 2025-11-22 09:52:48.422659936 +0000 UTC m=+0.108215996 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:52:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:48.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:49 compute-0 ceph-mon[75021]: pgmap v2734: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:52:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:49.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:49 compute-0 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG nova.compute.manager [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:52:49 compute-0 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG nova.compute.manager [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:52:49 compute-0 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:52:49 compute-0 nova_compute[253661]: 2025-11-22 09:52:49.901 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:52:49 compute-0 nova_compute[253661]: 2025-11-22 09:52:49.901 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:52:50 compute-0 nova_compute[253661]: 2025-11-22 09:52:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:50.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:51 compute-0 nova_compute[253661]: 2025-11-22 09:52:51.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:51 compute-0 nova_compute[253661]: 2025-11-22 09:52:51.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:52:51 compute-0 nova_compute[253661]: 2025-11-22 09:52:51.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:52:51 compute-0 ceph-mon[75021]: pgmap v2735: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:51 compute-0 nova_compute[253661]: 2025-11-22 09:52:51.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:51.618+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:52:52
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta']
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:52:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:52.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:52 compute-0 nova_compute[253661]: 2025-11-22 09:52:52.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:52 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 146 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:52:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:52:53 compute-0 ceph-mon[75021]: pgmap v2736: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:52:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:53 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 146 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:53.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:54 compute-0 nova_compute[253661]: 2025-11-22 09:52:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:54 compute-0 nova_compute[253661]: 2025-11-22 09:52:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:54 compute-0 nova_compute[253661]: 2025-11-22 09:52:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:52:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 22 09:52:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:54.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:55 compute-0 nova_compute[253661]: 2025-11-22 09:52:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:55 compute-0 nova_compute[253661]: 2025-11-22 09:52:55.275 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:52:55 compute-0 nova_compute[253661]: 2025-11-22 09:52:55.275 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:52:55 compute-0 nova_compute[253661]: 2025-11-22 09:52:55.297 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:52:55 compute-0 ceph-mon[75021]: pgmap v2737: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 22 09:52:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:52:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:52:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:52:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:52:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 22 09:52:56 compute-0 nova_compute[253661]: 2025-11-22 09:52:56.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:56 compute-0 ovn_controller[152872]: 2025-11-22T09:52:56Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:c9:9b 10.100.0.12
Nov 22 09:52:56 compute-0 ovn_controller[152872]: 2025-11-22T09:52:56Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:c9:9b 10.100.0.12
Nov 22 09:52:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:56.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:52:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:52:57 compute-0 ceph-mon[75021]: pgmap v2738: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 22 09:52:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:57.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:57 compute-0 nova_compute[253661]: 2025-11-22 09:52:57.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:52:57 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 151 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:57 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:52:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 09:52:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:58 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 151 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:52:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:58.572+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:59 compute-0 nova_compute[253661]: 2025-11-22 09:52:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:52:59 compute-0 ceph-mon[75021]: pgmap v2739: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 09:52:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:52:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:59.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:52:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:00.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:53:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470028346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.717 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.804 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.805 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.808 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.809 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.812 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.812 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.964 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.965 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2943MB free_disk=59.8516960144043GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.966 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:00 compute-0 nova_compute[253661]: 2025-11-22 09:53:00.966 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.045 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d986b43b-ea74-42e0-903b-eef7a997e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.124 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:01 compute-0 ceph-mon[75021]: pgmap v2740: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3470028346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:01.513+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:53:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3356258732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.563 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.569 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.589 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.760 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:53:01 compute-0 nova_compute[253661]: 2025-11-22 09:53:01.760 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:02 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3356258732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:02 compute-0 ceph-mon[75021]: pgmap v2741: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:02.562+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:02 compute-0 nova_compute[253661]: 2025-11-22 09:53:02.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:02 compute-0 nova_compute[253661]: 2025-11-22 09:53:02.761 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:02 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 156 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276865293514358 of space, bias 1.0, pg target 0.6830595880543074 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:53:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:53:03 compute-0 nova_compute[253661]: 2025-11-22 09:53:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:03 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 156 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:03.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:53:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:04 compute-0 ceph-mon[75021]: pgmap v2742: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 09:53:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:04.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:06 compute-0 nova_compute[253661]: 2025-11-22 09:53:06.454 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:06.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:06 compute-0 ceph-mon[75021]: pgmap v2743: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 09:53:06 compute-0 sudo[409572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:06 compute-0 sudo[409572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:06 compute-0 sudo[409572]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 sudo[409597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:53:07 compute-0 sudo[409597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 sudo[409597]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 sudo[409622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:07 compute-0 sudo[409622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 sudo[409622]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 sudo[409647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:53:07 compute-0 sudo[409647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.366 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.369 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.567 253665 DEBUG nova.compute.manager [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.567 253665 DEBUG nova.compute.manager [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:53:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:07.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.633 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.633 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.635 253665 INFO nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Terminating instance
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.636 253665 DEBUG nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:53:07 compute-0 sudo[409647]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 kernel: tap43d41379-bb (unregistering): left promiscuous mode
Nov 22 09:53:07 compute-0 NetworkManager[48920]: <info>  [1763805187.6931] device (tap43d41379-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:53:07 compute-0 ovn_controller[152872]: 2025-11-22T09:53:07Z|01616|binding|INFO|Releasing lport 43d41379-bbc3-4a75-be24-8943b05e7a8e from this chassis (sb_readonly=0)
Nov 22 09:53:07 compute-0 ovn_controller[152872]: 2025-11-22T09:53:07Z|01617|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e down in Southbound
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 ovn_controller[152872]: 2025-11-22T09:53:07Z|01618|binding|INFO|Removing iface tap43d41379-bb ovn-installed in OVS
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.709 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], port_security=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:c99b/64', 'neutron:device_id': '566d6c71-a9a6-49f3-9f46-f9d31e71936b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43d41379-bbc3-4a75-be24-8943b05e7a8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43d41379-bbc3-4a75-be24-8943b05e7a8e in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e unbound from our chassis
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.712 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7686f124-bcfe-45ff-94c6-57b59d5cdedb does not exist
Nov 22 09:53:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d671c9ee-ca13-4133-9509-35bc56bd136e does not exist
Nov 22 09:53:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b4c0fd4e-e6da-4545-b345-ffabb46d5309 does not exist
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44072d92-f1e4-414c-96a1-fd7a88ddbaf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:53:07 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 22 09:53:07 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Consumed 14.219s CPU time.
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.759 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[41fd2978-2bbc-4bb3-811a-1c19d3a4ee3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.763 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0dfbc3-57b6-42fa-8590-4b9daeadcfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 systemd-machined[215941]: Machine qemu-182-instance-00000096 terminated.
Nov 22 09:53:07 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 161 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:07 compute-0 sudo[409712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:07 compute-0 sudo[409712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.793 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97f1876e-8f2e-4447-8bf4-f4c345afbdec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 sudo[409712]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26b5e6cd-9904-4222-b900-d359c6d4b860]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409740, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.827 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36f92329-494d-493b-9148-40ac0f79b56b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797636, 'tstamp': 797636}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409753, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797639, 'tstamp': 797639}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409753, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.829 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.835 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.835 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:53:07 compute-0 sudo[409741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:53:07 compute-0 sudo[409741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 sudo[409741]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.868 253665 INFO nova.virt.libvirt.driver [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance destroyed successfully.
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.869 253665 DEBUG nova.objects.instance [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.877 253665 DEBUG nova.virt.libvirt.vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:52:43Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.878 253665 DEBUG nova.network.os_vif_util [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.878 253665 DEBUG nova.network.os_vif_util [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.879 253665 DEBUG os_vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d41379-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:07 compute-0 nova_compute[253661]: 2025-11-22 09:53:07.886 253665 INFO os_vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb')
Nov 22 09:53:07 compute-0 sudo[409775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:07 compute-0 sudo[409775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:07 compute-0 sudo[409775]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:07 compute-0 sudo[409822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:53:07 compute-0 sudo[409822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.271 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.273 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.273 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.274 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.275 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.275 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.322224734 +0000 UTC m=+0.043498684 container create 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.334 253665 INFO nova.virt.libvirt.driver [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deleting instance files /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b_del
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.334 253665 INFO nova.virt.libvirt.driver [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deletion of /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b_del complete
Nov 22 09:53:08 compute-0 systemd[1]: Started libpod-conmon-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope.
Nov 22 09:53:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.302151045 +0000 UTC m=+0.023425005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.401800722 +0000 UTC m=+0.123074682 container init 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.408372619 +0000 UTC m=+0.129646569 container start 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.411672722 +0000 UTC m=+0.132946672 container attach 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:08 compute-0 elastic_blackburn[409904]: 167 167
Nov 22 09:53:08 compute-0 systemd[1]: libpod-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope: Deactivated successfully.
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.414105604 +0000 UTC m=+0.135379554 container died 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-168a7b75f0a66bddaad31d4eb15df6627186efca7f2eb488a3dcb1bebec7e566-merged.mount: Deactivated successfully.
Nov 22 09:53:08 compute-0 podman[409888]: 2025-11-22 09:53:08.456446568 +0000 UTC m=+0.177720518 container remove 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:53:08 compute-0 systemd[1]: libpod-conmon-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope: Deactivated successfully.
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.508 253665 INFO nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG oslo.service.loopingcall [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:53:08 compute-0 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG nova.network.neutron [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:53:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:08.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:53:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:53:08 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 161 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:08 compute-0 ceph-mon[75021]: pgmap v2744: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Nov 22 09:53:08 compute-0 podman[409928]: 2025-11-22 09:53:08.670744692 +0000 UTC m=+0.053871146 container create d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:53:08 compute-0 systemd[1]: Started libpod-conmon-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope.
Nov 22 09:53:08 compute-0 podman[409928]: 2025-11-22 09:53:08.648098548 +0000 UTC m=+0.031225032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:08 compute-0 podman[409928]: 2025-11-22 09:53:08.773716894 +0000 UTC m=+0.156843368 container init d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:08 compute-0 podman[409928]: 2025-11-22 09:53:08.781363247 +0000 UTC m=+0.164489701 container start d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:53:08 compute-0 podman[409928]: 2025-11-22 09:53:08.785277457 +0000 UTC m=+0.168403941 container attach d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:09.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:09 compute-0 bold_wilson[409945]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:53:09 compute-0 bold_wilson[409945]: --> relative data size: 1.0
Nov 22 09:53:09 compute-0 bold_wilson[409945]: --> All data devices are unavailable
Nov 22 09:53:09 compute-0 systemd[1]: libpod-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope: Deactivated successfully.
Nov 22 09:53:09 compute-0 podman[409928]: 2025-11-22 09:53:09.825705491 +0000 UTC m=+1.208831945 container died d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f-merged.mount: Deactivated successfully.
Nov 22 09:53:09 compute-0 podman[409928]: 2025-11-22 09:53:09.897675486 +0000 UTC m=+1.280801950 container remove d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:53:09 compute-0 systemd[1]: libpod-conmon-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope: Deactivated successfully.
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.917 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.919 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.922 253665 DEBUG nova.network.neutron [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:09 compute-0 sudo[409822]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.936 253665 INFO nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 1.43 seconds to deallocate network for instance.
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.941 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.981 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:09 compute-0 nova_compute[253661]: 2025-11-22 09:53:09.982 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:10 compute-0 sudo[409989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:10 compute-0 sudo[409989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:10 compute-0 sudo[409989]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:10 compute-0 sudo[410014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:53:10 compute-0 sudo[410014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:10 compute-0 sudo[410014]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:10 compute-0 sudo[410039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:10 compute-0 sudo[410039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:10 compute-0 sudo[410039]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:10 compute-0 sudo[410064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:53:10 compute-0 sudo[410064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.298 253665 DEBUG oslo_concurrency.processutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.361 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.362 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 WARNING nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received unexpected event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with vm_state deleted and task_state None.
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-deleted-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 INFO nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Neutron deleted interface 43d41379-bbc3-4a75-be24-8943b05e7a8e; detaching it from the instance and deleting it from the info cache
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.365 253665 DEBUG nova.network.neutron [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.382 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Detach interface failed, port_id=43d41379-bbc3-4a75-be24-8943b05e7a8e, reason: Instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:53:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.544194862 +0000 UTC m=+0.067127404 container create ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:10.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:10 compute-0 systemd[1]: Started libpod-conmon-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope.
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.513991195 +0000 UTC m=+0.036923757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.632211564 +0000 UTC m=+0.155144116 container init ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.638689689 +0000 UTC m=+0.161622231 container start ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.642422873 +0000 UTC m=+0.165355415 container attach ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:53:10 compute-0 vibrant_bell[410163]: 167 167
Nov 22 09:53:10 compute-0 systemd[1]: libpod-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope: Deactivated successfully.
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.644861885 +0000 UTC m=+0.167794427 container died ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-337c04d95e716c14cd124fb8b5c51ad069140ad14b1365eb0497fad421f2ab82-merged.mount: Deactivated successfully.
Nov 22 09:53:10 compute-0 podman[410147]: 2025-11-22 09:53:10.686232724 +0000 UTC m=+0.209165266 container remove ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:53:10 compute-0 systemd[1]: libpod-conmon-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope: Deactivated successfully.
Nov 22 09:53:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:10 compute-0 ceph-mon[75021]: pgmap v2745: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 09:53:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:53:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385091223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.801 253665 DEBUG oslo_concurrency.processutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.807 253665 DEBUG nova.compute.provider_tree [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.832 253665 DEBUG nova.scheduler.client.report [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:53:10 compute-0 podman[410188]: 2025-11-22 09:53:10.856554033 +0000 UTC m=+0.040135529 container create 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.872 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:10 compute-0 systemd[1]: Started libpod-conmon-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope.
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.918 253665 INFO nova.scheduler.client.report [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b
Nov 22 09:53:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:10 compute-0 podman[410188]: 2025-11-22 09:53:10.838929046 +0000 UTC m=+0.022510342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:10 compute-0 podman[410188]: 2025-11-22 09:53:10.952517967 +0000 UTC m=+0.136099273 container init 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:53:10 compute-0 podman[410188]: 2025-11-22 09:53:10.960338345 +0000 UTC m=+0.143919621 container start 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:53:10 compute-0 podman[410188]: 2025-11-22 09:53:10.963873225 +0000 UTC m=+0.147454531 container attach 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:53:10 compute-0 nova_compute[253661]: 2025-11-22 09:53:10.997 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:11.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]: {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     "0": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "devices": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "/dev/loop3"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             ],
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_name": "ceph_lv0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_size": "21470642176",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "name": "ceph_lv0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "tags": {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_name": "ceph",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.crush_device_class": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.encrypted": "0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_id": "0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.vdo": "0"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             },
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "vg_name": "ceph_vg0"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         }
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     ],
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     "1": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "devices": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "/dev/loop4"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             ],
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_name": "ceph_lv1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_size": "21470642176",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "name": "ceph_lv1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "tags": {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_name": "ceph",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.crush_device_class": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.encrypted": "0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_id": "1",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.vdo": "0"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             },
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "vg_name": "ceph_vg1"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         }
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     ],
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     "2": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "devices": [
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "/dev/loop5"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             ],
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_name": "ceph_lv2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_size": "21470642176",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "name": "ceph_lv2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "tags": {
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.cluster_name": "ceph",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.crush_device_class": "",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.encrypted": "0",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osd_id": "2",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:                 "ceph.vdo": "0"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             },
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "type": "block",
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:             "vg_name": "ceph_vg2"
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:         }
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]:     ]
Nov 22 09:53:11 compute-0 eloquent_shockley[410204]: }
Nov 22 09:53:11 compute-0 systemd[1]: libpod-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope: Deactivated successfully.
Nov 22 09:53:11 compute-0 conmon[410204]: conmon 202d9112c254275e6cc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope/container/memory.events
Nov 22 09:53:11 compute-0 podman[410188]: 2025-11-22 09:53:11.735596405 +0000 UTC m=+0.919177681 container died 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:53:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/385091223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9-merged.mount: Deactivated successfully.
Nov 22 09:53:11 compute-0 podman[410188]: 2025-11-22 09:53:11.813410179 +0000 UTC m=+0.996991445 container remove 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:11 compute-0 systemd[1]: libpod-conmon-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope: Deactivated successfully.
Nov 22 09:53:11 compute-0 sudo[410064]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:11 compute-0 podman[410213]: 2025-11-22 09:53:11.852666884 +0000 UTC m=+0.093458181 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 22 09:53:11 compute-0 podman[410216]: 2025-11-22 09:53:11.859130188 +0000 UTC m=+0.099940576 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 09:53:11 compute-0 sudo[410262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:11 compute-0 sudo[410262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:11 compute-0 sudo[410262]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:11 compute-0 sudo[410287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:53:11 compute-0 sudo[410287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:11 compute-0 sudo[410287]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:12 compute-0 sudo[410312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:12 compute-0 sudo[410312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:12 compute-0 sudo[410312]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:12 compute-0 sudo[410337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:53:12 compute-0 sudo[410337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 09:53:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:53:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:53:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:53:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.409679439 +0000 UTC m=+0.041631596 container create 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:53:12 compute-0 systemd[1]: Started libpod-conmon-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope.
Nov 22 09:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.476389241 +0000 UTC m=+0.108341418 container init 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.389543549 +0000 UTC m=+0.021495736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.485303447 +0000 UTC m=+0.117255604 container start 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 09:53:12 compute-0 kind_perlman[410417]: 167 167
Nov 22 09:53:12 compute-0 systemd[1]: libpod-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope: Deactivated successfully.
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.490894608 +0000 UTC m=+0.122846755 container attach 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.491221147 +0000 UTC m=+0.123173304 container died 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c07c440e8eb984e1cab11feba0b6fbfecb6a3f0600934051405be9c25212ad2-merged.mount: Deactivated successfully.
Nov 22 09:53:12 compute-0 podman[410401]: 2025-11-22 09:53:12.548298184 +0000 UTC m=+0.180250341 container remove 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:53:12 compute-0 systemd[1]: libpod-conmon-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope: Deactivated successfully.
Nov 22 09:53:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:12.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:12 compute-0 podman[410442]: 2025-11-22 09:53:12.724164954 +0000 UTC m=+0.044220622 container create 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:53:12 compute-0 nova_compute[253661]: 2025-11-22 09:53:12.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:12 compute-0 ceph-mon[75021]: pgmap v2746: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 09:53:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:53:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:53:12 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 166 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:12 compute-0 systemd[1]: Started libpod-conmon-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope.
Nov 22 09:53:12 compute-0 podman[410442]: 2025-11-22 09:53:12.70349514 +0000 UTC m=+0.023550828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:53:12 compute-0 podman[410442]: 2025-11-22 09:53:12.826833348 +0000 UTC m=+0.146889036 container init 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:53:12 compute-0 podman[410442]: 2025-11-22 09:53:12.834107973 +0000 UTC m=+0.154163641 container start 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:53:12 compute-0 podman[410442]: 2025-11-22 09:53:12.83914093 +0000 UTC m=+0.159196598 container attach 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:53:12 compute-0 nova_compute[253661]: 2025-11-22 09:53:12.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG nova.compute.manager [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG nova.compute.manager [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.170 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.255 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.256 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.256 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.257 253665 INFO nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Terminating instance
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.258 253665 DEBUG nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:53:13 compute-0 kernel: tapb10caa5b-06 (unregistering): left promiscuous mode
Nov 22 09:53:13 compute-0 NetworkManager[48920]: <info>  [1763805193.3492] device (tapb10caa5b-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:53:13 compute-0 ovn_controller[152872]: 2025-11-22T09:53:13Z|01619|binding|INFO|Releasing lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 from this chassis (sb_readonly=0)
Nov 22 09:53:13 compute-0 ovn_controller[152872]: 2025-11-22T09:53:13Z|01620|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 down in Southbound
Nov 22 09:53:13 compute-0 ovn_controller[152872]: 2025-11-22T09:53:13Z|01621|binding|INFO|Removing iface tapb10caa5b-06 ovn-installed in OVS
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.368 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], port_security=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe12:a97e/64', 'neutron:device_id': 'd986b43b-ea74-42e0-903b-eef7a997e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b10caa5b-0659-423b-9bcf-57a9a1ed30c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.369 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e unbound from our chassis
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.371 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58b95ca9-260c-49de-9bd2-c16568d51c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d569b78-c595-4530-8fd8-27b9861f7c4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.372 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e namespace which is not needed anymore
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 22 09:53:13 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Consumed 15.889s CPU time.
Nov 22 09:53:13 compute-0 systemd-machined[215941]: Machine qemu-181-instance-00000095 terminated.
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.500 253665 INFO nova.virt.libvirt.driver [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance destroyed successfully.
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.501 253665 DEBUG nova.objects.instance [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.513 253665 DEBUG nova.virt.libvirt.vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:52:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:52:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.514 253665 DEBUG nova.network.os_vif_util [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.515 253665 DEBUG nova.network.os_vif_util [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.515 253665 DEBUG os_vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.517 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb10caa5b-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.525 253665 INFO os_vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06')
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : haproxy version is 2.8.14-c23fe91
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : path to executable is /usr/sbin/haproxy
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : Exiting Master process...
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : Exiting Master process...
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [ALERT]    (408912) : Current worker (408915) exited with code 143 (Terminated)
Nov 22 09:53:13 compute-0 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : All workers exited. Exiting... (0)
Nov 22 09:53:13 compute-0 systemd[1]: libpod-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope: Deactivated successfully.
Nov 22 09:53:13 compute-0 podman[410486]: 2025-11-22 09:53:13.550271634 +0000 UTC m=+0.059918171 container died 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47-userdata-shm.mount: Deactivated successfully.
Nov 22 09:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ea709103a9d5440c95c0740482f664efdf69029657dc148bca198fcea4ae2e-merged.mount: Deactivated successfully.
Nov 22 09:53:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:13.633+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:13 compute-0 podman[410486]: 2025-11-22 09:53:13.651755818 +0000 UTC m=+0.161402365 container cleanup 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 09:53:13 compute-0 systemd[1]: libpod-conmon-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope: Deactivated successfully.
Nov 22 09:53:13 compute-0 podman[410554]: 2025-11-22 09:53:13.737767089 +0000 UTC m=+0.052927014 container remove 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d2885377-6dab-4618-a6e8-55641f23981a]: (4, ('Sat Nov 22 09:53:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e (2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47)\n2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47\nSat Nov 22 09:53:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e (2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47)\n2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.749 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04fd4ec4-8687-45fd-9ce7-800c2ecf61a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.750 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.752 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 kernel: tap58b95ca9-20: left promiscuous mode
Nov 22 09:53:13 compute-0 nova_compute[253661]: 2025-11-22 09:53:13.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:13 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 166 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76caa4ad-6657-4f94-8f7b-e76bf0f4a189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.837 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc41b2b6-c7e4-45af-9915-dabd79c2e7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a300e06-9af3-438b-bc1e-bc7c539f31d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.858 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5fe158-6392-4490-9629-0bf35a37147b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797615, 'reachable_time': 23597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410582, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d58b95ca9\x2d260c\x2d49de\x2d9bd2\x2dc16568d51c7e.mount: Deactivated successfully.
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.869 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:53:13 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.869 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5eadc0-03f1-4dea-a9b5-700b03a7260b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:13 compute-0 awesome_robinson[410458]: {
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_id": 1,
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "type": "bluestore"
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     },
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_id": 0,
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "type": "bluestore"
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     },
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_id": 2,
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:         "type": "bluestore"
Nov 22 09:53:13 compute-0 awesome_robinson[410458]:     }
Nov 22 09:53:13 compute-0 awesome_robinson[410458]: }
Nov 22 09:53:13 compute-0 systemd[1]: libpod-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Deactivated successfully.
Nov 22 09:53:13 compute-0 systemd[1]: libpod-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Consumed 1.071s CPU time.
Nov 22 09:53:13 compute-0 podman[410442]: 2025-11-22 09:53:13.968619903 +0000 UTC m=+1.288675581 container died 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638-merged.mount: Deactivated successfully.
Nov 22 09:53:14 compute-0 podman[410442]: 2025-11-22 09:53:14.046241321 +0000 UTC m=+1.366296989 container remove 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:53:14 compute-0 systemd[1]: libpod-conmon-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Deactivated successfully.
Nov 22 09:53:14 compute-0 sudo[410337]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:53:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:53:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:14 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dd05be6f-8442-4f06-af87-598df44caa8e does not exist
Nov 22 09:53:14 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fbe91504-54f2-4464-8245-d6b96a981f85 does not exist
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.110 253665 INFO nova.virt.libvirt.driver [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deleting instance files /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce_del
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.111 253665 INFO nova.virt.libvirt.driver [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deletion of /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce_del complete
Nov 22 09:53:14 compute-0 sudo[410604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:53:14 compute-0 sudo[410604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:14 compute-0 sudo[410604]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.187 253665 INFO nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG oslo.service.loopingcall [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:53:14 compute-0 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG nova.network.neutron [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:53:14 compute-0 sudo[410629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:53:14 compute-0 sudo[410629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:53:14 compute-0 sudo[410629]: pam_unix(sudo:session): session closed for user root
Nov 22 09:53:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 25 KiB/s wr, 30 op/s
Nov 22 09:53:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:14.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:14 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:53:14 compute-0 ceph-mon[75021]: pgmap v2747: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 25 KiB/s wr, 30 op/s
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:15 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:15 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 09:53:15 compute-0 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:53:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:15.371 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:15.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.234 253665 DEBUG nova.network.neutron [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.252 253665 INFO nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 2.06 seconds to deallocate network for instance.
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.292 253665 DEBUG nova.compute.manager [req-a1c6e121-3de5-4d1e-aa6b-d02f98817c34 req-649c6371-e131-4c37-8048-0d66a198902d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-deleted-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.307 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.308 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.387 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.387 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.402 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.404 253665 DEBUG oslo_concurrency.processutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:16.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:16 compute-0 ceph-mon[75021]: pgmap v2748: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 22 09:53:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:53:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849791293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.888 253665 DEBUG oslo_concurrency.processutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.894 253665 DEBUG nova.compute.provider_tree [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.908 253665 DEBUG nova.scheduler.client.report [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.931 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:16 compute-0 nova_compute[253661]: 2025-11-22 09:53:16.958 253665 INFO nova.scheduler.client.report [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance d986b43b-ea74-42e0-903b-eef7a997e4ce
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.022 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.345 253665 DEBUG nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.347 253665 DEBUG nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.347 253665 WARNING nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received unexpected event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with vm_state deleted and task_state None.
Nov 22 09:53:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:17.722+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:17 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 171 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:17 compute-0 nova_compute[253661]: 2025-11-22 09:53:17.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/849791293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:17 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 171 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 25 KiB/s wr, 58 op/s
Nov 22 09:53:18 compute-0 nova_compute[253661]: 2025-11-22 09:53:18.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:18.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:18 compute-0 ceph-mon[75021]: pgmap v2749: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 25 KiB/s wr, 58 op/s
Nov 22 09:53:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:19 compute-0 podman[410677]: 2025-11-22 09:53:19.430533433 +0000 UTC m=+0.114633108 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:53:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:19.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:20.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:21 compute-0 ceph-mon[75021]: pgmap v2750: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:21.694+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:22.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:22 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 176 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:22 compute-0 nova_compute[253661]: 2025-11-22 09:53:22.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:22 compute-0 nova_compute[253661]: 2025-11-22 09:53:22.866 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805187.8656547, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:22 compute-0 nova_compute[253661]: 2025-11-22 09:53:22.867 253665 INFO nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Stopped (Lifecycle Event)
Nov 22 09:53:22 compute-0 nova_compute[253661]: 2025-11-22 09:53:22.886 253665 DEBUG nova.compute.manager [None req-6fab7022-6e32-4c11-b939-44bf327a2d90 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:23 compute-0 ceph-mon[75021]: pgmap v2751: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:23 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 176 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:23 compute-0 nova_compute[253661]: 2025-11-22 09:53:23.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:23.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:24 compute-0 ovn_controller[152872]: 2025-11-22T09:53:24Z|01622|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 09:53:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:24 compute-0 nova_compute[253661]: 2025-11-22 09:53:24.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:25 compute-0 ceph-mon[75021]: pgmap v2752: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 09:53:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:25.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 09:53:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:26.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:26 compute-0 ceph-mon[75021]: pgmap v2753: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 09:53:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:27.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:27 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 181 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:27 compute-0 nova_compute[253661]: 2025-11-22 09:53:27.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.996 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:28 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 181 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 09:53:28 compute-0 nova_compute[253661]: 2025-11-22 09:53:28.499 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805193.4984808, d986b43b-ea74-42e0-903b-eef7a997e4ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:28 compute-0 nova_compute[253661]: 2025-11-22 09:53:28.499 253665 INFO nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Stopped (Lifecycle Event)
Nov 22 09:53:28 compute-0 nova_compute[253661]: 2025-11-22 09:53:28.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:28 compute-0 nova_compute[253661]: 2025-11-22 09:53:28.526 253665 DEBUG nova.compute.manager [None req-2f86d777-6543-4f1c-ab65-07815c648ffa - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:28.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:29 compute-0 ceph-mon[75021]: pgmap v2754: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 09:53:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:29.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:30.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:31 compute-0 ceph-mon[75021]: pgmap v2755: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:31.710+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:32.727+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:32 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 186 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:32 compute-0 nova_compute[253661]: 2025-11-22 09:53:32.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:33 compute-0 ceph-mon[75021]: pgmap v2756: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:33 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 186 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:33 compute-0 nova_compute[253661]: 2025-11-22 09:53:33.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:33.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:34.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:35 compute-0 ceph-mon[75021]: pgmap v2757: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:35.732+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:36 compute-0 ceph-mon[75021]: pgmap v2758: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:36.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:37.722+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:37 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 191 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:37 compute-0 nova_compute[253661]: 2025-11-22 09:53:37.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:38 compute-0 nova_compute[253661]: 2025-11-22 09:53:38.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:38.683+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:38 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 191 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:38 compute-0 ceph-mon[75021]: pgmap v2759: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.424 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.425 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.438 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.522 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.523 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.532 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.532 253665 INFO nova.compute.claims [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:53:39 compute-0 nova_compute[253661]: 2025-11-22 09:53:39.689 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:39.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:53:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508051849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.158 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.165 253665 DEBUG nova.compute.provider_tree [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.180 253665 DEBUG nova.scheduler.client.report [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.207 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.208 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.251 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.252 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.273 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.295 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:53:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.421 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.422 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.422 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating image(s)
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.443 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.463 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.483 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.486 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.570 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.572 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.591 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.594 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:40.710+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3508051849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:53:40 compute-0 ceph-mon[75021]: pgmap v2760: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:40 compute-0 nova_compute[253661]: 2025-11-22 09:53:40.935 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.005 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] resizing rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.045 253665 DEBUG nova.policy [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd4a4c13ace640b98e8ff1360f0112e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.095 253665 DEBUG nova.objects.instance [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.109 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.110 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Ensure instance console log exists: /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:41 compute-0 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:41.694+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:42 compute-0 podman[410893]: 2025-11-22 09:53:42.355510892 +0000 UTC m=+0.049218379 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:53:42 compute-0 nova_compute[253661]: 2025-11-22 09:53:42.362 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Successfully created port: 57ba7057-9293-4134-9246-25ddf0b3af07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 09:53:42 compute-0 podman[410894]: 2025-11-22 09:53:42.369247191 +0000 UTC m=+0.060661479 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:53:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:42.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:42 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 196 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:42 compute-0 nova_compute[253661]: 2025-11-22 09:53:42.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:42 compute-0 ceph-mon[75021]: pgmap v2761: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:53:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:42 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 196 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:43 compute-0 nova_compute[253661]: 2025-11-22 09:53:43.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:43.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.268 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Successfully updated port: 57ba7057-9293-4134-9246-25ddf0b3af07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.303 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.304 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.305 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.377 253665 DEBUG nova.compute.manager [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-changed-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.378 253665 DEBUG nova.compute.manager [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Refreshing instance network info cache due to event network-changed-57ba7057-9293-4134-9246-25ddf0b3af07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.378 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:53:44 compute-0 nova_compute[253661]: 2025-11-22 09:53:44.452 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:53:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:44.739+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:45 compute-0 ceph-mon[75021]: pgmap v2762: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:53:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.422 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.502 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance network_info: |[{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Refreshing network info cache for port 57ba7057-9293-4134-9246-25ddf0b3af07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.506 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start _get_guest_xml network_info=[{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.510 253665 WARNING nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.519 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.519 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.523 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.527 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.530 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:45.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:53:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506280532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:53:45 compute-0 nova_compute[253661]: 2025-11-22 09:53:45.979 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.003 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.008 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2506280532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:53:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:53:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:53:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257460261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.484 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.488 253665 DEBUG nova.virt.libvirt.vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:53:40Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.488 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.489 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.490 253665 DEBUG nova.objects.instance [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.503 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <uuid>38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</uuid>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <name>instance-00000097</name>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:name>tempest-TestServerAdvancedOps-server-2052922089</nova:name>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:53:45</nova:creationTime>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:user uuid="dd4a4c13ace640b98e8ff1360f0112e8">tempest-TestServerAdvancedOps-654297744-project-member</nova:user>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:project uuid="b0b724ea495b4cef9085881ad518a4f0">tempest-TestServerAdvancedOps-654297744</nova:project>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <nova:ports>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <nova:port uuid="57ba7057-9293-4134-9246-25ddf0b3af07">
Nov 22 09:53:46 compute-0 nova_compute[253661]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:         </nova:port>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </nova:ports>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <system>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="serial">38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="uuid">38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </system>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <os>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </os>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <features>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </features>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk">
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config">
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </source>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:53:46 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <interface type="ethernet">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <mac address="fa:16:3e:5b:5c:38"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <mtu size="1442"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <target dev="tap57ba7057-92"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </interface>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/console.log" append="off"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <video>
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </video>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:53:46 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:53:46 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:53:46 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:53:46 compute-0 nova_compute[253661]: </domain>
Nov 22 09:53:46 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.505 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Preparing to wait for external event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.506 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.506 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.507 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.508 253665 DEBUG nova.virt.libvirt.vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAd
vancedOps-654297744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:53:40Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.509 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.510 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.511 253665 DEBUG os_vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.513 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.520 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.520 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:46 compute-0 NetworkManager[48920]: <info>  [1763805226.5233] manager: (tap57ba7057-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.532 253665 INFO os_vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.590 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No VIF found with MAC fa:16:3e:5b:5c:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.592 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Using config drive
Nov 22 09:53:46 compute-0 nova_compute[253661]: 2025-11-22 09:53:46.616 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:46.765+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.158 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating config drive at /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.168 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzczupkfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.331 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzczupkfh" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:47 compute-0 ceph-mon[75021]: pgmap v2763: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:53:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3257460261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:53:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.380 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.385 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:53:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:47.743+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:47 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 201 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.918 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.919 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deleting local config drive /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config because it was imported into RBD.
Nov 22 09:53:47 compute-0 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 09:53:47 compute-0 NetworkManager[48920]: <info>  [1763805227.9941] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Nov 22 09:53:47 compute-0 nova_compute[253661]: 2025-11-22 09:53:47.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:47 compute-0 ovn_controller[152872]: 2025-11-22T09:53:47Z|01623|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 09:53:47 compute-0 ovn_controller[152872]: 2025-11-22T09:53:47Z|01624|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 09:53:48 compute-0 systemd-udevd[411067]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:53:48 compute-0 systemd-machined[215941]: New machine qemu-183-instance-00000097.
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:48 compute-0 ovn_controller[152872]: 2025-11-22T09:53:48Z|01625|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.038 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:48 compute-0 NetworkManager[48920]: <info>  [1763805228.0423] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:53:48 compute-0 NetworkManager[48920]: <info>  [1763805228.0432] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:53:48 compute-0 ovn_controller[152872]: 2025-11-22T09:53:48Z|01626|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 09:53:48 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000097.
Nov 22 09:53:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.047 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.048 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis
Nov 22 09:53:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:53:48 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9241d-ccea-47b7-9503-48cc4f80dfd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:48 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 201 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.530 253665 DEBUG nova.compute.manager [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG nova.compute.manager [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Processing event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.584 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updated VIF entry in instance network info cache for port 57ba7057-9293-4134-9246-25ddf0b3af07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.584 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:48 compute-0 nova_compute[253661]: 2025-11-22 09:53:48.598 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:53:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 184K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.93 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4662 writes, 21K keys, 4662 commit groups, 1.0 writes per commit group, ingest: 26.32 MB, 0.04 MB/s
                                           Interval WAL: 4662 writes, 1573 syncs, 2.96 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:53:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:48.789+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.0745053, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.075 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.077 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.080 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.084 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance spawned successfully.
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.084 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.092 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.096 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.106 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.106 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.107 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.107 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.132 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.074664, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.149 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.153 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.0801437, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.153 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.169 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.197 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.254 253665 INFO nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 8.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.255 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.315 253665 INFO nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 9.83 seconds to build instance.
Nov 22 09:53:49 compute-0 nova_compute[253661]: 2025-11-22 09:53:49.334 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:49 compute-0 ceph-mon[75021]: pgmap v2764: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:49.800+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:50 compute-0 podman[411119]: 2025-11-22 09:53:50.427501612 +0000 UTC m=+0.097083712 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.619 253665 DEBUG nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.620 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:50 compute-0 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 WARNING nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 09:53:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:50 compute-0 ceph-mon[75021]: pgmap v2765: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:50.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:51 compute-0 nova_compute[253661]: 2025-11-22 09:53:51.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:51.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.260 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:53:52
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root']
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.596 253665 DEBUG nova.objects.instance [None req-be2d0fb1-1bb3-48e6-8ff9-d5fdde4e962a dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805232.6150796, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.631 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.634 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.650 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:53:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:52.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:53:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:53:52 compute-0 nova_compute[253661]: 2025-11-22 09:53:52.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 206 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:53 compute-0 ceph-mon[75021]: pgmap v2766: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 09:53:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:53.756+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:53 compute-0 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 09:53:53 compute-0 NetworkManager[48920]: <info>  [1763805233.8575] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:53:53 compute-0 ovn_controller[152872]: 2025-11-22T09:53:53Z|01627|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 09:53:53 compute-0 ovn_controller[152872]: 2025-11-22T09:53:53Z|01628|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 09:53:53 compute-0 nova_compute[253661]: 2025-11-22 09:53:53.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:53 compute-0 ovn_controller[152872]: 2025-11-22T09:53:53Z|01629|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 09:53:53 compute-0 nova_compute[253661]: 2025-11-22 09:53:53.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:53 compute-0 nova_compute[253661]: 2025-11-22 09:53:53.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.897 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.898 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis
Nov 22 09:53:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.899 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:53:53 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.900 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e14bd9c-dccc-4e38-9901-b378f2a3f29c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:53 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 09:53:53 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Consumed 4.697s CPU time.
Nov 22 09:53:53 compute-0 systemd-machined[215941]: Machine qemu-183-instance-00000097 terminated.
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.035 253665 DEBUG nova.compute.manager [None req-be2d0fb1-1bb3-48e6-8ff9-d5fdde4e962a dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.287 253665 DEBUG nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.288 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.288 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 DEBUG nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 WARNING nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state None.
Nov 22 09:53:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:53:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:54 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 206 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:54 compute-0 ceph-mon[75021]: pgmap v2767: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.668 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.687 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:53:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:54.802+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.869 253665 INFO nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Resuming
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.870 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'flavor' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.900 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.901 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:53:54 compute-0 nova_compute[253661]: 2025-11-22 09:53:54.901 253665 DEBUG nova.network.neutron [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:53:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:53:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.2 total, 600.0 interval
                                           Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.85 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4300 writes, 17K keys, 4300 commit groups, 1.0 writes per commit group, ingest: 22.17 MB, 0.04 MB/s
                                           Interval WAL: 4300 writes, 1636 syncs, 2.63 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:53:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:53:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:53:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:53:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:53:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:53:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:55.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.371 253665 DEBUG nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.373 253665 WARNING nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.
Nov 22 09:53:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:53:56 compute-0 nova_compute[253661]: 2025-11-22 09:53:56.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:56.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:53:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:53:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:53:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:53:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:53:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:57 compute-0 ceph-mon[75021]: pgmap v2768: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:53:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:57.778+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:57 compute-0 nova_compute[253661]: 2025-11-22 09:53:57.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.053 253665 DEBUG nova.network.neutron [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.068 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.072 253665 DEBUG nova.virt.libvirt.vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:53:54Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.073 253665 DEBUG nova.network.os_vif_util [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.073 253665 DEBUG nova.network.os_vif_util [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG os_vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.075 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.078 253665 INFO os_vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.115 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:58 compute-0 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 09:53:58 compute-0 ovn_controller[152872]: 2025-11-22T09:53:58Z|01630|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 09:53:58 compute-0 ovn_controller[152872]: 2025-11-22T09:53:58Z|01631|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 09:53:58 compute-0 NetworkManager[48920]: <info>  [1763805238.1816] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 ovn_controller[152872]: 2025-11-22T09:53:58Z|01632|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:53:58 compute-0 systemd-udevd[411177]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:53:58 compute-0 systemd-machined[215941]: New machine qemu-184-instance-00000097.
Nov 22 09:53:58 compute-0 NetworkManager[48920]: <info>  [1763805238.2149] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:53:58 compute-0 NetworkManager[48920]: <info>  [1763805238.2156] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:53:58 compute-0 ovn_controller[152872]: 2025-11-22T09:53:58Z|01633|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 09:53:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.224 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:53:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.225 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis
Nov 22 09:53:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.226 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:53:58 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.226 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[267168d6-f43b-46ac-804c-b2a2bfa93b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:53:58 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Nov 22 09:53:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 211 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:53:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:53:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:58.738+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805238.7686427, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.784 253665 DEBUG nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.784 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.786 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.790 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.795 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance running successfully.
Nov 22 09:53:58 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.798 253665 DEBUG nova.virt.libvirt.guest [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.798 253665 DEBUG nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.804 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805238.771586, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.804 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.824 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.827 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.843 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.989 253665 DEBUG nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.991 253665 DEBUG nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:53:58 compute-0 nova_compute[253661]: 2025-11-22 09:53:58.991 253665 WARNING nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 09:53:59 compute-0 nova_compute[253661]: 2025-11-22 09:53:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:53:59 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 211 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:53:59 compute-0 ceph-mon[75021]: pgmap v2769: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 09:53:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:53:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:59.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:53:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:00 compute-0 nova_compute[253661]: 2025-11-22 09:54:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 09:54:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 144K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.92 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3530 writes, 14K keys, 3530 commit groups, 1.0 writes per commit group, ingest: 17.14 MB, 0.03 MB/s
                                           Interval WAL: 3530 writes, 1327 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 09:54:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 09:54:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:00.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.076 253665 DEBUG nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.076 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 WARNING nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.316 253665 DEBUG nova.objects.instance [None req-85dd401f-7155-4360-9fd6-45ac5bf8d048 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.337 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805241.3361979, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.339 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.362 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.367 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 09:54:01 compute-0 ceph-mon[75021]: pgmap v2770: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 09:54:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:01 compute-0 nova_compute[253661]: 2025-11-22 09:54:01.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:01.645+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 09:54:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 09:54:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:02.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:02 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:54:02 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/780034624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:02 compute-0 ceph-mon[75021]: pgmap v2771: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 09:54:02 compute-0 nova_compute[253661]: 2025-11-22 09:54:02.786 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:54:02 compute-0 nova_compute[253661]: 2025-11-22 09:54:02.862 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:03 compute-0 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 09:54:03 compute-0 NetworkManager[48920]: <info>  [1763805243.0230] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:03 compute-0 ovn_controller[152872]: 2025-11-22T09:54:03Z|01634|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 09:54:03 compute-0 ovn_controller[152872]: 2025-11-22T09:54:03Z|01635|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 09:54:03 compute-0 ovn_controller[152872]: 2025-11-22T09:54:03Z|01636|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.037 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011090219443901804 of space, bias 1.0, pg target 0.3327065833170541 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:54:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:54:03 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 09:54:03 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 3.162s CPU time.
Nov 22 09:54:03 compute-0 systemd-machined[215941]: Machine qemu-184-instance-00000097 terminated.
Nov 22 09:54:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.125 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:54:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.126 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis
Nov 22 09:54:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.127 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:54:03 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.127 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f81fbeef-5c02-407f-b0df-38521b550b3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.144 253665 DEBUG nova.compute.manager [None req-85dd401f-7155-4360-9fd6-45ac5bf8d048 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.225 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.226 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.230 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.230 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.352 253665 DEBUG nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.352 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.354 253665 WARNING nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state None.
Nov 22 09:54:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 216 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.402 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3176MB free_disk=59.9217529296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.483 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.483 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.484 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.484 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:54:03 compute-0 nova_compute[253661]: 2025-11-22 09:54:03.549 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:54:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:03.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:54:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629260196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.265 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:54:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/780034624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:04 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 216 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.280 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:04 compute-0 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 74 op/s
Nov 22 09:54:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:04.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.323 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.347 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.360 253665 INFO nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Resuming
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.361 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'flavor' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.397 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.398 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.398 253665 DEBUG nova.network.neutron [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.439 253665 DEBUG nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.441 253665 DEBUG nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:05 compute-0 nova_compute[253661]: 2025-11-22 09:54:05.441 253665 WARNING nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.
Nov 22 09:54:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2629260196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:05 compute-0 ceph-mon[75021]: pgmap v2772: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 74 op/s
Nov 22 09:54:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:05.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:06 compute-0 nova_compute[253661]: 2025-11-22 09:54:06.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:06.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:06 compute-0 ceph-mon[75021]: pgmap v2773: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.360 253665 DEBUG nova.network.neutron [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.374 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.380 253665 DEBUG nova.virt.libvirt.vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:54:03Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.381 253665 DEBUG nova.network.os_vif_util [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.382 253665 DEBUG nova.network.os_vif_util [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.382 253665 DEBUG os_vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.383 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.389 253665 INFO os_vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.409 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:54:07 compute-0 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 09:54:07 compute-0 NetworkManager[48920]: <info>  [1763805247.4839] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Nov 22 09:54:07 compute-0 ovn_controller[152872]: 2025-11-22T09:54:07Z|01637|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 09:54:07 compute-0 ovn_controller[152872]: 2025-11-22T09:54:07Z|01638|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.491 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '7', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:54:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.492 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis
Nov 22 09:54:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:54:07 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea32594-2e04-4f63-a8d9-9955478d608a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:54:07 compute-0 ovn_controller[152872]: 2025-11-22T09:54:07Z|01639|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 09:54:07 compute-0 ovn_controller[152872]: 2025-11-22T09:54:07Z|01640|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:07 compute-0 systemd-udevd[411307]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 09:54:07 compute-0 systemd-machined[215941]: New machine qemu-185-instance-00000097.
Nov 22 09:54:07 compute-0 NetworkManager[48920]: <info>  [1763805247.5291] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 09:54:07 compute-0 NetworkManager[48920]: <info>  [1763805247.5311] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 09:54:07 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Nov 22 09:54:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:07.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:07 compute-0 nova_compute[253661]: 2025-11-22 09:54:07.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.087 253665 DEBUG nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.087 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 WARNING nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.
Nov 22 09:54:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 221 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:08 compute-0 nova_compute[253661]: 2025-11-22 09:54:08.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:08.543 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:54:08 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:08.544 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:54:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:08.673+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:09 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 221 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:09 compute-0 ceph-mon[75021]: pgmap v2774: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.133 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.134 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805249.133124, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.134 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.151 253665 DEBUG nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.152 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.158 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.169 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance running successfully.
Nov 22 09:54:09 compute-0 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.171 253665 DEBUG nova.virt.libvirt.guest [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.172 253665 DEBUG nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.196 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.197 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805249.138165, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.198 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.217 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.220 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:54:09 compute-0 nova_compute[253661]: 2025-11-22 09:54:09.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 09:54:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:09.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 DEBUG nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:10 compute-0 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 WARNING nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 09:54:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:10 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:10.547 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:54:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:10.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:11 compute-0 ceph-mon[75021]: pgmap v2775: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:11 compute-0 nova_compute[253661]: 2025-11-22 09:54:11.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:11.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.489 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.491 253665 INFO nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Terminating instance
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.492 253665 DEBUG nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:54:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:54:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:54:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:54:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:54:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:12.753+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:12 compute-0 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 09:54:12 compute-0 NetworkManager[48920]: <info>  [1763805252.7898] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:54:12 compute-0 ovn_controller[152872]: 2025-11-22T09:54:12Z|01641|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 09:54:12 compute-0 ovn_controller[152872]: 2025-11-22T09:54:12Z|01642|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 ovn_controller[152872]: 2025-11-22T09:54:12Z|01643|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.812 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:54:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.814 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis
Nov 22 09:54:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.815 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 09:54:12 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3f0923-65f3-4f1d-a41d-40b0e4051a91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:54:12 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 09:54:12 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 4.745s CPU time.
Nov 22 09:54:12 compute-0 systemd-machined[215941]: Machine qemu-185-instance-00000097 terminated.
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 podman[411360]: 2025-11-22 09:54:12.891193398 +0000 UTC m=+0.061693866 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 09:54:12 compute-0 podman[411361]: 2025-11-22 09:54:12.897354155 +0000 UTC m=+0.068082798 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.929 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance destroyed successfully.
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.930 253665 DEBUG nova.objects.instance [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'resources' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.941 253665 DEBUG nova.virt.libvirt.vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:54:09Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.942 253665 DEBUG nova.network.os_vif_util [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.942 253665 DEBUG nova.network.os_vif_util [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.943 253665 DEBUG os_vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.945 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57ba7057-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:12 compute-0 nova_compute[253661]: 2025-11-22 09:54:12.951 253665 INFO os_vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.121 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.122 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.122 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:13 compute-0 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:54:13 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 226 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:13 compute-0 ceph-mon[75021]: pgmap v2776: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 09:54:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:54:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:54:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:13 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 226 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:13.740+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:14 compute-0 sudo[411432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:14 compute-0 sudo[411432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:14 compute-0 sudo[411432]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:14 compute-0 sudo[411457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:54:14 compute-0 sudo[411457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:14 compute-0 sudo[411457]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 10 op/s
Nov 22 09:54:14 compute-0 sudo[411482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:14 compute-0 sudo[411482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:14 compute-0 sudo[411482]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:14 compute-0 sudo[411507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:54:14 compute-0 sudo[411507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:14 compute-0 ceph-mon[75021]: pgmap v2777: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 10 op/s
Nov 22 09:54:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:14.697+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:14 compute-0 sudo[411507]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e38c6843-7a75-422b-b1b1-53a05607812c does not exist
Nov 22 09:54:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0cfeb4b7-9d05-4556-af09-515318eab703 does not exist
Nov 22 09:54:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9fa37ce1-e10b-4287-9148-f2a8c66d70fb does not exist
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:54:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:54:15 compute-0 sudo[411564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:15 compute-0 sudo[411564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:15 compute-0 sudo[411564]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.105 253665 INFO nova.virt.libvirt.driver [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deleting instance files /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_del
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.107 253665 INFO nova.virt.libvirt.driver [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deletion of /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_del complete
Nov 22 09:54:15 compute-0 sudo[411589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:54:15 compute-0 sudo[411589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:15 compute-0 sudo[411589]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.202 253665 DEBUG nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.204 253665 DEBUG nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.204 253665 WARNING nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state deleting.
Nov 22 09:54:15 compute-0 sudo[411614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:15 compute-0 sudo[411614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:15 compute-0 sudo[411614]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.248 253665 INFO nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 2.76 seconds to destroy the instance on the hypervisor.
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG oslo.service.loopingcall [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:54:15 compute-0 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG nova.network.neutron [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:54:15 compute-0 sudo[411639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:54:15 compute-0 sudo[411639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:15.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:54:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.693112882 +0000 UTC m=+0.052259805 container create 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:54:15 compute-0 systemd[1]: Started libpod-conmon-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope.
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.668399046 +0000 UTC m=+0.027545969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.829278946 +0000 UTC m=+0.188425869 container init 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.837964306 +0000 UTC m=+0.197111209 container start 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 09:54:15 compute-0 brave_yonath[411723]: 167 167
Nov 22 09:54:15 compute-0 systemd[1]: libpod-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope: Deactivated successfully.
Nov 22 09:54:15 compute-0 conmon[411723]: conmon 4f9d4667b12e308ef064 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope/container/memory.events
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.859568674 +0000 UTC m=+0.218715607 container attach 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.860063117 +0000 UTC m=+0.219210020 container died 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 09:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f14ff6af7cd1306c695e55ceaca8a97e6780c684d31e9b13726ec349b390f88-merged.mount: Deactivated successfully.
Nov 22 09:54:15 compute-0 podman[411706]: 2025-11-22 09:54:15.95957595 +0000 UTC m=+0.318722853 container remove 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:54:15 compute-0 systemd[1]: libpod-conmon-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope: Deactivated successfully.
Nov 22 09:54:16 compute-0 podman[411749]: 2025-11-22 09:54:16.138900408 +0000 UTC m=+0.055128840 container create fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:16 compute-0 systemd[1]: Started libpod-conmon-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope.
Nov 22 09:54:16 compute-0 podman[411749]: 2025-11-22 09:54:16.108358473 +0000 UTC m=+0.024586925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:16 compute-0 podman[411749]: 2025-11-22 09:54:16.255466914 +0000 UTC m=+0.171695366 container init fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:54:16 compute-0 podman[411749]: 2025-11-22 09:54:16.267783756 +0000 UTC m=+0.184012188 container start fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:54:16 compute-0 podman[411749]: 2025-11-22 09:54:16.28139082 +0000 UTC m=+0.197619252 container attach fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 KiB/s rd, 5 op/s
Nov 22 09:54:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:16.657+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:16 compute-0 nova_compute[253661]: 2025-11-22 09:54:16.672 253665 DEBUG nova.network.neutron [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:54:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:16 compute-0 ceph-mon[75021]: pgmap v2778: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 KiB/s rd, 5 op/s
Nov 22 09:54:16 compute-0 nova_compute[253661]: 2025-11-22 09:54:16.732 253665 INFO nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 1.48 seconds to deallocate network for instance.
Nov 22 09:54:16 compute-0 nova_compute[253661]: 2025-11-22 09:54:16.872 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:16 compute-0 nova_compute[253661]: 2025-11-22 09:54:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:16 compute-0 nova_compute[253661]: 2025-11-22 09:54:16.941 253665 DEBUG oslo_concurrency.processutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.276 253665 DEBUG nova.compute.manager [req-9371b582-d6eb-49c7-8258-06195107665a req-aac5a369-688f-4e6c-b6c6-f84fceabd311 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-deleted-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:54:17 compute-0 brave_ishizaka[411766]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:54:17 compute-0 brave_ishizaka[411766]: --> relative data size: 1.0
Nov 22 09:54:17 compute-0 brave_ishizaka[411766]: --> All data devices are unavailable
Nov 22 09:54:17 compute-0 systemd[1]: libpod-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Deactivated successfully.
Nov 22 09:54:17 compute-0 podman[411749]: 2025-11-22 09:54:17.335372529 +0000 UTC m=+1.251600981 container died fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:54:17 compute-0 systemd[1]: libpod-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Consumed 1.013s CPU time.
Nov 22 09:54:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:54:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229528339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.392 253665 DEBUG oslo_concurrency.processutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.398 253665 DEBUG nova.compute.provider_tree [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.412 253665 DEBUG nova.scheduler.client.report [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.515 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff-merged.mount: Deactivated successfully.
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.637 253665 INFO nova.scheduler.client.report [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Deleted allocations for instance 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1
Nov 22 09:54:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:17.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.836 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1229528339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:54:17 compute-0 podman[411749]: 2025-11-22 09:54:17.94544084 +0000 UTC m=+1.861669272 container remove fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:54:17 compute-0 nova_compute[253661]: 2025-11-22 09:54:17.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:17 compute-0 systemd[1]: libpod-conmon-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Deactivated successfully.
Nov 22 09:54:17 compute-0 sudo[411639]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:18 compute-0 sudo[411829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:18 compute-0 sudo[411829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:18 compute-0 sudo[411829]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:18 compute-0 sudo[411854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:54:18 compute-0 sudo[411854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:18 compute-0 sudo[411854]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:18 compute-0 sudo[411879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:18 compute-0 sudo[411879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:18 compute-0 sudo[411879]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:18 compute-0 sudo[411904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:54:18 compute-0 sudo[411904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:18 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 231 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.558336313 +0000 UTC m=+0.068806566 container create 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.513359933 +0000 UTC m=+0.023830206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:18 compute-0 systemd[1]: Started libpod-conmon-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope.
Nov 22 09:54:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:18.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.712648586 +0000 UTC m=+0.223118869 container init 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.719780607 +0000 UTC m=+0.230250860 container start 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:54:18 compute-0 vigilant_noyce[411984]: 167 167
Nov 22 09:54:18 compute-0 systemd[1]: libpod-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope: Deactivated successfully.
Nov 22 09:54:18 compute-0 conmon[411984]: conmon 81a7104a8ffb43e83be1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope/container/memory.events
Nov 22 09:54:18 compute-0 systemd[1]: Starting dnf makecache...
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.813237477 +0000 UTC m=+0.323707730 container attach 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:54:18 compute-0 podman[411968]: 2025-11-22 09:54:18.813690668 +0000 UTC m=+0.324160921 container died 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:54:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:19 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 231 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:19 compute-0 ceph-mon[75021]: pgmap v2779: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-834d0e0a2589eac454f44cc9f404ce95604ca5ca2154f6b3a80cac6805ae3184-merged.mount: Deactivated successfully.
Nov 22 09:54:19 compute-0 dnf[411989]: Metadata cache refreshed recently.
Nov 22 09:54:19 compute-0 podman[411968]: 2025-11-22 09:54:19.273719265 +0000 UTC m=+0.784189518 container remove 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:54:19 compute-0 systemd[1]: libpod-conmon-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope: Deactivated successfully.
Nov 22 09:54:19 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 09:54:19 compute-0 systemd[1]: Finished dnf makecache.
Nov 22 09:54:19 compute-0 podman[412009]: 2025-11-22 09:54:19.478032516 +0000 UTC m=+0.065923614 container create 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 09:54:19 compute-0 systemd[1]: Started libpod-conmon-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope.
Nov 22 09:54:19 compute-0 podman[412009]: 2025-11-22 09:54:19.43994936 +0000 UTC m=+0.027840468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:19 compute-0 podman[412009]: 2025-11-22 09:54:19.599572667 +0000 UTC m=+0.187463785 container init 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:54:19 compute-0 podman[412009]: 2025-11-22 09:54:19.60912683 +0000 UTC m=+0.197017928 container start 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 09:54:19 compute-0 podman[412009]: 2025-11-22 09:54:19.618698553 +0000 UTC m=+0.206589681 container attach 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:54:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:19.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]: {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     "0": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "devices": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "/dev/loop3"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             ],
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_name": "ceph_lv0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_size": "21470642176",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "name": "ceph_lv0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "tags": {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_name": "ceph",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.crush_device_class": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.encrypted": "0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_id": "0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.vdo": "0"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             },
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "vg_name": "ceph_vg0"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         }
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     ],
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     "1": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "devices": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "/dev/loop4"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             ],
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_name": "ceph_lv1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_size": "21470642176",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "name": "ceph_lv1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "tags": {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_name": "ceph",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.crush_device_class": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.encrypted": "0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_id": "1",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.vdo": "0"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             },
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "vg_name": "ceph_vg1"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         }
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     ],
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     "2": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "devices": [
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "/dev/loop5"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             ],
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_name": "ceph_lv2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_size": "21470642176",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "name": "ceph_lv2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "tags": {
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.cluster_name": "ceph",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.crush_device_class": "",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.encrypted": "0",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osd_id": "2",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:                 "ceph.vdo": "0"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             },
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "type": "block",
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:             "vg_name": "ceph_vg2"
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:         }
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]:     ]
Nov 22 09:54:20 compute-0 relaxed_grothendieck[412025]: }
Nov 22 09:54:20 compute-0 systemd[1]: libpod-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope: Deactivated successfully.
Nov 22 09:54:20 compute-0 podman[412009]: 2025-11-22 09:54:20.448173367 +0000 UTC m=+1.036064465 container died 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24-merged.mount: Deactivated successfully.
Nov 22 09:54:20 compute-0 podman[412009]: 2025-11-22 09:54:20.549150089 +0000 UTC m=+1.137041187 container remove 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 09:54:20 compute-0 systemd[1]: libpod-conmon-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope: Deactivated successfully.
Nov 22 09:54:20 compute-0 sudo[411904]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:20 compute-0 podman[412034]: 2025-11-22 09:54:20.617327667 +0000 UTC m=+0.135941288 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:54:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:20.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:20 compute-0 sudo[412066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:20 compute-0 sudo[412066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:20 compute-0 sudo[412066]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:20 compute-0 sudo[412094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:54:20 compute-0 sudo[412094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:20 compute-0 sudo[412094]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:20 compute-0 sudo[412119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:20 compute-0 sudo[412119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:20 compute-0 sudo[412119]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:20 compute-0 sudo[412144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:54:20 compute-0 sudo[412144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.179041611 +0000 UTC m=+0.054237685 container create 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:21 compute-0 systemd[1]: Started libpod-conmon-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope.
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.14662464 +0000 UTC m=+0.021820734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:21 compute-0 ceph-mon[75021]: pgmap v2780: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.268232824 +0000 UTC m=+0.143428918 container init 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.276046242 +0000 UTC m=+0.151242306 container start 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:54:21 compute-0 musing_kapitsa[412223]: 167 167
Nov 22 09:54:21 compute-0 systemd[1]: libpod-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope: Deactivated successfully.
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.288147038 +0000 UTC m=+0.163343132 container attach 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.288808306 +0000 UTC m=+0.164004370 container died 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9a7a7a5f2e6b658f309a96fcdc53615a8fcb6667b0db721dd310bfdb98fc8cf-merged.mount: Deactivated successfully.
Nov 22 09:54:21 compute-0 podman[412208]: 2025-11-22 09:54:21.346683513 +0000 UTC m=+0.221879587 container remove 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:54:21 compute-0 systemd[1]: libpod-conmon-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope: Deactivated successfully.
Nov 22 09:54:21 compute-0 podman[412246]: 2025-11-22 09:54:21.520305326 +0000 UTC m=+0.048222904 container create a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:54:21 compute-0 systemd[1]: Started libpod-conmon-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope.
Nov 22 09:54:21 compute-0 podman[412246]: 2025-11-22 09:54:21.496121123 +0000 UTC m=+0.024038731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:54:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:54:21 compute-0 podman[412246]: 2025-11-22 09:54:21.62692639 +0000 UTC m=+0.154843988 container init a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:21 compute-0 podman[412246]: 2025-11-22 09:54:21.635171379 +0000 UTC m=+0.163088957 container start a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 09:54:21 compute-0 podman[412246]: 2025-11-22 09:54:21.649959764 +0000 UTC m=+0.177877362 container attach a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 09:54:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:21.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:22 compute-0 ovn_controller[152872]: 2025-11-22T09:54:22Z|01644|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 09:54:22 compute-0 nova_compute[253661]: 2025-11-22 09:54:22.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:22 compute-0 epic_hermann[412262]: {
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_id": 1,
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "type": "bluestore"
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     },
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_id": 0,
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "type": "bluestore"
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     },
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_id": 2,
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:54:22 compute-0 epic_hermann[412262]:         "type": "bluestore"
Nov 22 09:54:22 compute-0 epic_hermann[412262]:     }
Nov 22 09:54:22 compute-0 epic_hermann[412262]: }
Nov 22 09:54:22 compute-0 systemd[1]: libpod-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Deactivated successfully.
Nov 22 09:54:22 compute-0 podman[412246]: 2025-11-22 09:54:22.668125694 +0000 UTC m=+1.196043292 container died a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:54:22 compute-0 systemd[1]: libpod-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Consumed 1.027s CPU time.
Nov 22 09:54:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:22.716+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:22 compute-0 nova_compute[253661]: 2025-11-22 09:54:22.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4-merged.mount: Deactivated successfully.
Nov 22 09:54:22 compute-0 nova_compute[253661]: 2025-11-22 09:54:22.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:23 compute-0 podman[412246]: 2025-11-22 09:54:23.276154813 +0000 UTC m=+1.804072431 container remove a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:54:23 compute-0 systemd[1]: libpod-conmon-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Deactivated successfully.
Nov 22 09:54:23 compute-0 sudo[412144]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:54:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:54:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 38c02cb6-c09e-4989-9586-7166c1af971d does not exist
Nov 22 09:54:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 274ae1f1-086d-412b-a54c-2b4578ee2876 does not exist
Nov 22 09:54:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 236 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:23 compute-0 sudo[412308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:54:23 compute-0 sudo[412308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:23 compute-0 sudo[412308]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:23 compute-0 ceph-mon[75021]: pgmap v2781: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:23 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:23 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:54:23 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 236 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:23 compute-0 sudo[412333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:54:23 compute-0 sudo[412333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:54:23 compute-0 sudo[412333]: pam_unix(sudo:session): session closed for user root
Nov 22 09:54:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:23.741+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:24 compute-0 ceph-mon[75021]: pgmap v2782: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 09:54:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:24.726+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:25.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 09:54:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:26 compute-0 ceph-mon[75021]: pgmap v2783: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 09:54:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:26.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:27.735+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:27 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:27 compute-0 nova_compute[253661]: 2025-11-22 09:54:27.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:27 compute-0 nova_compute[253661]: 2025-11-22 09:54:27.927 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805252.9265118, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:54:27 compute-0 nova_compute[253661]: 2025-11-22 09:54:27.928 253665 INFO nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Stopped (Lifecycle Event)
Nov 22 09:54:27 compute-0 nova_compute[253661]: 2025-11-22 09:54:27.951 253665 DEBUG nova.compute.manager [None req-ea684394-2df9-48f4-96e9-5ab22e9989e9 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:54:27 compute-0 nova_compute[253661]: 2025-11-22 09:54:27.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:54:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:54:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 241 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 09:54:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:28.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:28 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 241 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:28 compute-0 ceph-mon[75021]: pgmap v2784: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 09:54:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:29.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:30.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:31 compute-0 ceph-mon[75021]: pgmap v2785: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:31.778+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:32.749+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:32 compute-0 nova_compute[253661]: 2025-11-22 09:54:32.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:32 compute-0 nova_compute[253661]: 2025-11-22 09:54:32.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 246 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:33 compute-0 ceph-mon[75021]: pgmap v2786: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:33.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:34 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 246 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:34 compute-0 ceph-mon[75021]: pgmap v2787: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:34.743+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:35.745+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:36.790+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:37 compute-0 ceph-mon[75021]: pgmap v2788: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:37 compute-0 nova_compute[253661]: 2025-11-22 09:54:37.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:37 compute-0 nova_compute[253661]: 2025-11-22 09:54:37.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:54:37 compute-0 nova_compute[253661]: 2025-11-22 09:54:37.260 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:54:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:37.742+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:37 compute-0 nova_compute[253661]: 2025-11-22 09:54:37.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:37 compute-0 nova_compute[253661]: 2025-11-22 09:54:37.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 251 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:38.721+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:39 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 251 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:39 compute-0 ceph-mon[75021]: pgmap v2789: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:39.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:40.751+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:41 compute-0 ceph-mon[75021]: pgmap v2790: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:41.756+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:42.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:42 compute-0 nova_compute[253661]: 2025-11-22 09:54:42.922 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:42 compute-0 nova_compute[253661]: 2025-11-22 09:54:42.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:43 compute-0 podman[412359]: 2025-11-22 09:54:43.364105452 +0000 UTC m=+0.053236681 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 09:54:43 compute-0 podman[412360]: 2025-11-22 09:54:43.369624032 +0000 UTC m=+0.058702189 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 09:54:43 compute-0 ceph-mon[75021]: pgmap v2791: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 256 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:43.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:44 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 256 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:44.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:45 compute-0 ceph-mon[75021]: pgmap v2792: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:45.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:46.612+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:46 compute-0 ceph-mon[75021]: pgmap v2793: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:47.639+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:47 compute-0 nova_compute[253661]: 2025-11-22 09:54:47.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:47 compute-0 nova_compute[253661]: 2025-11-22 09:54:47.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:48.599+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:48 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 261 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:49 compute-0 ceph-mon[75021]: pgmap v2794: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:49 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 261 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:49.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:50 compute-0 nova_compute[253661]: 2025-11-22 09:54:50.245 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:50.569+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:51 compute-0 podman[412396]: 2025-11-22 09:54:51.389301855 +0000 UTC m=+0.086605847 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 22 09:54:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:51.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:51 compute-0 ceph-mon[75021]: pgmap v2795: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:51 compute-0 nova_compute[253661]: 2025-11-22 09:54:51.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:52 compute-0 nova_compute[253661]: 2025-11-22 09:54:52.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:52 compute-0 nova_compute[253661]: 2025-11-22 09:54:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:54:52 compute-0 nova_compute[253661]: 2025-11-22 09:54:52.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:54:52
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:52.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:54:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:54:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:52 compute-0 ceph-mon[75021]: pgmap v2796: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:52 compute-0 nova_compute[253661]: 2025-11-22 09:54:52.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:52 compute-0 nova_compute[253661]: 2025-11-22 09:54:52.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:53.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 266 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:54 compute-0 nova_compute[253661]: 2025-11-22 09:54:54.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:54 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 266 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:54.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:55.549+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:55 compute-0 ceph-mon[75021]: pgmap v2797: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:54:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:54:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:54:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:54:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:54:55 compute-0 nova_compute[253661]: 2025-11-22 09:54:55.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:56 compute-0 nova_compute[253661]: 2025-11-22 09:54:56.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:56 compute-0 nova_compute[253661]: 2025-11-22 09:54:56.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:54:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:56.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:57 compute-0 ceph-mon[75021]: pgmap v2798: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:57 compute-0 nova_compute[253661]: 2025-11-22 09:54:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:54:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:54:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:54:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:54:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:54:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:57.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:57 compute-0 nova_compute[253661]: 2025-11-22 09:54:57.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:57 compute-0 nova_compute[253661]: 2025-11-22 09:54:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:54:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:58.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 271 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:54:59 compute-0 nova_compute[253661]: 2025-11-22 09:54:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:54:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:59 compute-0 ceph-mon[75021]: pgmap v2799: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:54:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:54:59 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 271 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:54:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:59.575+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:54:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:00.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:01.501+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:01 compute-0 ceph-mon[75021]: pgmap v2800: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:02 compute-0 nova_compute[253661]: 2025-11-22 09:55:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:02.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:02 compute-0 ceph-mon[75021]: pgmap v2801: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:02 compute-0 nova_compute[253661]: 2025-11-22 09:55:02.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:02 compute-0 nova_compute[253661]: 2025-11-22 09:55:02.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:55:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:55:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:03.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:55:03 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929090523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.705 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:55:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:03 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1929090523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.772 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.772 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:55:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 276 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.942 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.943 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3260MB free_disk=59.942649841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:03 compute-0 nova_compute[253661]: 2025-11-22 09:55:03.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.085 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:55:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:55:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397147431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.520 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.526 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:55:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:04.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.541 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.649 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.649 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:04 compute-0 nova_compute[253661]: 2025-11-22 09:55:04.650 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:04 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 276 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:04 compute-0 ceph-mon[75021]: pgmap v2802: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:04 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3397147431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:55:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:05.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:06.479+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:06 compute-0 ceph-mon[75021]: pgmap v2803: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:07.512+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:07 compute-0 ovn_controller[152872]: 2025-11-22T09:55:07Z|01645|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 09:55:07 compute-0 nova_compute[253661]: 2025-11-22 09:55:07.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:07 compute-0 nova_compute[253661]: 2025-11-22 09:55:07.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:07 compute-0 nova_compute[253661]: 2025-11-22 09:55:07.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:08.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:08 compute-0 ceph-mon[75021]: pgmap v2804: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 286 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:09.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:09.800 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:55:09 compute-0 nova_compute[253661]: 2025-11-22 09:55:09.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:09 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:09.801 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:55:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:09 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 286 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:10.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:10 compute-0 ceph-mon[75021]: pgmap v2805: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:11.492+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:12.488+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:12 compute-0 nova_compute[253661]: 2025-11-22 09:55:12.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:12 compute-0 nova_compute[253661]: 2025-11-22 09:55:12.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:13 compute-0 ceph-mon[75021]: pgmap v2806: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:13.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:14 compute-0 podman[412467]: 2025-11-22 09:55:14.355248569 +0000 UTC m=+0.054229206 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:55:14 compute-0 podman[412468]: 2025-11-22 09:55:14.361163949 +0000 UTC m=+0.058503675 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:55:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:14.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:15 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 291 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:15.484+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:15 compute-0 ceph-mon[75021]: pgmap v2807: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:15 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:15.803 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:55:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:16.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:16 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 291 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:16 compute-0 ceph-mon[75021]: pgmap v2808: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:17.545+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.920 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.937 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (image_uploading). Skip.
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:17 compute-0 nova_compute[253661]: 2025-11-22 09:55:17.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:18.579+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:18 compute-0 ceph-mon[75021]: pgmap v2809: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:19.584+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:20.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:20 compute-0 ceph-mon[75021]: pgmap v2810: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:20 compute-0 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG nova.compute.manager [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:55:20 compute-0 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG nova.compute.manager [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 09:55:20 compute-0 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:55:20 compute-0 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:55:20 compute-0 nova_compute[253661]: 2025-11-22 09:55:20.863 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.237 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.237 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.238 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.239 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.239 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.241 253665 INFO nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Terminating instance
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.241 253665 DEBUG nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:55:21 compute-0 kernel: tap62358b95-9f (unregistering): left promiscuous mode
Nov 22 09:55:21 compute-0 NetworkManager[48920]: <info>  [1763805321.3232] device (tap62358b95-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:55:21 compute-0 ovn_controller[152872]: 2025-11-22T09:55:21Z|01646|binding|INFO|Releasing lport 62358b95-9f4a-404c-8165-dc98c7e3b042 from this chassis (sb_readonly=0)
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 ovn_controller[152872]: 2025-11-22T09:55:21Z|01647|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 down in Southbound
Nov 22 09:55:21 compute-0 ovn_controller[152872]: 2025-11-22T09:55:21Z|01648|binding|INFO|Removing iface tap62358b95-9f ovn-installed in OVS
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 22 09:55:21 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Consumed 27.810s CPU time.
Nov 22 09:55:21 compute-0 systemd-machined[215941]: Machine qemu-179-instance-00000093 terminated.
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.482 253665 INFO nova.virt.libvirt.driver [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance destroyed successfully.
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.482 253665 DEBUG nova.objects.instance [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'resources' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.496 253665 DEBUG nova.virt.libvirt.vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:49:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:50:18Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.496 253665 DEBUG nova.network.os_vif_util [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.497 253665 DEBUG nova.network.os_vif_util [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.497 253665 DEBUG os_vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62358b95-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.505 253665 INFO os_vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f')
Nov 22 09:55:21 compute-0 podman[412512]: 2025-11-22 09:55:21.54546729 +0000 UTC m=+0.093104013 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:55:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:21.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.621 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a0:65 10.100.0.3'], port_security=['fa:16:3e:bc:a0:65 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '027bdffc-9e8e-4a33-9b06-844890912dc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-768d62d5-f993-4383-9edf-3d68f19e409c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ffacc46512445d8b5c24899a0053196', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aef6f84b-f5db-4e86-b5ce-afacad080f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eea3b39d-a626-45c2-a32c-ad267efc3243, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=62358b95-9f4a-404c-8165-dc98c7e3b042) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.622 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 62358b95-9f4a-404c-8165-dc98c7e3b042 in datapath 768d62d5-f993-4383-9edf-3d68f19e409c unbound from our chassis
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.623 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 768d62d5-f993-4383-9edf-3d68f19e409c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9b52a8-cf21-4b5d-913f-7212ebd42805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.625 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c namespace which is not needed anymore
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : haproxy version is 2.8.14-c23fe91
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : path to executable is /usr/sbin/haproxy
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : Exiting Master process...
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : Exiting Master process...
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [ALERT]    (405666) : Current worker (405668) exited with code 143 (Terminated)
Nov 22 09:55:21 compute-0 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : All workers exited. Exiting... (0)
Nov 22 09:55:21 compute-0 systemd[1]: libpod-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope: Deactivated successfully.
Nov 22 09:55:21 compute-0 podman[412598]: 2025-11-22 09:55:21.756571633 +0000 UTC m=+0.051838035 container died 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:55:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab-userdata-shm.mount: Deactivated successfully.
Nov 22 09:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3a35e44d987f37dea2ee9bb4f5402e567d895145b73865842e34bbbf02d3f6-merged.mount: Deactivated successfully.
Nov 22 09:55:21 compute-0 podman[412598]: 2025-11-22 09:55:21.820387631 +0000 UTC m=+0.115654033 container cleanup 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.823 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.823 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:55:21 compute-0 systemd[1]: libpod-conmon-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope: Deactivated successfully.
Nov 22 09:55:21 compute-0 podman[412628]: 2025-11-22 09:55:21.883176913 +0000 UTC m=+0.043990836 container remove 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28a539a4-5d6d-495e-b4c1-5af702a6af1c]: (4, ('Sat Nov 22 09:55:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c (5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab)\n5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab\nSat Nov 22 09:55:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c (5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab)\n5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df2a4825-11ee-4629-8c28-657600907748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap768d62d5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:55:21 compute-0 kernel: tap768d62d5-f0: left promiscuous mode
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 nova_compute[253661]: 2025-11-22 09:55:21.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1eb193-01b2-453f-bb5f-90f5735362a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b43b1c0d-fcf7-4f01-91b0-cba40feebbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43cca04e-ad7e-4ef7-8dc0-fa1f0f57bc8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.940 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cab5ae7b-e932-462d-bd46-4180174bf47a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784420, 'reachable_time': 17395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412643, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d768d62d5\x2df993\x2d4383\x2d9edf\x2d3d68f19e409c.mount: Deactivated successfully.
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.943 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:55:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.945 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a122e836-65c5-4f94-be6c-c88c3e69da7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:55:22 compute-0 nova_compute[253661]: 2025-11-22 09:55:22.130 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:22.612+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:22 compute-0 ceph-mon[75021]: pgmap v2811: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:55:22 compute-0 nova_compute[253661]: 2025-11-22 09:55:22.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:23 compute-0 sudo[412644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:23 compute-0 sudo[412644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:23 compute-0 sudo[412644]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:23 compute-0 sudo[412669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:55:23 compute-0 sudo[412669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:23 compute-0 sudo[412669]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:23 compute-0 sudo[412694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:23 compute-0 sudo[412694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:23 compute-0 sudo[412694]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:23 compute-0 sudo[412719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:55:23 compute-0 sudo[412719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 296 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.241 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:55:24 compute-0 nova_compute[253661]: 2025-11-22 09:55:24.243 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:55:24 compute-0 sudo[412719]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a46bdeaa-1f29-4e35-ae12-4749e35ac7ea does not exist
Nov 22 09:55:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 30fc49c2-c787-457b-842e-a62244ad408a does not exist
Nov 22 09:55:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 78624c8c-6a52-4408-9b64-b35e2d8c353f does not exist
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:55:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:55:24 compute-0 sudo[412774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:24 compute-0 sudo[412774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:24 compute-0 sudo[412774]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 09:55:24 compute-0 sudo[412799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:55:24 compute-0 sudo[412799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:24 compute-0 sudo[412799]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:24 compute-0 sudo[412824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:24 compute-0 sudo[412824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:24 compute-0 sudo[412824]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:24 compute-0 sudo[412849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:55:24 compute-0 sudo[412849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:24.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:24 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 296 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:55:24 compute-0 ceph-mon[75021]: pgmap v2812: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 09:55:24 compute-0 podman[412913]: 2025-11-22 09:55:24.95886552 +0000 UTC m=+0.041516984 container create 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 09:55:25 compute-0 systemd[1]: Started libpod-conmon-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope.
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:24.940514565 +0000 UTC m=+0.023166049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:25.080137095 +0000 UTC m=+0.162788639 container init 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:25.095180707 +0000 UTC m=+0.177832181 container start 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:25.101184408 +0000 UTC m=+0.183835962 container attach 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 09:55:25 compute-0 modest_fermat[412930]: 167 167
Nov 22 09:55:25 compute-0 systemd[1]: libpod-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope: Deactivated successfully.
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:25.104618226 +0000 UTC m=+0.187269690 container died 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9231c2f50588bdc240bc3850e214df812e3bf3dd5ec889a9d986a915cbc40a5-merged.mount: Deactivated successfully.
Nov 22 09:55:25 compute-0 podman[412913]: 2025-11-22 09:55:25.175230776 +0000 UTC m=+0.257882250 container remove 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:55:25 compute-0 systemd[1]: libpod-conmon-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope: Deactivated successfully.
Nov 22 09:55:25 compute-0 podman[412954]: 2025-11-22 09:55:25.346516811 +0000 UTC m=+0.048968113 container create b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 09:55:25 compute-0 systemd[1]: Started libpod-conmon-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope.
Nov 22 09:55:25 compute-0 podman[412954]: 2025-11-22 09:55:25.322268185 +0000 UTC m=+0.024719497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:25 compute-0 podman[412954]: 2025-11-22 09:55:25.4561241 +0000 UTC m=+0.158575412 container init b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:55:25 compute-0 podman[412954]: 2025-11-22 09:55:25.462949613 +0000 UTC m=+0.165400905 container start b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:55:25 compute-0 podman[412954]: 2025-11-22 09:55:25.467061117 +0000 UTC m=+0.169512429 container attach b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:55:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:25.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.395 253665 DEBUG nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 WARNING nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received unexpected event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with vm_state active and task_state deleting.
Nov 22 09:55:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 09:55:26 compute-0 stoic_mclaren[412970]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:55:26 compute-0 stoic_mclaren[412970]: --> relative data size: 1.0
Nov 22 09:55:26 compute-0 stoic_mclaren[412970]: --> All data devices are unavailable
Nov 22 09:55:26 compute-0 nova_compute[253661]: 2025-11-22 09:55:26.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:26 compute-0 systemd[1]: libpod-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope: Deactivated successfully.
Nov 22 09:55:26 compute-0 podman[412954]: 2025-11-22 09:55:26.506811864 +0000 UTC m=+1.209263176 container died b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850-merged.mount: Deactivated successfully.
Nov 22 09:55:26 compute-0 podman[412954]: 2025-11-22 09:55:26.610883204 +0000 UTC m=+1.313334516 container remove b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 09:55:26 compute-0 systemd[1]: libpod-conmon-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope: Deactivated successfully.
Nov 22 09:55:26 compute-0 sudo[412849]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:26 compute-0 sudo[413031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:26 compute-0 sudo[413031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:26 compute-0 sudo[413031]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:26 compute-0 sudo[413056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:55:26 compute-0 sudo[413056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:26 compute-0 sudo[413056]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:26 compute-0 sudo[413081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:26 compute-0 sudo[413081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:26 compute-0 sudo[413081]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:26 compute-0 ceph-mon[75021]: pgmap v2813: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 09:55:26 compute-0 sudo[413106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:55:26 compute-0 sudo[413106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.264832368 +0000 UTC m=+0.048415319 container create d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:55:27 compute-0 systemd[1]: Started libpod-conmon-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope.
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.242980743 +0000 UTC m=+0.026563704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.379217458 +0000 UTC m=+0.162800429 container init d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.390823873 +0000 UTC m=+0.174406854 container start d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.396897566 +0000 UTC m=+0.180480527 container attach d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:55:27 compute-0 gracious_shockley[413187]: 167 167
Nov 22 09:55:27 compute-0 systemd[1]: libpod-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope: Deactivated successfully.
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.400120088 +0000 UTC m=+0.183703039 container died d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-563fdb97761385ff99ecb316c8eaf3838ceedffe89121d3d5dfbfda4f34b87a3-merged.mount: Deactivated successfully.
Nov 22 09:55:27 compute-0 podman[413171]: 2025-11-22 09:55:27.45146421 +0000 UTC m=+0.235047151 container remove d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:55:27 compute-0 systemd[1]: libpod-conmon-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope: Deactivated successfully.
Nov 22 09:55:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:27.564+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:27 compute-0 podman[413211]: 2025-11-22 09:55:27.70541761 +0000 UTC m=+0.074566662 container create c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 09:55:27 compute-0 systemd[1]: Started libpod-conmon-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope.
Nov 22 09:55:27 compute-0 podman[413211]: 2025-11-22 09:55:27.672107606 +0000 UTC m=+0.041256688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:27 compute-0 podman[413211]: 2025-11-22 09:55:27.797166587 +0000 UTC m=+0.166315689 container init c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:55:27 compute-0 podman[413211]: 2025-11-22 09:55:27.80439503 +0000 UTC m=+0.173544092 container start c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:55:27 compute-0 podman[413211]: 2025-11-22 09:55:27.811518931 +0000 UTC m=+0.180667983 container attach c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 09:55:27 compute-0 nova_compute[253661]: 2025-11-22 09:55:27.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:55:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:55:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:28.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]: {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     "0": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "devices": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "/dev/loop3"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             ],
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_name": "ceph_lv0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_size": "21470642176",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "name": "ceph_lv0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "tags": {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_name": "ceph",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.crush_device_class": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.encrypted": "0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_id": "0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.vdo": "0"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             },
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "vg_name": "ceph_vg0"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         }
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     ],
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     "1": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "devices": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "/dev/loop4"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             ],
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_name": "ceph_lv1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_size": "21470642176",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "name": "ceph_lv1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "tags": {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_name": "ceph",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.crush_device_class": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.encrypted": "0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_id": "1",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.vdo": "0"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             },
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "vg_name": "ceph_vg1"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         }
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     ],
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     "2": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "devices": [
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "/dev/loop5"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             ],
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_name": "ceph_lv2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_size": "21470642176",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "name": "ceph_lv2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "tags": {
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.cluster_name": "ceph",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.crush_device_class": "",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.encrypted": "0",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osd_id": "2",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:                 "ceph.vdo": "0"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             },
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "type": "block",
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:             "vg_name": "ceph_vg2"
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:         }
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]:     ]
Nov 22 09:55:28 compute-0 flamboyant_brown[413227]: }
Nov 22 09:55:28 compute-0 systemd[1]: libpod-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope: Deactivated successfully.
Nov 22 09:55:28 compute-0 conmon[413227]: conmon c60ff675507505e6acd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope/container/memory.events
Nov 22 09:55:28 compute-0 podman[413211]: 2025-11-22 09:55:28.675128072 +0000 UTC m=+1.044277124 container died c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 09:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6-merged.mount: Deactivated successfully.
Nov 22 09:55:28 compute-0 podman[413211]: 2025-11-22 09:55:28.763031991 +0000 UTC m=+1.132181043 container remove c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:55:28 compute-0 systemd[1]: libpod-conmon-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope: Deactivated successfully.
Nov 22 09:55:28 compute-0 sudo[413106]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 301 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:28 compute-0 sudo[413249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:28 compute-0 sudo[413249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:28 compute-0 sudo[413249]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:28 compute-0 sudo[413274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:55:28 compute-0 sudo[413274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:28 compute-0 sudo[413274]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:29 compute-0 sudo[413299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:29 compute-0 sudo[413299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:29 compute-0 sudo[413299]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:29 compute-0 sudo[413324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:55:29 compute-0 sudo[413324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:29 compute-0 ceph-mon[75021]: pgmap v2814: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:29 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 301 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.439549767 +0000 UTC m=+0.045785872 container create ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:55:29 compute-0 systemd[1]: Started libpod-conmon-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope.
Nov 22 09:55:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:29.497+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.417367744 +0000 UTC m=+0.023603879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.521337761 +0000 UTC m=+0.127573886 container init ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.530400841 +0000 UTC m=+0.136636946 container start ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.533955261 +0000 UTC m=+0.140191396 container attach ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:55:29 compute-0 nifty_margulis[413405]: 167 167
Nov 22 09:55:29 compute-0 systemd[1]: libpod-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope: Deactivated successfully.
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.536017733 +0000 UTC m=+0.142253858 container died ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4b0a6fdbd65baa9983a685c607189bc84610d972a27d106072bc05cddcc0b9d-merged.mount: Deactivated successfully.
Nov 22 09:55:29 compute-0 podman[413389]: 2025-11-22 09:55:29.57020761 +0000 UTC m=+0.176443715 container remove ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:55:29 compute-0 systemd[1]: libpod-conmon-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope: Deactivated successfully.
Nov 22 09:55:29 compute-0 podman[413428]: 2025-11-22 09:55:29.802492441 +0000 UTC m=+0.071513915 container create b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:55:29 compute-0 systemd[1]: Started libpod-conmon-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope.
Nov 22 09:55:29 compute-0 podman[413428]: 2025-11-22 09:55:29.779858357 +0000 UTC m=+0.048879811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:55:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:55:29 compute-0 podman[413428]: 2025-11-22 09:55:29.902983689 +0000 UTC m=+0.172005143 container init b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 09:55:29 compute-0 podman[413428]: 2025-11-22 09:55:29.909666139 +0000 UTC m=+0.178687573 container start b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 09:55:29 compute-0 podman[413428]: 2025-11-22 09:55:29.914084391 +0000 UTC m=+0.183105825 container attach b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:55:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:30.542+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]: {
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_id": 1,
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "type": "bluestore"
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     },
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_id": 0,
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "type": "bluestore"
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     },
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_id": 2,
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:         "type": "bluestore"
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]:     }
Nov 22 09:55:30 compute-0 pedantic_hertz[413444]: }
Nov 22 09:55:30 compute-0 systemd[1]: libpod-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope: Deactivated successfully.
Nov 22 09:55:30 compute-0 podman[413477]: 2025-11-22 09:55:30.943371753 +0000 UTC m=+0.024216805 container died b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50-merged.mount: Deactivated successfully.
Nov 22 09:55:30 compute-0 podman[413477]: 2025-11-22 09:55:30.997019253 +0000 UTC m=+0.077864285 container remove b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:55:31 compute-0 systemd[1]: libpod-conmon-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope: Deactivated successfully.
Nov 22 09:55:31 compute-0 sudo[413324]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:55:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:55:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d38c747c-c60e-4d94-b1d1-252751066413 does not exist
Nov 22 09:55:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f980b5e6-355e-4ffd-9bb5-b1ba0b615f6a does not exist
Nov 22 09:55:31 compute-0 sudo[413492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:55:31 compute-0 sudo[413492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:31 compute-0 sudo[413492]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:31 compute-0 sudo[413517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:55:31 compute-0 sudo[413517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:55:31 compute-0 sudo[413517]: pam_unix(sudo:session): session closed for user root
Nov 22 09:55:31 compute-0 ceph-mon[75021]: pgmap v2815: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:55:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:31.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:31 compute-0 nova_compute[253661]: 2025-11-22 09:55:31.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:32.510+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:32 compute-0 nova_compute[253661]: 2025-11-22 09:55:32.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:33 compute-0 ceph-mon[75021]: pgmap v2816: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 09:55:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:33.478+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 306 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:34 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 306 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 80 op/s
Nov 22 09:55:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:34.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:35 compute-0 ceph-mon[75021]: pgmap v2817: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 80 op/s
Nov 22 09:55:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:35.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.481 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805321.4796615, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.481 253665 INFO nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Stopped (Lifecycle Event)
Nov 22 09:55:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:36 compute-0 ceph-mon[75021]: pgmap v2818: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.503 253665 DEBUG nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:55:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:36.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.510 253665 DEBUG nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.534 253665 INFO nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 22 09:55:36 compute-0 nova_compute[253661]: 2025-11-22 09:55:36.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:37.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.663482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337663624, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2183, "num_deletes": 251, "total_data_size": 2852217, "memory_usage": 2907632, "flush_reason": "Manual Compaction"}
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337713307, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2786034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57072, "largest_seqno": 59254, "table_properties": {"data_size": 2776888, "index_size": 5319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23348, "raw_average_key_size": 21, "raw_value_size": 2756754, "raw_average_value_size": 2522, "num_data_blocks": 233, "num_entries": 1093, "num_filter_entries": 1093, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805167, "oldest_key_time": 1763805167, "file_creation_time": 1763805337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 49946 microseconds, and 12080 cpu microseconds.
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.713448) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2786034 bytes OK
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.713476) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715590) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715609) EVENT_LOG_v1 {"time_micros": 1763805337715602, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2842709, prev total WAL file size 2842709, number of live WAL files 2.
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.716786) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2720KB)], [134(9376KB)]
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337716828, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12387072, "oldest_snapshot_seqno": -1}
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8297 keys, 10648309 bytes, temperature: kUnknown
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337929468, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 10648309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10593817, "index_size": 32619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 216964, "raw_average_key_size": 26, "raw_value_size": 10446658, "raw_average_value_size": 1259, "num_data_blocks": 1263, "num_entries": 8297, "num_filter_entries": 8297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:55:37 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:55:37 compute-0 nova_compute[253661]: 2025-11-22 09:55:37.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.929765) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10648309 bytes
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.124871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.2 rd, 50.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 8811, records dropped: 514 output_compression: NoCompression
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.124928) EVENT_LOG_v1 {"time_micros": 1763805338124908, "job": 82, "event": "compaction_finished", "compaction_time_micros": 212724, "compaction_time_cpu_micros": 40225, "output_level": 6, "num_output_files": 1, "total_output_size": 10648309, "num_input_records": 8811, "num_output_records": 8297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805338125914, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805338128127, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.716720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:55:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 66 op/s
Nov 22 09:55:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:38.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:38 compute-0 ceph-mon[75021]: pgmap v2819: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 66 op/s
Nov 22 09:55:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 316 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:39.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:39 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 316 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:40.511+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:40 compute-0 ceph-mon[75021]: pgmap v2820: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:41 compute-0 nova_compute[253661]: 2025-11-22 09:55:41.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:42.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:42 compute-0 ceph-mon[75021]: pgmap v2821: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:42 compute-0 nova_compute[253661]: 2025-11-22 09:55:42.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:43.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:44.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 321 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:44 compute-0 ceph-mon[75021]: pgmap v2822: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:45 compute-0 podman[413597]: 2025-11-22 09:55:45.38452321 +0000 UTC m=+0.068536749 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 09:55:45 compute-0 podman[413598]: 2025-11-22 09:55:45.38454058 +0000 UTC m=+0.068093128 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:55:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:45.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:45 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 321 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:46.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:46 compute-0 nova_compute[253661]: 2025-11-22 09:55:46.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:46 compute-0 ceph-mon[75021]: pgmap v2823: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:47.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:48 compute-0 nova_compute[253661]: 2025-11-22 09:55:48.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:48.468+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:48 compute-0 ceph-mon[75021]: pgmap v2824: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:49.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:50.497+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:50 compute-0 ovn_controller[152872]: 2025-11-22T09:55:50Z|01649|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 22 09:55:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:50 compute-0 ceph-mon[75021]: pgmap v2825: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:51.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:51 compute-0 nova_compute[253661]: 2025-11-22 09:55:51.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:52 compute-0 nova_compute[253661]: 2025-11-22 09:55:52.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:55:52
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:55:52 compute-0 podman[413672]: 2025-11-22 09:55:52.391454952 +0000 UTC m=+0.079638071 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 09:55:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:52.441+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:55:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:55:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:52 compute-0 ceph-mon[75021]: pgmap v2826: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:53 compute-0 nova_compute[253661]: 2025-11-22 09:55:53.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:53.403+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 326 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:53 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 326 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:54 compute-0 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:54 compute-0 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:55:54 compute-0 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:55:54 compute-0 nova_compute[253661]: 2025-11-22 09:55:54.252 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 09:55:54 compute-0 nova_compute[253661]: 2025-11-22 09:55:54.252 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:55:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:54.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:54 compute-0 ceph-mon[75021]: pgmap v2827: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:55 compute-0 nova_compute[253661]: 2025-11-22 09:55:55.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:55.431+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:55:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:55:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:55:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:55:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:55:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:56.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:56 compute-0 nova_compute[253661]: 2025-11-22 09:55:56.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:56 compute-0 ceph-mon[75021]: pgmap v2828: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:55:57 compute-0 nova_compute[253661]: 2025-11-22 09:55:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:57 compute-0 nova_compute[253661]: 2025-11-22 09:55:57.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:55:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:55:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:55:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:55:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:55:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:55:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:57.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:58 compute-0 nova_compute[253661]: 2025-11-22 09:55:58.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:55:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:58.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 331 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:55:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:58 compute-0 ceph-mon[75021]: pgmap v2829: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:55:58 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 331 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:55:59 compute-0 nova_compute[253661]: 2025-11-22 09:55:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:55:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:55:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:59.491+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:55:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:00.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:00 compute-0 ceph-mon[75021]: pgmap v2830: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:01 compute-0 nova_compute[253661]: 2025-11-22 09:56:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:01.448+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:01 compute-0 nova_compute[253661]: 2025-11-22 09:56:01.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:02.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:03 compute-0 ceph-mon[75021]: pgmap v2831: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:03 compute-0 nova_compute[253661]: 2025-11-22 09:56:03.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:56:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:56:03 compute-0 nova_compute[253661]: 2025-11-22 09:56:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:03.462+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 336 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:04 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 336 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:04.462+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:56:04 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201112681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.770 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.770 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.932 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.942649841308594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:04 compute-0 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:05 compute-0 ceph-mon[75021]: pgmap v2832: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:05 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4201112681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.157 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.157 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.158 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.281 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:05.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:56:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303169481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.696 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.703 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.717 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.865 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:56:05 compute-0 nova_compute[253661]: 2025-11-22 09:56:05.866 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/303169481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:06.401+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:06 compute-0 nova_compute[253661]: 2025-11-22 09:56:06.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:06 compute-0 nova_compute[253661]: 2025-11-22 09:56:06.867 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:07 compute-0 ceph-mon[75021]: pgmap v2833: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:07 compute-0 nova_compute[253661]: 2025-11-22 09:56:07.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:07.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:07 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 09:56:07 compute-0 systemd[1]: virtsecretd.service: Consumed 1.223s CPU time.
Nov 22 09:56:08 compute-0 nova_compute[253661]: 2025-11-22 09:56:08.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:08.395+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 341 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:09 compute-0 ceph-mon[75021]: pgmap v2834: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:09 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 341 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:09.414+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:10.404+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:11.383+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:11 compute-0 ceph-mon[75021]: pgmap v2835: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:11 compute-0 nova_compute[253661]: 2025-11-22 09:56:11.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:12.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:56:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:56:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:56:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:56:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:56:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:56:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:13 compute-0 nova_compute[253661]: 2025-11-22 09:56:13.113 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:13.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:13 compute-0 ceph-mon[75021]: pgmap v2836: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:13 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 346 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:14.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:15.383+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:15 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 346 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:15 compute-0 ceph-mon[75021]: pgmap v2837: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 09:56:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:16.334+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:16 compute-0 podman[413816]: 2025-11-22 09:56:16.356206854 +0000 UTC m=+0.051642910 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:56:16 compute-0 podman[413815]: 2025-11-22 09:56:16.37853162 +0000 UTC m=+0.075574858 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:56:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:16 compute-0 nova_compute[253661]: 2025-11-22 09:56:16.750 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:16 compute-0 ceph-mon[75021]: pgmap v2838: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 09:56:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:17.334+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:18 compute-0 nova_compute[253661]: 2025-11-22 09:56:18.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:18.319+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 09:56:18 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 351 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.038 253665 INFO nova.virt.libvirt.driver [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deleting instance files /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9_del
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.039 253665 INFO nova.virt.libvirt.driver [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deletion of /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9_del complete
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.231 253665 INFO nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 57.99 seconds to destroy the instance on the hypervisor.
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.232 253665 DEBUG oslo.service.loopingcall [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.234 253665 DEBUG nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:56:19 compute-0 nova_compute[253661]: 2025-11-22 09:56:19.234 253665 DEBUG nova.network.neutron [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:56:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:19.281+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:19 compute-0 ceph-mon[75021]: pgmap v2839: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 09:56:19 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 351 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:20.276+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 09:56:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:21.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:21 compute-0 ceph-mon[75021]: pgmap v2840: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 09:56:21 compute-0 nova_compute[253661]: 2025-11-22 09:56:21.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:22.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:22.406 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:56:22 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:22.407 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.433 253665 DEBUG nova.network.neutron [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.489 253665 DEBUG nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-deleted-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.490 253665 INFO nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Neutron deleted interface 62358b95-9f4a-404c-8165-dc98c7e3b042; detaching it from the instance and deleting it from the info cache
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.490 253665 DEBUG nova.network.neutron [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.544 253665 INFO nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 3.31 seconds to deallocate network for instance.
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.551 253665 DEBUG nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Detach interface failed, port_id=62358b95-9f4a-404c-8165-dc98c7e3b042, reason: Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.632 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.632 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:22 compute-0 nova_compute[253661]: 2025-11-22 09:56:22.688 253665 DEBUG oslo_concurrency.processutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:56:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811789837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.104 253665 DEBUG oslo_concurrency.processutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.111 253665 DEBUG nova.compute.provider_tree [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.128 253665 DEBUG nova.scheduler.client.report [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:56:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:23.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.351 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:23 compute-0 ceph-mon[75021]: pgmap v2841: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 09:56:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1811789837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:23 compute-0 podman[413912]: 2025-11-22 09:56:23.378601257 +0000 UTC m=+0.078104972 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 09:56:23 compute-0 nova_compute[253661]: 2025-11-22 09:56:23.776 253665 INFO nova.scheduler.client.report [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Deleted allocations for instance 027bdffc-9e8e-4a33-9b06-844890912dc9
Nov 22 09:56:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 356 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:24 compute-0 nova_compute[253661]: 2025-11-22 09:56:24.017 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 62.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:24.193+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:24 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 356 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 597 B/s wr, 33 op/s
Nov 22 09:56:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:25.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:25 compute-0 ceph-mon[75021]: pgmap v2842: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 597 B/s wr, 33 op/s
Nov 22 09:56:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:26.225+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 09:56:26 compute-0 nova_compute[253661]: 2025-11-22 09:56:26.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:27.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:27 compute-0 ceph-mon[75021]: pgmap v2843: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 09:56:27 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:27 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:28 compute-0 nova_compute[253661]: 2025-11-22 09:56:28.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:28.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:56:28.410 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:56:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 09:56:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 361 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:29 compute-0 nova_compute[253661]: 2025-11-22 09:56:29.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:29 compute-0 nova_compute[253661]: 2025-11-22 09:56:29.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:29.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:29 compute-0 ceph-mon[75021]: pgmap v2844: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 09:56:29 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 361 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:30.246+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:31 compute-0 sudo[413939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:31 compute-0 sudo[413939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:31 compute-0 sudo[413939]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:31.270+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:31 compute-0 sudo[413964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:56:31 compute-0 sudo[413964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:31 compute-0 sudo[413964]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:31 compute-0 sudo[413989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:31 compute-0 sudo[413989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:31 compute-0 sudo[413989]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:31 compute-0 sudo[414014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:56:31 compute-0 sudo[414014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:31 compute-0 ceph-mon[75021]: pgmap v2845: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:31 compute-0 nova_compute[253661]: 2025-11-22 09:56:31.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:31 compute-0 sudo[414014]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3e7402e4-3a4a-4226-ac0f-0b8ef682f6f9 does not exist
Nov 22 09:56:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6cc49166-2612-4ea1-b232-b023f80bdccc does not exist
Nov 22 09:56:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 442e7215-c2fd-4fc2-8175-2915a963fcbc does not exist
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:56:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:56:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:56:32 compute-0 sudo[414071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:32 compute-0 sudo[414071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:32 compute-0 sudo[414071]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:32 compute-0 sudo[414096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:56:32 compute-0 sudo[414096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:32 compute-0 sudo[414096]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:32 compute-0 sudo[414121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:32 compute-0 sudo[414121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:32 compute-0 sudo[414121]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:32.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:32 compute-0 sudo[414146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:56:32 compute-0 sudo[414146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:56:32 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:56:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.569993504 +0000 UTC m=+0.040312943 container create b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:56:32 compute-0 systemd[1]: Started libpod-conmon-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope.
Nov 22 09:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.64753785 +0000 UTC m=+0.117857209 container init b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.551733641 +0000 UTC m=+0.022053020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.656268102 +0000 UTC m=+0.126587461 container start b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.659524304 +0000 UTC m=+0.129843663 container attach b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 09:56:32 compute-0 dazzling_fermat[414228]: 167 167
Nov 22 09:56:32 compute-0 systemd[1]: libpod-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope: Deactivated successfully.
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.663435063 +0000 UTC m=+0.133754452 container died b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-227735bb3029afc857e96e406b24bddec5636afe38884ddb9af5a801bd6c54d7-merged.mount: Deactivated successfully.
Nov 22 09:56:32 compute-0 podman[414212]: 2025-11-22 09:56:32.70666019 +0000 UTC m=+0.176979549 container remove b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:56:32 compute-0 systemd[1]: libpod-conmon-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope: Deactivated successfully.
Nov 22 09:56:32 compute-0 podman[414253]: 2025-11-22 09:56:32.875934772 +0000 UTC m=+0.049677250 container create c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:56:32 compute-0 systemd[1]: Started libpod-conmon-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope.
Nov 22 09:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:32 compute-0 podman[414253]: 2025-11-22 09:56:32.855890864 +0000 UTC m=+0.029633362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:32 compute-0 podman[414253]: 2025-11-22 09:56:32.95036196 +0000 UTC m=+0.124104468 container init c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:56:32 compute-0 podman[414253]: 2025-11-22 09:56:32.959510122 +0000 UTC m=+0.133252600 container start c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 22 09:56:32 compute-0 podman[414253]: 2025-11-22 09:56:32.964874198 +0000 UTC m=+0.138616726 container attach c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:56:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:33.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:33 compute-0 nova_compute[253661]: 2025-11-22 09:56:33.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:33 compute-0 ceph-mon[75021]: pgmap v2846: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 366 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:34 compute-0 musing_faraday[414269]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:56:34 compute-0 musing_faraday[414269]: --> relative data size: 1.0
Nov 22 09:56:34 compute-0 musing_faraday[414269]: --> All data devices are unavailable
Nov 22 09:56:34 compute-0 systemd[1]: libpod-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Deactivated successfully.
Nov 22 09:56:34 compute-0 systemd[1]: libpod-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Consumed 1.065s CPU time.
Nov 22 09:56:34 compute-0 podman[414253]: 2025-11-22 09:56:34.069011468 +0000 UTC m=+1.242753976 container died c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f-merged.mount: Deactivated successfully.
Nov 22 09:56:34 compute-0 podman[414253]: 2025-11-22 09:56:34.131500783 +0000 UTC m=+1.305243261 container remove c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:56:34 compute-0 systemd[1]: libpod-conmon-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Deactivated successfully.
Nov 22 09:56:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:34 compute-0 sudo[414146]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:34.158+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:34 compute-0 sudo[414309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:34 compute-0 sudo[414309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:34 compute-0 sudo[414309]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:34 compute-0 sudo[414334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:56:34 compute-0 sudo[414334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:34 compute-0 sudo[414334]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:34 compute-0 sudo[414359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:34 compute-0 sudo[414359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:34 compute-0 sudo[414359]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:34 compute-0 sudo[414384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:56:34 compute-0 sudo[414384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:34 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 366 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.772803486 +0000 UTC m=+0.043267608 container create 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:56:34 compute-0 systemd[1]: Started libpod-conmon-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope.
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.754881411 +0000 UTC m=+0.025345553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.870299348 +0000 UTC m=+0.140763490 container init 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.881242046 +0000 UTC m=+0.151706168 container start 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.885130584 +0000 UTC m=+0.155594766 container attach 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:56:34 compute-0 naughty_ellis[414467]: 167 167
Nov 22 09:56:34 compute-0 systemd[1]: libpod-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope: Deactivated successfully.
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.890542962 +0000 UTC m=+0.161007094 container died 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-65214dd0e8399f487b5c6190ec07f4d528e8f7aef7e11ab40341329cd2c0b65f-merged.mount: Deactivated successfully.
Nov 22 09:56:34 compute-0 podman[414451]: 2025-11-22 09:56:34.939986215 +0000 UTC m=+0.210450337 container remove 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:56:34 compute-0 systemd[1]: libpod-conmon-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope: Deactivated successfully.
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.117325283 +0000 UTC m=+0.038620441 container create 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 09:56:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:35.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:35 compute-0 systemd[1]: Started libpod-conmon-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope.
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.10143924 +0000 UTC m=+0.022734418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.219791721 +0000 UTC m=+0.141086899 container init 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.226505662 +0000 UTC m=+0.147800810 container start 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.229816245 +0000 UTC m=+0.151111423 container attach 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:56:35 compute-0 ceph-mon[75021]: pgmap v2847: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 09:56:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:35 compute-0 epic_bartik[414506]: {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     "0": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "devices": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "/dev/loop3"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             ],
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_name": "ceph_lv0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_size": "21470642176",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "name": "ceph_lv0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "tags": {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_name": "ceph",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.crush_device_class": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.encrypted": "0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_id": "0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.vdo": "0"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             },
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "vg_name": "ceph_vg0"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         }
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     ],
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     "1": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "devices": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "/dev/loop4"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             ],
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_name": "ceph_lv1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_size": "21470642176",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "name": "ceph_lv1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "tags": {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_name": "ceph",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.crush_device_class": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.encrypted": "0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_id": "1",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.vdo": "0"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             },
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "vg_name": "ceph_vg1"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         }
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     ],
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     "2": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "devices": [
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "/dev/loop5"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             ],
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_name": "ceph_lv2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_size": "21470642176",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "name": "ceph_lv2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "tags": {
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.cluster_name": "ceph",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.crush_device_class": "",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.encrypted": "0",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osd_id": "2",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:                 "ceph.vdo": "0"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             },
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "type": "block",
Nov 22 09:56:35 compute-0 epic_bartik[414506]:             "vg_name": "ceph_vg2"
Nov 22 09:56:35 compute-0 epic_bartik[414506]:         }
Nov 22 09:56:35 compute-0 epic_bartik[414506]:     ]
Nov 22 09:56:35 compute-0 epic_bartik[414506]: }
Nov 22 09:56:35 compute-0 systemd[1]: libpod-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope: Deactivated successfully.
Nov 22 09:56:35 compute-0 podman[414490]: 2025-11-22 09:56:35.972963141 +0000 UTC m=+0.894258299 container died 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59-merged.mount: Deactivated successfully.
Nov 22 09:56:36 compute-0 podman[414490]: 2025-11-22 09:56:36.03874955 +0000 UTC m=+0.960044708 container remove 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 09:56:36 compute-0 systemd[1]: libpod-conmon-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope: Deactivated successfully.
Nov 22 09:56:36 compute-0 sudo[414384]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:36 compute-0 sudo[414528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:36 compute-0 sudo[414528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:36 compute-0 sudo[414528]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:36.136+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:36 compute-0 sudo[414553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:56:36 compute-0 sudo[414553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:36 compute-0 sudo[414553]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:36 compute-0 sudo[414578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:36 compute-0 sudo[414578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:36 compute-0 sudo[414578]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:36 compute-0 sudo[414603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:56:36 compute-0 sudo[414603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.666423717 +0000 UTC m=+0.041670557 container create 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:56:36 compute-0 systemd[1]: Started libpod-conmon-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope.
Nov 22 09:56:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.650097793 +0000 UTC m=+0.025344663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.756010909 +0000 UTC m=+0.131257769 container init 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.769285026 +0000 UTC m=+0.144531866 container start 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.772966749 +0000 UTC m=+0.148213789 container attach 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:56:36 compute-0 youthful_solomon[414686]: 167 167
Nov 22 09:56:36 compute-0 systemd[1]: libpod-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope: Deactivated successfully.
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.778885469 +0000 UTC m=+0.154132309 container died 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc35e9ea41ba6dd7acae83ea7e8f3b60cb35d97155683a028fa0dae6e161ec4-merged.mount: Deactivated successfully.
Nov 22 09:56:36 compute-0 nova_compute[253661]: 2025-11-22 09:56:36.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:36 compute-0 podman[414669]: 2025-11-22 09:56:36.855679246 +0000 UTC m=+0.230926086 container remove 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 09:56:36 compute-0 systemd[1]: libpod-conmon-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope: Deactivated successfully.
Nov 22 09:56:37 compute-0 podman[414709]: 2025-11-22 09:56:37.042612907 +0000 UTC m=+0.058753591 container create 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:56:37 compute-0 systemd[1]: Started libpod-conmon-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope.
Nov 22 09:56:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:37.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:56:37 compute-0 podman[414709]: 2025-11-22 09:56:37.018731682 +0000 UTC m=+0.034872396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:56:37 compute-0 podman[414709]: 2025-11-22 09:56:37.131187503 +0000 UTC m=+0.147328187 container init 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 09:56:37 compute-0 podman[414709]: 2025-11-22 09:56:37.13699569 +0000 UTC m=+0.153136354 container start 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:56:37 compute-0 podman[414709]: 2025-11-22 09:56:37.140081599 +0000 UTC m=+0.156222263 container attach 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 09:56:37 compute-0 ceph-mon[75021]: pgmap v2848: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:38.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]: {
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_id": 1,
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "type": "bluestore"
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     },
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_id": 0,
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "type": "bluestore"
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     },
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_id": 2,
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:         "type": "bluestore"
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]:     }
Nov 22 09:56:38 compute-0 exciting_jepsen[414725]: }
Nov 22 09:56:38 compute-0 systemd[1]: libpod-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Deactivated successfully.
Nov 22 09:56:38 compute-0 podman[414709]: 2025-11-22 09:56:38.188709792 +0000 UTC m=+1.204850476 container died 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:56:38 compute-0 systemd[1]: libpod-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Consumed 1.060s CPU time.
Nov 22 09:56:38 compute-0 nova_compute[253661]: 2025-11-22 09:56:38.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5-merged.mount: Deactivated successfully.
Nov 22 09:56:38 compute-0 podman[414709]: 2025-11-22 09:56:38.267689954 +0000 UTC m=+1.283830618 container remove 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:56:38 compute-0 systemd[1]: libpod-conmon-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Deactivated successfully.
Nov 22 09:56:38 compute-0 sudo[414603]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:56:38 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:56:38 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:38 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 403a83a7-6df9-4dbc-aac7-3471d1636031 does not exist
Nov 22 09:56:38 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev eea6a6db-a4b2-4cda-aab0-3d7a9c7686a4 does not exist
Nov 22 09:56:38 compute-0 sudo[414772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:56:38 compute-0 sudo[414772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:38 compute-0 sudo[414772]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:38 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:56:38 compute-0 sudo[414797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:56:38 compute-0 sudo[414797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:56:38 compute-0 sudo[414797]: pam_unix(sudo:session): session closed for user root
Nov 22 09:56:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 371 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:39.066+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:39 compute-0 ceph-mon[75021]: pgmap v2849: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:39 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 371 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:40.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:40 compute-0 ceph-mon[75021]: pgmap v2850: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:41.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:41 compute-0 nova_compute[253661]: 2025-11-22 09:56:41.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:42.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:42 compute-0 ceph-mon[75021]: pgmap v2851: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:43.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:43 compute-0 nova_compute[253661]: 2025-11-22 09:56:43.204 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 376 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:44.093+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:44 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 376 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:44 compute-0 ceph-mon[75021]: pgmap v2852: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:45.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:46.097+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:46 compute-0 ceph-mon[75021]: pgmap v2853: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:46 compute-0 nova_compute[253661]: 2025-11-22 09:56:46.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:47.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:47 compute-0 podman[414823]: 2025-11-22 09:56:47.411513775 +0000 UTC m=+0.092575809 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 09:56:47 compute-0 podman[414824]: 2025-11-22 09:56:47.425749636 +0000 UTC m=+0.097749360 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:56:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:48.160+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:48 compute-0 nova_compute[253661]: 2025-11-22 09:56:48.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:48 compute-0 ceph-mon[75021]: pgmap v2854: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:48 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 387 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:49.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:49 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 387 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:50.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:50 compute-0 ceph-mon[75021]: pgmap v2855: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:51.120+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:51 compute-0 nova_compute[253661]: 2025-11-22 09:56:51.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:52.126+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:56:52
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:52 compute-0 ceph-mon[75021]: pgmap v2856: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:56:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:56:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:53.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:53 compute-0 nova_compute[253661]: 2025-11-22 09:56:53.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:54.140+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:54 compute-0 nova_compute[253661]: 2025-11-22 09:56:54.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:54 compute-0 nova_compute[253661]: 2025-11-22 09:56:54.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:56:54 compute-0 nova_compute[253661]: 2025-11-22 09:56:54.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:56:54 compute-0 nova_compute[253661]: 2025-11-22 09:56:54.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:56:54 compute-0 nova_compute[253661]: 2025-11-22 09:56:54.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:54 compute-0 podman[414859]: 2025-11-22 09:56:54.427769611 +0000 UTC m=+0.117703395 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 09:56:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 392 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:54 compute-0 ceph-mon[75021]: pgmap v2857: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:55.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:55 compute-0 nova_compute[253661]: 2025-11-22 09:56:55.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:55 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 392 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:56:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:56:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:56:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:56:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:56:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:56:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:56.086+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:56 compute-0 ceph-mon[75021]: pgmap v2858: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:56 compute-0 nova_compute[253661]: 2025-11-22 09:56:56.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:57.060+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.216 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.304 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 09:56:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:56:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:56:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:56:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:56:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.464 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.465 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.474 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.475 253665 INFO nova.compute.claims [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 09:56:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:57 compute-0 nova_compute[253661]: 2025-11-22 09:56:57.964 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:58.035+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:56:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:56:58 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257337994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.489 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.495 253665 DEBUG nova.compute.provider_tree [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.513 253665 DEBUG nova.scheduler.client.report [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.544 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.545 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.608 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.609 253665 DEBUG nova.network.neutron [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.663 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.682 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 09:56:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:58 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2257337994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:56:58 compute-0 ceph-mon[75021]: pgmap v2859: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.849 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.851 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.852 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating image(s)
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.887 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:56:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.919 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.924553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418924635, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1184, "num_deletes": 250, "total_data_size": 1344088, "memory_usage": 1377912, "flush_reason": "Manual Compaction"}
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.950 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:56:58 compute-0 nova_compute[253661]: 2025-11-22 09:56:58.955 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418967458, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 856888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59255, "largest_seqno": 60438, "table_properties": {"data_size": 852505, "index_size": 1714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13468, "raw_average_key_size": 21, "raw_value_size": 842304, "raw_average_value_size": 1349, "num_data_blocks": 75, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805338, "oldest_key_time": 1763805338, "file_creation_time": 1763805418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 42976 microseconds, and 3410 cpu microseconds.
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.967538) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 856888 bytes OK
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.967563) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981729) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981789) EVENT_LOG_v1 {"time_micros": 1763805418981776, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1338502, prev total WAL file size 1338502, number of live WAL files 2.
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.982719) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323536' seq:72057594037927935, type:22 .. '6D6772737461740032353037' seq:0, type:0; will stop at (end)
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(836KB)], [137(10MB)]
Nov 22 09:56:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418982761, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11505197, "oldest_snapshot_seqno": -1}
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.012 253665 DEBUG nova.network.neutron [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.012 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 09:56:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:59.041+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:56:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.049 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.050 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.051 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.052 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8444 keys, 8636317 bytes, temperature: kUnknown
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419071961, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8636317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8584937, "index_size": 29187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 220739, "raw_average_key_size": 26, "raw_value_size": 8439245, "raw_average_value_size": 999, "num_data_blocks": 1121, "num_entries": 8444, "num_filter_entries": 8444, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.072368) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8636317 bytes
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.075380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.8 rd, 96.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(23.5) write-amplify(10.1) OK, records in: 8921, records dropped: 477 output_compression: NoCompression
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.075407) EVENT_LOG_v1 {"time_micros": 1763805419075394, "job": 84, "event": "compaction_finished", "compaction_time_micros": 89316, "compaction_time_cpu_micros": 26015, "output_level": 6, "num_output_files": 1, "total_output_size": 8636317, "num_input_records": 8921, "num_output_records": 8444, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419075740, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419077789, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.982611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.081 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:56:59 compute-0 nova_compute[253661]: 2025-11-22 09:56:59.086 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a7eff414-1d1e-4670-a9ca-5477d690015b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:56:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.080 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a7eff414-1d1e-4670-a9ca-5477d690015b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.994s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.154 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] resizing rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.422 253665 DEBUG nova.objects.instance [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'migration_context' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.439 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.440 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Ensure instance console log exists: /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.443 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.446 253665 WARNING nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.452 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.453 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.456 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.457 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.457 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.458 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.458 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.464 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:57:00 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253056536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.919 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.944 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:57:00 compute-0 nova_compute[253661]: 2025-11-22 09:57:00.948 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:00 compute-0 ceph-mon[75021]: pgmap v2860: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:00 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/253056536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:57:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:01.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 09:57:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335997378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.422 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.424 253665 DEBUG nova.objects.instance [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.442 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <uuid>a7eff414-1d1e-4670-a9ca-5477d690015b</uuid>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <name>instance-00000098</name>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <memory>131072</memory>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <vcpu>1</vcpu>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <metadata>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1241444193</nova:name>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:creationTime>2025-11-22 09:57:00</nova:creationTime>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:flavor name="m1.nano">
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:memory>128</nova:memory>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:disk>1</nova:disk>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:swap>0</nova:swap>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:vcpus>1</nova:vcpus>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </nova:flavor>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:owner>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:user uuid="c1b227b0892a49698bf98933a153ab9c">tempest-AggregatesAdminTestJSON-562303690-project-member</nova:user>
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <nova:project uuid="1aa9495592994e239bcee4c6e795c5d2">tempest-AggregatesAdminTestJSON-562303690</nova:project>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </nova:owner>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <nova:ports/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </nova:instance>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </metadata>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <sysinfo type="smbios">
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <system>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="manufacturer">RDO</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="product">OpenStack Compute</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="serial">a7eff414-1d1e-4670-a9ca-5477d690015b</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="uuid">a7eff414-1d1e-4670-a9ca-5477d690015b</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <entry name="family">Virtual Machine</entry>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </system>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </sysinfo>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <os>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <boot dev="hd"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <smbios mode="sysinfo"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </os>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <features>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <acpi/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <apic/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <vmcoreinfo/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </features>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <clock offset="utc">
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <timer name="hpet" present="no"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </clock>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <cpu mode="host-model" match="exact">
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </cpu>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   <devices>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <disk type="network" device="disk">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a7eff414-1d1e-4670-a9ca-5477d690015b_disk">
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <target dev="vda" bus="virtio"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <disk type="network" device="cdrom">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <driver type="raw" cache="none"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <source protocol="rbd" name="vms/a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config">
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <host name="192.168.122.100" port="6789"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </source>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <auth username="openstack">
Nov 22 09:57:01 compute-0 nova_compute[253661]:         <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       </auth>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <target dev="sda" bus="sata"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </disk>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <serial type="pty">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <log file="/var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/console.log" append="off"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </serial>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <video>
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <model type="virtio"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </video>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <input type="tablet" bus="usb"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <rng model="virtio">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <backend model="random">/dev/urandom</backend>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </rng>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <controller type="usb" index="0"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     <memballoon model="virtio">
Nov 22 09:57:01 compute-0 nova_compute[253661]:       <stats period="10"/>
Nov 22 09:57:01 compute-0 nova_compute[253661]:     </memballoon>
Nov 22 09:57:01 compute-0 nova_compute[253661]:   </devices>
Nov 22 09:57:01 compute-0 nova_compute[253661]: </domain>
Nov 22 09:57:01 compute-0 nova_compute[253661]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.492 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.493 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.493 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Using config drive
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.511 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:57:01 compute-0 nova_compute[253661]: 2025-11-22 09:57:01.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:01 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1335997378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 09:57:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:02.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:02 compute-0 nova_compute[253661]: 2025-11-22 09:57:02.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:03 compute-0 ceph-mon[75021]: pgmap v2861: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:57:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:57:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:03.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.275 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating config drive at /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.280 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8h1uymm8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.432 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8h1uymm8" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.466 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.471 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.668 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:03 compute-0 nova_compute[253661]: 2025-11-22 09:57:03.669 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deleting local config drive /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config because it was imported into RBD.
Nov 22 09:57:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 09:57:03 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 09:57:03 compute-0 systemd-machined[215941]: New machine qemu-186-instance-00000098.
Nov 22 09:57:03 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 396 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.923022) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423923095, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 255, "total_data_size": 75399, "memory_usage": 81608, "flush_reason": "Manual Compaction"}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423927008, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 74821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60439, "largest_seqno": 60743, "table_properties": {"data_size": 72831, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4975, "raw_average_key_size": 17, "raw_value_size": 68928, "raw_average_value_size": 244, "num_data_blocks": 7, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805419, "oldest_key_time": 1763805419, "file_creation_time": 1763805423, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 4044 microseconds, and 1488 cpu microseconds.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.927071) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 74821 bytes OK
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.927097) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928891) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928921) EVENT_LOG_v1 {"time_micros": 1763805423928912, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 73177, prev total WAL file size 73177, number of live WAL files 2.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.929984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353130' seq:72057594037927935, type:22 .. '6C6F676D0032373631' seq:0, type:0; will stop at (end)
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(73KB)], [140(8433KB)]
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423930051, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 8711138, "oldest_snapshot_seqno": -1}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8209 keys, 8600641 bytes, temperature: kUnknown
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423988852, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 8600641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8550284, "index_size": 28740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 216812, "raw_average_key_size": 26, "raw_value_size": 8408273, "raw_average_value_size": 1024, "num_data_blocks": 1100, "num_entries": 8209, "num_filter_entries": 8209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805423, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.989101) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 8600641 bytes
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.991107) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(231.4) write-amplify(114.9) OK, records in: 8726, records dropped: 517 output_compression: NoCompression
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.991127) EVENT_LOG_v1 {"time_micros": 1763805423991117, "job": 86, "event": "compaction_finished", "compaction_time_micros": 58878, "compaction_time_cpu_micros": 22447, "output_level": 6, "num_output_files": 1, "total_output_size": 8600641, "num_input_records": 8726, "num_output_records": 8209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423991241, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423992523, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.929805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:57:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:04 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 396 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:04.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.523 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805424.5221908, a7eff414-1d1e-4670-a9ca-5477d690015b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Resumed (Lifecycle Event)
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.532 253665 INFO nova.virt.libvirt.driver [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance spawned successfully.
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.533 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.561 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.568 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.571 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.571 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.573 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.607 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.608 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805424.5248044, a7eff414-1d1e-4670-a9ca-5477d690015b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.609 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Started (Lifecycle Event)
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.626 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.632 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.662 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.682 253665 INFO nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 5.83 seconds to spawn the instance on the hypervisor.
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.682 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:57:04 compute-0 nova_compute[253661]: 2025-11-22 09:57:04.862 253665 INFO nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 7.43 seconds to build instance.
Nov 22 09:57:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:05 compute-0 ceph-mon[75021]: pgmap v2862: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.056 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:05.170+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.261 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:57:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209909598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.805 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.898 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:57:05 compute-0 nova_compute[253661]: 2025-11-22 09:57:05.898 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 09:57:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3209909598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.088 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.089 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3350MB free_disk=59.92204284667969GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.089 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.090 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.192 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a7eff414-1d1e-4670-a9ca-5477d690015b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.193 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.193 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:57:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:06.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.225 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.253 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.254 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.278 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: 0fcd87d5-8036-4d27-8f48-a032d34c7fdf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.325 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.421 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:57:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412150792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.978 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:06 compute-0 nova_compute[253661]: 2025-11-22 09:57:06.985 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.004 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:57:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:07 compute-0 ceph-mon[75021]: pgmap v2863: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 09:57:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/412150792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.040 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:07.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.643 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.644 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.646 253665 INFO nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Terminating instance
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.647 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.647 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquired lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:57:07 compute-0 nova_compute[253661]: 2025-11-22 09:57:07.648 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.037 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.072 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:57:08 compute-0 ovn_controller[152872]: 2025-11-22T09:57:08Z|01650|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:08.234+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.286 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.298 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Releasing lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.299 253665 DEBUG nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:57:08 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 22 09:57:08 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 4.536s CPU time.
Nov 22 09:57:08 compute-0 systemd-machined[215941]: Machine qemu-186-instance-00000098 terminated.
Nov 22 09:57:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.532 253665 INFO nova.virt.libvirt.driver [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance destroyed successfully.
Nov 22 09:57:08 compute-0 nova_compute[253661]: 2025-11-22 09:57:08.533 253665 DEBUG nova.objects.instance [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'resources' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:57:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 401 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:09 compute-0 ceph-mon[75021]: pgmap v2864: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:09 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 401 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.050 253665 INFO nova.virt.libvirt.driver [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deleting instance files /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b_del
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.051 253665 INFO nova.virt.libvirt.driver [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deletion of /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b_del complete
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.130 253665 INFO nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.131 253665 DEBUG oslo.service.loopingcall [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.132 253665 DEBUG nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:57:09 compute-0 nova_compute[253661]: 2025-11-22 09:57:09.132 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:57:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:09.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.068 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.077 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.087 253665 INFO nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 0.95 seconds to deallocate network for instance.
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.122 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.123 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.172 253665 DEBUG oslo_concurrency.processutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:57:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:10.205+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:57:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145690875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.620 253665 DEBUG oslo_concurrency.processutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.626 253665 DEBUG nova.compute.provider_tree [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.639 253665 DEBUG nova.scheduler.client.report [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.656 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.681 253665 INFO nova.scheduler.client.report [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Deleted allocations for instance a7eff414-1d1e-4670-a9ca-5477d690015b
Nov 22 09:57:10 compute-0 nova_compute[253661]: 2025-11-22 09:57:10.738 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:11 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:11 compute-0 ceph-mon[75021]: pgmap v2865: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4145690875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:57:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:11.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:11 compute-0 nova_compute[253661]: 2025-11-22 09:57:11.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:12 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:12.210+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:57:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:57:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:57:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:57:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:13 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:57:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:57:13 compute-0 ceph-mon[75021]: pgmap v2866: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 09:57:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:13.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:13 compute-0 nova_compute[253661]: 2025-11-22 09:57:13.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:13 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 406 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:14 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:14 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 406 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:14.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 09:57:15 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:15 compute-0 ceph-mon[75021]: pgmap v2867: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 09:57:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:15.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:16 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:16.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:57:16 compute-0 nova_compute[253661]: 2025-11-22 09:57:16.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:17 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:17 compute-0 ceph-mon[75021]: pgmap v2868: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:57:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:17.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:18 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:18.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:18 compute-0 nova_compute[253661]: 2025-11-22 09:57:18.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:18 compute-0 podman[415361]: 2025-11-22 09:57:18.385352028 +0000 UTC m=+0.072794117 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 09:57:18 compute-0 podman[415362]: 2025-11-22 09:57:18.434439404 +0000 UTC m=+0.111722885 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 09:57:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:57:18 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 412 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:19 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:19 compute-0 ceph-mon[75021]: pgmap v2869: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 09:57:19 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 412 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:19.181+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:20 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:20.160+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:21 compute-0 ceph-mon[75021]: pgmap v2870: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:21.191+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:21 compute-0 nova_compute[253661]: 2025-11-22 09:57:21.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:22.171+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:23.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:23 compute-0 ceph-mon[75021]: pgmap v2871: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:23 compute-0 nova_compute[253661]: 2025-11-22 09:57:23.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:23 compute-0 nova_compute[253661]: 2025-11-22 09:57:23.531 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805428.52945, a7eff414-1d1e-4670-a9ca-5477d690015b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:57:23 compute-0 nova_compute[253661]: 2025-11-22 09:57:23.531 253665 INFO nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Stopped (Lifecycle Event)
Nov 22 09:57:23 compute-0 nova_compute[253661]: 2025-11-22 09:57:23.565 253665 DEBUG nova.compute.manager [None req-79fec480-9453-4174-a1a4-224de9db142c - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:57:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 416 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:24 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 416 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:24.173+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:25.214+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:25 compute-0 ceph-mon[75021]: pgmap v2872: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 09:57:25 compute-0 podman[415399]: 2025-11-22 09:57:25.427068811 +0000 UTC m=+0.114913025 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 09:57:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:26.168+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:26 compute-0 nova_compute[253661]: 2025-11-22 09:57:26.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:27.188+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:27 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:57:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:57:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:57:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:28.170+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:28 compute-0 nova_compute[253661]: 2025-11-22 09:57:28.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:28 compute-0 ceph-mon[75021]: pgmap v2873: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:28.694 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:57:28 compute-0 nova_compute[253661]: 2025-11-22 09:57:28.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:28.697 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:57:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 422 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:29.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:29 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 422 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:30.157+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:30 compute-0 ceph-mon[75021]: pgmap v2874: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:31.134+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:31 compute-0 nova_compute[253661]: 2025-11-22 09:57:31.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:32.151+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:32 compute-0 ceph-mon[75021]: pgmap v2875: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:33.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:33 compute-0 nova_compute[253661]: 2025-11-22 09:57:33.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 427 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:34.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:34 compute-0 ceph-mon[75021]: pgmap v2876: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:34 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 427 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:34 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:57:34.699 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:57:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:35.099+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:36.111+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:36 compute-0 ceph-mon[75021]: pgmap v2877: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:36 compute-0 nova_compute[253661]: 2025-11-22 09:57:36.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:37.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:38.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:38 compute-0 nova_compute[253661]: 2025-11-22 09:57:38.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:38 compute-0 ceph-mon[75021]: pgmap v2878: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:38 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:38 compute-0 sudo[415426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:38 compute-0 sudo[415426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:38 compute-0 sudo[415426]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:38 compute-0 sudo[415451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:57:38 compute-0 sudo[415451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:38 compute-0 sudo[415451]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:38 compute-0 sudo[415476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:38 compute-0 sudo[415476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:38 compute-0 sudo[415476]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:38 compute-0 sudo[415501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 09:57:38 compute-0 sudo[415501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 431 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:39 compute-0 sudo[415501]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:57:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:39.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:57:39 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:39 compute-0 sudo[415546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:39 compute-0 sudo[415546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:39 compute-0 sudo[415546]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:39 compute-0 sudo[415571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:57:39 compute-0 sudo[415571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:39 compute-0 sudo[415571]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:39 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 431 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:39 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:39 compute-0 sudo[415596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:39 compute-0 sudo[415596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:39 compute-0 sudo[415596]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:39 compute-0 sudo[415621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:57:39 compute-0 sudo[415621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:40 compute-0 sudo[415621]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:40.146+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 59348556-2c42-49a5-8e19-e046b1317b24 does not exist
Nov 22 09:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 788079e9-07f7-42c5-8121-3022c6f562c1 does not exist
Nov 22 09:57:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e9e63380-c8d5-4858-9c2c-7b2a211a5ea4 does not exist
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:57:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:57:40 compute-0 sudo[415677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:40 compute-0 sudo[415677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:40 compute-0 sudo[415677]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:40 compute-0 sudo[415702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:57:40 compute-0 sudo[415702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:40 compute-0 sudo[415702]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:40 compute-0 sudo[415727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:40 compute-0 sudo[415727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:40 compute-0 sudo[415727]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:40 compute-0 sudo[415752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:57:40 compute-0 sudo[415752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:40 compute-0 ceph-mon[75021]: pgmap v2879: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:57:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:57:40 compute-0 podman[415816]: 2025-11-22 09:57:40.936055427 +0000 UTC m=+0.066182149 container create b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:40 compute-0 systemd[1]: Started libpod-conmon-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope.
Nov 22 09:57:40 compute-0 podman[415816]: 2025-11-22 09:57:40.89470443 +0000 UTC m=+0.024831202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:41 compute-0 podman[415816]: 2025-11-22 09:57:41.041958522 +0000 UTC m=+0.172085284 container init b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:57:41 compute-0 podman[415816]: 2025-11-22 09:57:41.050990074 +0000 UTC m=+0.181116806 container start b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:41 compute-0 podman[415816]: 2025-11-22 09:57:41.055109556 +0000 UTC m=+0.185236278 container attach b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 09:57:41 compute-0 vigilant_yonath[415832]: 167 167
Nov 22 09:57:41 compute-0 systemd[1]: libpod-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope: Deactivated successfully.
Nov 22 09:57:41 compute-0 podman[415816]: 2025-11-22 09:57:41.060779605 +0000 UTC m=+0.190906327 container died b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40dea2b50875389bf41afa93007b03c92144d4ce0158b951b08c047c0349db0-merged.mount: Deactivated successfully.
Nov 22 09:57:41 compute-0 podman[415816]: 2025-11-22 09:57:41.102350997 +0000 UTC m=+0.232477699 container remove b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 09:57:41 compute-0 systemd[1]: libpod-conmon-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope: Deactivated successfully.
Nov 22 09:57:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:41.143+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:41 compute-0 podman[415856]: 2025-11-22 09:57:41.260196069 +0000 UTC m=+0.038321333 container create b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:57:41 compute-0 systemd[1]: Started libpod-conmon-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope.
Nov 22 09:57:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:41 compute-0 podman[415856]: 2025-11-22 09:57:41.242563195 +0000 UTC m=+0.020688469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:41 compute-0 podman[415856]: 2025-11-22 09:57:41.341616181 +0000 UTC m=+0.119741435 container init b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:57:41 compute-0 podman[415856]: 2025-11-22 09:57:41.35133219 +0000 UTC m=+0.129457444 container start b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 09:57:41 compute-0 podman[415856]: 2025-11-22 09:57:41.354239172 +0000 UTC m=+0.132364416 container attach b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:57:41 compute-0 nova_compute[253661]: 2025-11-22 09:57:41.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:42.192+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:42 compute-0 ceph-mon[75021]: pgmap v2880: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:42 compute-0 sweet_archimedes[415872]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:57:42 compute-0 sweet_archimedes[415872]: --> relative data size: 1.0
Nov 22 09:57:42 compute-0 sweet_archimedes[415872]: --> All data devices are unavailable
Nov 22 09:57:42 compute-0 systemd[1]: libpod-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Deactivated successfully.
Nov 22 09:57:42 compute-0 systemd[1]: libpod-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Consumed 1.087s CPU time.
Nov 22 09:57:42 compute-0 podman[415856]: 2025-11-22 09:57:42.48245486 +0000 UTC m=+1.260580234 container died b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 09:57:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc-merged.mount: Deactivated successfully.
Nov 22 09:57:42 compute-0 podman[415856]: 2025-11-22 09:57:42.545346597 +0000 UTC m=+1.323471841 container remove b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:57:42 compute-0 systemd[1]: libpod-conmon-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Deactivated successfully.
Nov 22 09:57:42 compute-0 sudo[415752]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:42 compute-0 sudo[415912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:42 compute-0 sudo[415912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:42 compute-0 sudo[415912]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:42 compute-0 sudo[415937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:57:42 compute-0 sudo[415937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:42 compute-0 sudo[415937]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:42 compute-0 sudo[415962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:42 compute-0 sudo[415962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:42 compute-0 sudo[415962]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:42 compute-0 sudo[415987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:57:42 compute-0 sudo[415987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:43.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.276419437 +0000 UTC m=+0.045373687 container create 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:57:43 compute-0 nova_compute[253661]: 2025-11-22 09:57:43.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:43 compute-0 systemd[1]: Started libpod-conmon-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope.
Nov 22 09:57:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.255213526 +0000 UTC m=+0.024167796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.360630418 +0000 UTC m=+0.129584688 container init 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.369898896 +0000 UTC m=+0.138853146 container start 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.373514285 +0000 UTC m=+0.142468545 container attach 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 09:57:43 compute-0 laughing_carson[416067]: 167 167
Nov 22 09:57:43 compute-0 systemd[1]: libpod-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope: Deactivated successfully.
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.377216196 +0000 UTC m=+0.146170486 container died 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 09:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2682cfe65998ff588fdc02a48f6ad3d19b6d44746db664ed58e964066e7e899-merged.mount: Deactivated successfully.
Nov 22 09:57:43 compute-0 podman[416051]: 2025-11-22 09:57:43.422985572 +0000 UTC m=+0.191939842 container remove 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 22 09:57:43 compute-0 systemd[1]: libpod-conmon-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope: Deactivated successfully.
Nov 22 09:57:43 compute-0 podman[416091]: 2025-11-22 09:57:43.615328462 +0000 UTC m=+0.057612248 container create b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 09:57:43 compute-0 systemd[1]: Started libpod-conmon-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope.
Nov 22 09:57:43 compute-0 podman[416091]: 2025-11-22 09:57:43.590757508 +0000 UTC m=+0.033041344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:43 compute-0 podman[416091]: 2025-11-22 09:57:43.706293859 +0000 UTC m=+0.148577665 container init b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 09:57:43 compute-0 podman[416091]: 2025-11-22 09:57:43.720293954 +0000 UTC m=+0.162577740 container start b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:57:43 compute-0 podman[416091]: 2025-11-22 09:57:43.723923233 +0000 UTC m=+0.166207019 container attach b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:43 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 437 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:44.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:44 compute-0 ceph-mon[75021]: pgmap v2881: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:44 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 437 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:44 compute-0 awesome_johnson[416108]: {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     "0": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "devices": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "/dev/loop3"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             ],
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_name": "ceph_lv0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_size": "21470642176",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "name": "ceph_lv0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "tags": {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_name": "ceph",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.crush_device_class": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.encrypted": "0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_id": "0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.vdo": "0"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             },
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "vg_name": "ceph_vg0"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         }
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     ],
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     "1": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "devices": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "/dev/loop4"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             ],
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_name": "ceph_lv1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_size": "21470642176",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "name": "ceph_lv1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "tags": {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_name": "ceph",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.crush_device_class": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.encrypted": "0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_id": "1",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.vdo": "0"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             },
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "vg_name": "ceph_vg1"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         }
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     ],
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     "2": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "devices": [
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "/dev/loop5"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             ],
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_name": "ceph_lv2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_size": "21470642176",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "name": "ceph_lv2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "tags": {
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.cluster_name": "ceph",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.crush_device_class": "",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.encrypted": "0",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osd_id": "2",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:                 "ceph.vdo": "0"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             },
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "type": "block",
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:             "vg_name": "ceph_vg2"
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:         }
Nov 22 09:57:44 compute-0 awesome_johnson[416108]:     ]
Nov 22 09:57:44 compute-0 awesome_johnson[416108]: }
Nov 22 09:57:44 compute-0 systemd[1]: libpod-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope: Deactivated successfully.
Nov 22 09:57:44 compute-0 podman[416117]: 2025-11-22 09:57:44.683676808 +0000 UTC m=+0.035747020 container died b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 09:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af-merged.mount: Deactivated successfully.
Nov 22 09:57:44 compute-0 podman[416117]: 2025-11-22 09:57:44.73824647 +0000 UTC m=+0.090316662 container remove b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 09:57:44 compute-0 systemd[1]: libpod-conmon-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope: Deactivated successfully.
Nov 22 09:57:44 compute-0 sudo[415987]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:44 compute-0 sudo[416132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:44 compute-0 sudo[416132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:44 compute-0 sudo[416132]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:44 compute-0 sudo[416157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:57:44 compute-0 sudo[416157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:44 compute-0 sudo[416157]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:45 compute-0 sudo[416182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:45 compute-0 sudo[416182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:45 compute-0 sudo[416182]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:45 compute-0 sudo[416207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:57:45 compute-0 sudo[416207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:45.248+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.465153167 +0000 UTC m=+0.051437576 container create 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:57:45 compute-0 systemd[1]: Started libpod-conmon-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope.
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.437302182 +0000 UTC m=+0.023586681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.563047205 +0000 UTC m=+0.149331634 container init 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.574944078 +0000 UTC m=+0.161228487 container start 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:45 compute-0 unruffled_lewin[416287]: 167 167
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.578591357 +0000 UTC m=+0.164875766 container attach 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 09:57:45 compute-0 systemd[1]: libpod-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope: Deactivated successfully.
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.581393196 +0000 UTC m=+0.167677605 container died 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 09:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99be7e3fbb9b366c0015f8ff7617f856e744a09d4167313b6c1147bb2779f9d-merged.mount: Deactivated successfully.
Nov 22 09:57:45 compute-0 podman[416271]: 2025-11-22 09:57:45.621025802 +0000 UTC m=+0.207310221 container remove 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 09:57:45 compute-0 systemd[1]: libpod-conmon-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope: Deactivated successfully.
Nov 22 09:57:45 compute-0 podman[416311]: 2025-11-22 09:57:45.824148187 +0000 UTC m=+0.048959115 container create ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:57:45 compute-0 systemd[1]: Started libpod-conmon-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope.
Nov 22 09:57:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:57:45 compute-0 podman[416311]: 2025-11-22 09:57:45.806162075 +0000 UTC m=+0.030973023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:57:45 compute-0 podman[416311]: 2025-11-22 09:57:45.911043244 +0000 UTC m=+0.135854172 container init ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:57:45 compute-0 podman[416311]: 2025-11-22 09:57:45.918541128 +0000 UTC m=+0.143352056 container start ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 09:57:45 compute-0 podman[416311]: 2025-11-22 09:57:45.921574413 +0000 UTC m=+0.146385371 container attach ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 09:57:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:46.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:46 compute-0 ceph-mon[75021]: pgmap v2882: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]: {
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_id": 1,
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "type": "bluestore"
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     },
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_id": 0,
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "type": "bluestore"
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     },
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_id": 2,
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:         "type": "bluestore"
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]:     }
Nov 22 09:57:46 compute-0 cranky_lovelace[416328]: }
Nov 22 09:57:46 compute-0 systemd[1]: libpod-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Deactivated successfully.
Nov 22 09:57:46 compute-0 podman[416311]: 2025-11-22 09:57:46.969601879 +0000 UTC m=+1.194412847 container died ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 09:57:46 compute-0 systemd[1]: libpod-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Consumed 1.055s CPU time.
Nov 22 09:57:47 compute-0 nova_compute[253661]: 2025-11-22 09:57:47.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f-merged.mount: Deactivated successfully.
Nov 22 09:57:47 compute-0 podman[416311]: 2025-11-22 09:57:47.061208692 +0000 UTC m=+1.286019640 container remove ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:57:47 compute-0 systemd[1]: libpod-conmon-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Deactivated successfully.
Nov 22 09:57:47 compute-0 sudo[416207]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:57:47 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:57:47 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:47 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0ce397da-5f87-49a7-905a-5f182341d52c does not exist
Nov 22 09:57:47 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bbc07c5e-5de1-46ea-93e4-4f587066cc3f does not exist
Nov 22 09:57:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:47.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:47 compute-0 sudo[416375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:57:47 compute-0 sudo[416375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:47 compute-0 sudo[416375]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:47 compute-0 sudo[416400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:57:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:47 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:47 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:57:47 compute-0 sudo[416400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:57:47 compute-0 sudo[416400]: pam_unix(sudo:session): session closed for user root
Nov 22 09:57:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:48.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:48 compute-0 nova_compute[253661]: 2025-11-22 09:57:48.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:48 compute-0 ceph-mon[75021]: pgmap v2883: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:48 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 441 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:49.267+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:49 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 441 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:49 compute-0 podman[416425]: 2025-11-22 09:57:49.412011648 +0000 UTC m=+0.095462818 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:57:49 compute-0 podman[416426]: 2025-11-22 09:57:49.416357285 +0000 UTC m=+0.091574573 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 09:57:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:50.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:50 compute-0 ceph-mon[75021]: pgmap v2884: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:51.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:52 compute-0 nova_compute[253661]: 2025-11-22 09:57:52.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:52.256+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:57:52
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', 'vms', 'volumes']
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:57:52 compute-0 ceph-mon[75021]: pgmap v2885: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:57:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:57:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:53.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:53 compute-0 nova_compute[253661]: 2025-11-22 09:57:53.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:53 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 446 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:53 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:54 compute-0 nova_compute[253661]: 2025-11-22 09:57:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:54.230+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:54 compute-0 ceph-mon[75021]: pgmap v2886: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:54 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 446 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:55 compute-0 nova_compute[253661]: 2025-11-22 09:57:55.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:55.227+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:57:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:57:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:57:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:57:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:57:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:56.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:56 compute-0 nova_compute[253661]: 2025-11-22 09:57:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:57:56 compute-0 nova_compute[253661]: 2025-11-22 09:57:56.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:57:56 compute-0 nova_compute[253661]: 2025-11-22 09:57:56.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:57:56 compute-0 nova_compute[253661]: 2025-11-22 09:57:56.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:57:56 compute-0 ceph-mon[75021]: pgmap v2887: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:56 compute-0 podman[416460]: 2025-11-22 09:57:56.448354173 +0000 UTC m=+0.134925910 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 09:57:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:57 compute-0 nova_compute[253661]: 2025-11-22 09:57:57.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:57.191+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:57 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:57:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:57:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:57:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:57:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:57:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:58.184+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:58 compute-0 nova_compute[253661]: 2025-11-22 09:57:58.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:57:58 compute-0 ceph-mon[75021]: pgmap v2888: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:57:58 compute-0 ovn_controller[152872]: 2025-11-22T09:57:58Z|01651|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 09:57:58 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 451 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:57:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:59.138+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:57:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:57:59 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 451 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:57:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:00.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:00 compute-0 nova_compute[253661]: 2025-11-22 09:58:00.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:00 compute-0 nova_compute[253661]: 2025-11-22 09:58:00.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:58:00 compute-0 ceph-mon[75021]: pgmap v2889: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:01.084+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:01 compute-0 nova_compute[253661]: 2025-11-22 09:58:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:02 compute-0 nova_compute[253661]: 2025-11-22 09:58:02.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:02.130+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:02 compute-0 nova_compute[253661]: 2025-11-22 09:58:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:02 compute-0 ceph-mon[75021]: pgmap v2890: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:58:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:58:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:03.156+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:03 compute-0 nova_compute[253661]: 2025-11-22 09:58:03.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:03 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 457 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:03 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:04.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:04 compute-0 ceph-mon[75021]: pgmap v2891: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:04 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 457 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:05.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:58:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:58:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625945087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.732 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.944 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.945 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.945 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:58:05 compute-0 nova_compute[253661]: 2025-11-22 09:58:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.038 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.061 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:58:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:06.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:06 compute-0 ceph-mon[75021]: pgmap v2892: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3625945087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:58:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:58:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771040852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:58:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.525 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.532 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.546 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.751 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:58:06 compute-0 nova_compute[253661]: 2025-11-22 09:58:06.752 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:58:07 compute-0 nova_compute[253661]: 2025-11-22 09:58:07.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:07.195+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1771040852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:58:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:07 compute-0 nova_compute[253661]: 2025-11-22 09:58:07.753 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:07 compute-0 nova_compute[253661]: 2025-11-22 09:58:07.754 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:08.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:08 compute-0 nova_compute[253661]: 2025-11-22 09:58:08.374 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:08 compute-0 ceph-mon[75021]: pgmap v2893: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:08 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 462 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:09.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:09 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 462 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:10.222+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:10 compute-0 ceph-mon[75021]: pgmap v2894: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 09:58:11 compute-0 nova_compute[253661]: 2025-11-22 09:58:11.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:11 compute-0 ceph-mon[75021]: pgmap v2895: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:12 compute-0 nova_compute[253661]: 2025-11-22 09:58:12.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:58:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:58:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:58:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:58:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:58:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:58:13 compute-0 sshd-session[416531]: Connection closed by 92.118.39.92 port 42556
Nov 22 09:58:13 compute-0 nova_compute[253661]: 2025-11-22 09:58:13.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:13 compute-0 ceph-mon[75021]: pgmap v2896: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:13 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 467 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:14 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 467 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:15 compute-0 ceph-mon[75021]: pgmap v2897: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:17 compute-0 nova_compute[253661]: 2025-11-22 09:58:17.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:17 compute-0 ceph-mon[75021]: pgmap v2898: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:18 compute-0 nova_compute[253661]: 2025-11-22 09:58:18.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:18 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 471 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:19 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 471 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:20 compute-0 ceph-mon[75021]: pgmap v2899: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:20 compute-0 podman[416532]: 2025-11-22 09:58:20.404671823 +0000 UTC m=+0.087827671 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:58:20 compute-0 podman[416533]: 2025-11-22 09:58:20.418712799 +0000 UTC m=+0.098823372 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:58:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:22 compute-0 nova_compute[253661]: 2025-11-22 09:58:22.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:22 compute-0 ceph-mon[75021]: pgmap v2900: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.150826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503150919, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 251, "total_data_size": 1353463, "memory_usage": 1386800, "flush_reason": "Manual Compaction"}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503182018, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1321670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60744, "largest_seqno": 61910, "table_properties": {"data_size": 1316476, "index_size": 2461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13356, "raw_average_key_size": 20, "raw_value_size": 1305164, "raw_average_value_size": 2014, "num_data_blocks": 108, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805424, "oldest_key_time": 1763805424, "file_creation_time": 1763805503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 31245 microseconds, and 4666 cpu microseconds.
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.182083) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1321670 bytes OK
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.182113) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.185959) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.185977) EVENT_LOG_v1 {"time_micros": 1763805503185971, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1347947, prev total WAL file size 1347947, number of live WAL files 2.
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186662) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1290KB)], [143(8399KB)]
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503186757, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 9922311, "oldest_snapshot_seqno": -1}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8343 keys, 8428177 bytes, temperature: kUnknown
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503269065, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 8428177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8377374, "index_size": 28866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 220741, "raw_average_key_size": 26, "raw_value_size": 8233321, "raw_average_value_size": 986, "num_data_blocks": 1097, "num_entries": 8343, "num_filter_entries": 8343, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.269375) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 8428177 bytes
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.298766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 102.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.4) OK, records in: 8857, records dropped: 514 output_compression: NoCompression
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.298822) EVENT_LOG_v1 {"time_micros": 1763805503298803, "job": 88, "event": "compaction_finished", "compaction_time_micros": 82398, "compaction_time_cpu_micros": 24586, "output_level": 6, "num_output_files": 1, "total_output_size": 8428177, "num_input_records": 8857, "num_output_records": 8343, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503299631, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503302079, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 09:58:23 compute-0 nova_compute[253661]: 2025-11-22 09:58:23.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:23 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 476 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:24 compute-0 ceph-mon[75021]: pgmap v2901: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:24 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 476 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:26 compute-0 ceph-mon[75021]: pgmap v2902: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 09:58:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:27 compute-0 nova_compute[253661]: 2025-11-22 09:58:27.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:27 compute-0 podman[416572]: 2025-11-22 09:58:27.394516406 +0000 UTC m=+0.084727056 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:58:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.000 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:58:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.000 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:58:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.001 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:58:28 compute-0 ceph-mon[75021]: pgmap v2903: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:28 compute-0 nova_compute[253661]: 2025-11-22 09:58:28.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:28 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 481 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:29 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 481 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:30 compute-0 ceph-mon[75021]: pgmap v2904: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:32 compute-0 nova_compute[253661]: 2025-11-22 09:58:32.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:32 compute-0 ceph-mon[75021]: pgmap v2905: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:33 compute-0 nova_compute[253661]: 2025-11-22 09:58:33.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:33 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 486 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:34 compute-0 ceph-mon[75021]: pgmap v2906: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:34 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 486 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:36 compute-0 ceph-mon[75021]: pgmap v2907: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:37 compute-0 nova_compute[253661]: 2025-11-22 09:58:37.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:38 compute-0 nova_compute[253661]: 2025-11-22 09:58:38.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:38 compute-0 ceph-mon[75021]: pgmap v2908: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:38 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 496 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:39 compute-0 ceph-mon[75021]: pgmap v2909: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:39 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 496 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:41.140+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:41 compute-0 ceph-mon[75021]: pgmap v2910: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:41 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:42.096+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:42 compute-0 nova_compute[253661]: 2025-11-22 09:58:42.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:42 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:43.097+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:43 compute-0 nova_compute[253661]: 2025-11-22 09:58:43.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:44 compute-0 ceph-mon[75021]: pgmap v2911: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:44 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:44.124+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:45 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 501 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:45.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:45 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:46.125+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:46 compute-0 ceph-mon[75021]: pgmap v2912: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:46 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 501 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:46 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:46 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:47.095+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:47 compute-0 nova_compute[253661]: 2025-11-22 09:58:47.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:47 compute-0 sudo[416598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:47 compute-0 sudo[416598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:47 compute-0 sudo[416598]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:47 compute-0 sudo[416623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:58:47 compute-0 sudo[416623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:47 compute-0 sudo[416623]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:47 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:47 compute-0 sudo[416648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:47 compute-0 sudo[416648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:47 compute-0 sudo[416648]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:47 compute-0 sudo[416673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 09:58:47 compute-0 sudo[416673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:48 compute-0 sudo[416673]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:48.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bbffb866-1d01-4afc-a484-4968f512f85e does not exist
Nov 22 09:58:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 79c69271-abb1-452d-b994-531febaca8e1 does not exist
Nov 22 09:58:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fc4a4a62-9f0e-4ec3-957b-c956458be006 does not exist
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 09:58:48 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:58:48 compute-0 sudo[416728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:48 compute-0 sudo[416728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:48 compute-0 sudo[416728]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:48 compute-0 sudo[416753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:58:48 compute-0 sudo[416753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:48 compute-0 sudo[416753]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:48 compute-0 sudo[416778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:48 compute-0 sudo[416778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:48 compute-0 sudo[416778]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:48 compute-0 nova_compute[253661]: 2025-11-22 09:58:48.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:48 compute-0 ceph-mon[75021]: pgmap v2913: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:48 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 09:58:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 09:58:48 compute-0 sudo[416803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 09:58:48 compute-0 sudo[416803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:48 compute-0 podman[416870]: 2025-11-22 09:58:48.868798952 +0000 UTC m=+0.026998485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.074261545 +0000 UTC m=+0.232461038 container create d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:58:49 compute-0 systemd[1]: Started libpod-conmon-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope.
Nov 22 09:58:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:49.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.217417236 +0000 UTC m=+0.375616779 container init d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.23182081 +0000 UTC m=+0.390020303 container start d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 09:58:49 compute-0 xenodochial_payne[416888]: 167 167
Nov 22 09:58:49 compute-0 systemd[1]: libpod-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope: Deactivated successfully.
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.255286927 +0000 UTC m=+0.413486480 container attach d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.256025716 +0000 UTC m=+0.414225189 container died d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 09:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8348f6c8cdb7d6520d3f94f373f2e17afff0c2f69c19d03107d94586c517329-merged.mount: Deactivated successfully.
Nov 22 09:58:49 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:49 compute-0 podman[416870]: 2025-11-22 09:58:49.606215358 +0000 UTC m=+0.764414821 container remove d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:58:49 compute-0 systemd[1]: libpod-conmon-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope: Deactivated successfully.
Nov 22 09:58:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:50.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:50 compute-0 podman[416911]: 2025-11-22 09:58:50.037013203 +0000 UTC m=+0.256991051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:50 compute-0 podman[416911]: 2025-11-22 09:58:50.127745684 +0000 UTC m=+0.347723512 container create 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:58:50 compute-0 systemd[1]: Started libpod-conmon-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope.
Nov 22 09:58:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:50 compute-0 podman[416911]: 2025-11-22 09:58:50.44916179 +0000 UTC m=+0.669139718 container init 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 09:58:50 compute-0 podman[416911]: 2025-11-22 09:58:50.459138745 +0000 UTC m=+0.679116573 container start 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 09:58:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:50 compute-0 podman[416911]: 2025-11-22 09:58:50.656672063 +0000 UTC m=+0.876649931 container attach 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 09:58:50 compute-0 ceph-mon[75021]: pgmap v2914: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:50 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:51.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:51 compute-0 podman[416941]: 2025-11-22 09:58:51.392529321 +0000 UTC m=+0.081023843 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 09:58:51 compute-0 podman[416942]: 2025-11-22 09:58:51.397486893 +0000 UTC m=+0.086010406 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 09:58:51 compute-0 zealous_gagarin[416928]: --> passed data devices: 0 physical, 3 LVM
Nov 22 09:58:51 compute-0 zealous_gagarin[416928]: --> relative data size: 1.0
Nov 22 09:58:51 compute-0 zealous_gagarin[416928]: --> All data devices are unavailable
Nov 22 09:58:51 compute-0 systemd[1]: libpod-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Deactivated successfully.
Nov 22 09:58:51 compute-0 systemd[1]: libpod-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Consumed 1.114s CPU time.
Nov 22 09:58:51 compute-0 podman[416911]: 2025-11-22 09:58:51.644051317 +0000 UTC m=+1.864029165 container died 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:51 compute-0 ceph-mon[75021]: pgmap v2915: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:51 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:52.080+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:52 compute-0 nova_compute[253661]: 2025-11-22 09:58:52.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990-merged.mount: Deactivated successfully.
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:58:52
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:58:52 compute-0 podman[416911]: 2025-11-22 09:58:52.51141654 +0000 UTC m=+2.731394358 container remove 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:58:52 compute-0 sudo[416803]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:52 compute-0 systemd[1]: libpod-conmon-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Deactivated successfully.
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:52 compute-0 sudo[417010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:52 compute-0 sudo[417010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:52 compute-0 sudo[417010]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:52 compute-0 sudo[417035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:58:52 compute-0 sudo[417035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:52 compute-0 sudo[417035]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:58:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:58:52 compute-0 sudo[417060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:52 compute-0 sudo[417060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:52 compute-0 sudo[417060]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:52 compute-0 sudo[417085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 09:58:52 compute-0 sudo[417085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:52 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:53.077+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.345674977 +0000 UTC m=+0.119315225 container create f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.251556442 +0000 UTC m=+0.025196730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:53 compute-0 nova_compute[253661]: 2025-11-22 09:58:53.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:53 compute-0 systemd[1]: Started libpod-conmon-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope.
Nov 22 09:58:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.672955066 +0000 UTC m=+0.446595324 container init f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.687319839 +0000 UTC m=+0.460960107 container start f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 09:58:53 compute-0 sleepy_cartwright[417167]: 167 167
Nov 22 09:58:53 compute-0 systemd[1]: libpod-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope: Deactivated successfully.
Nov 22 09:58:53 compute-0 conmon[417167]: conmon f41cad74bf44254bfd78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope/container/memory.events
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.813499074 +0000 UTC m=+0.587139412 container attach f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.81541221 +0000 UTC m=+0.589052538 container died f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 09:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-12d217134b9be0a1fcc6e0363a32d5fd4ae94c7dec834662298988483b160bc8-merged.mount: Deactivated successfully.
Nov 22 09:58:53 compute-0 podman[417151]: 2025-11-22 09:58:53.968594747 +0000 UTC m=+0.742234985 container remove f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:53 compute-0 ceph-mon[75021]: pgmap v2916: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:53 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:53 compute-0 systemd[1]: libpod-conmon-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope: Deactivated successfully.
Nov 22 09:58:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 506 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:54.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:54 compute-0 podman[417192]: 2025-11-22 09:58:54.206844807 +0000 UTC m=+0.116550558 container create 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 09:58:54 compute-0 podman[417192]: 2025-11-22 09:58:54.132590081 +0000 UTC m=+0.042295802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:54 compute-0 systemd[1]: Started libpod-conmon-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope.
Nov 22 09:58:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:54 compute-0 podman[417192]: 2025-11-22 09:58:54.478016977 +0000 UTC m=+0.387722778 container init 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:58:54 compute-0 podman[417192]: 2025-11-22 09:58:54.493357484 +0000 UTC m=+0.403063195 container start 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:58:54 compute-0 podman[417192]: 2025-11-22 09:58:54.497422224 +0000 UTC m=+0.407127975 container attach 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 09:58:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:55.089+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:55 compute-0 nova_compute[253661]: 2025-11-22 09:58:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:55 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 506 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:55 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]: {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     "0": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "devices": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "/dev/loop3"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             ],
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_name": "ceph_lv0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_size": "21470642176",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "name": "ceph_lv0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "tags": {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_name": "ceph",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.crush_device_class": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.encrypted": "0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_id": "0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.vdo": "0"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             },
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "vg_name": "ceph_vg0"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         }
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     ],
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     "1": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "devices": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "/dev/loop4"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             ],
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_name": "ceph_lv1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_size": "21470642176",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "name": "ceph_lv1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "tags": {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_name": "ceph",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.crush_device_class": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.encrypted": "0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_id": "1",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.vdo": "0"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             },
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "vg_name": "ceph_vg1"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         }
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     ],
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     "2": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "devices": [
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "/dev/loop5"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             ],
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_name": "ceph_lv2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_size": "21470642176",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "name": "ceph_lv2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "tags": {
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.cluster_name": "ceph",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.crush_device_class": "",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.encrypted": "0",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osd_id": "2",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:                 "ceph.vdo": "0"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             },
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "type": "block",
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:             "vg_name": "ceph_vg2"
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:         }
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]:     ]
Nov 22 09:58:55 compute-0 funny_chebyshev[417209]: }
Nov 22 09:58:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:58:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:58:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:58:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:58:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:58:55 compute-0 systemd[1]: libpod-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Deactivated successfully.
Nov 22 09:58:55 compute-0 systemd[1]: libpod-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Consumed 1.290s CPU time.
Nov 22 09:58:55 compute-0 podman[417192]: 2025-11-22 09:58:55.824283288 +0000 UTC m=+1.733989029 container died 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:58:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:56.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211-merged.mount: Deactivated successfully.
Nov 22 09:58:56 compute-0 nova_compute[253661]: 2025-11-22 09:58:56.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:56 compute-0 podman[417192]: 2025-11-22 09:58:56.580879637 +0000 UTC m=+2.490585348 container remove 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:56 compute-0 sudo[417085]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:56 compute-0 ceph-mon[75021]: pgmap v2917: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:56 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:56 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:56 compute-0 sudo[417231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:56 compute-0 systemd[1]: libpod-conmon-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Deactivated successfully.
Nov 22 09:58:56 compute-0 sudo[417231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:56 compute-0 sudo[417231]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:56 compute-0 sudo[417256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:58:56 compute-0 sudo[417256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:56 compute-0 sudo[417256]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:56 compute-0 sudo[417281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:56 compute-0 sudo[417281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:56 compute-0 sudo[417281]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:56 compute-0 sudo[417306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 09:58:56 compute-0 sudo[417306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:57.112+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:57 compute-0 nova_compute[253661]: 2025-11-22 09:58:57.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.195984764 +0000 UTC m=+0.040073976 container create 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 09:58:57 compute-0 systemd[1]: Started libpod-conmon-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope.
Nov 22 09:58:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.269628366 +0000 UTC m=+0.113717588 container init 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.179590702 +0000 UTC m=+0.023679924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.275806327 +0000 UTC m=+0.119895529 container start 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.279234472 +0000 UTC m=+0.123323674 container attach 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:58:57 compute-0 angry_knuth[417388]: 167 167
Nov 22 09:58:57 compute-0 systemd[1]: libpod-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope: Deactivated successfully.
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.28197482 +0000 UTC m=+0.126064022 container died 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 09:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-816483efdfee3468a12ddce582312409b4bc8cdcb2e1d837ad8ec1058b04fff6-merged.mount: Deactivated successfully.
Nov 22 09:58:57 compute-0 podman[417371]: 2025-11-22 09:58:57.319998715 +0000 UTC m=+0.164087917 container remove 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:57 compute-0 systemd[1]: libpod-conmon-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope: Deactivated successfully.
Nov 22 09:58:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:58:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:58:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:58:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:58:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:58:57 compute-0 podman[417413]: 2025-11-22 09:58:57.475742005 +0000 UTC m=+0.039013771 container create db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 09:58:57 compute-0 systemd[1]: Started libpod-conmon-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope.
Nov 22 09:58:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 09:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 09:58:57 compute-0 podman[417413]: 2025-11-22 09:58:57.546896155 +0000 UTC m=+0.110167951 container init db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 09:58:57 compute-0 podman[417413]: 2025-11-22 09:58:57.458942772 +0000 UTC m=+0.022214558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 09:58:57 compute-0 podman[417413]: 2025-11-22 09:58:57.557615469 +0000 UTC m=+0.120887225 container start db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 09:58:57 compute-0 podman[417413]: 2025-11-22 09:58:57.562034808 +0000 UTC m=+0.125306594 container attach db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 09:58:57 compute-0 podman[417427]: 2025-11-22 09:58:57.624604447 +0000 UTC m=+0.111580606 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:58:57 compute-0 ceph-mon[75021]: pgmap v2918: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:57 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:58.090+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:58 compute-0 nova_compute[253661]: 2025-11-22 09:58:58.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:58:58 compute-0 nova_compute[253661]: 2025-11-22 09:58:58.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:58:58 compute-0 nova_compute[253661]: 2025-11-22 09:58:58.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:58:58 compute-0 nova_compute[253661]: 2025-11-22 09:58:58.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:58:58 compute-0 nova_compute[253661]: 2025-11-22 09:58:58.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:58:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:58 compute-0 confident_benz[417430]: {
Nov 22 09:58:58 compute-0 confident_benz[417430]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_id": 1,
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "type": "bluestore"
Nov 22 09:58:58 compute-0 confident_benz[417430]:     },
Nov 22 09:58:58 compute-0 confident_benz[417430]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_id": 0,
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "type": "bluestore"
Nov 22 09:58:58 compute-0 confident_benz[417430]:     },
Nov 22 09:58:58 compute-0 confident_benz[417430]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_id": 2,
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 09:58:58 compute-0 confident_benz[417430]:         "type": "bluestore"
Nov 22 09:58:58 compute-0 confident_benz[417430]:     }
Nov 22 09:58:58 compute-0 confident_benz[417430]: }
Nov 22 09:58:58 compute-0 systemd[1]: libpod-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Deactivated successfully.
Nov 22 09:58:58 compute-0 systemd[1]: libpod-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Consumed 1.039s CPU time.
Nov 22 09:58:58 compute-0 podman[417413]: 2025-11-22 09:58:58.592707816 +0000 UTC m=+1.155979582 container died db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 09:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59-merged.mount: Deactivated successfully.
Nov 22 09:58:58 compute-0 podman[417413]: 2025-11-22 09:58:58.682285669 +0000 UTC m=+1.245557435 container remove db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:58:58 compute-0 systemd[1]: libpod-conmon-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Deactivated successfully.
Nov 22 09:58:58 compute-0 sudo[417306]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 09:58:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 09:58:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 793b5e67-da40-4dea-993a-ee9a773df2ff does not exist
Nov 22 09:58:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 57cbaa32-fd3e-44f4-94da-ce8dbcf2155a does not exist
Nov 22 09:58:58 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:58 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 09:58:58 compute-0 sudo[417498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:58:58 compute-0 sudo[417498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:58 compute-0 sudo[417498]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:58 compute-0 sudo[417523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 09:58:58 compute-0 sudo[417523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:58:58 compute-0 sudo[417523]: pam_unix(sudo:session): session closed for user root
Nov 22 09:58:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 516 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:58:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:58:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:59.072+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:58:59 compute-0 ceph-mon[75021]: pgmap v2919: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:58:59 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 516 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:58:59 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:00.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:00 compute-0 nova_compute[253661]: 2025-11-22 09:59:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:00 compute-0 nova_compute[253661]: 2025-11-22 09:59:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:59:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:00 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:00.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:01.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:02 compute-0 nova_compute[253661]: 2025-11-22 09:59:02.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:02 compute-0 ceph-mon[75021]: pgmap v2920: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:02 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:02.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 09:59:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 09:59:03 compute-0 nova_compute[253661]: 2025-11-22 09:59:03.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:03 compute-0 nova_compute[253661]: 2025-11-22 09:59:03.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:03 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:03 compute-0 nova_compute[253661]: 2025-11-22 09:59:03.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:03 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 09:59:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:03.883+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:04 compute-0 ceph-mon[75021]: pgmap v2921: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:04 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:04.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:59:05 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 521 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:05 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:05 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:59:05 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484235646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.851 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.917 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.918 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:59:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:05.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:05 compute-0 nova_compute[253661]: 2025-11-22 09:59:05.934 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:59:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 09:59:06 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267733888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:59:06 compute-0 ceph-mon[75021]: pgmap v2922: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:06 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:06 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 521 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:06 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2484235646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:59:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:06 compute-0 nova_compute[253661]: 2025-11-22 09:59:06.570 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:59:06 compute-0 nova_compute[253661]: 2025-11-22 09:59:06.576 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:59:06 compute-0 nova_compute[253661]: 2025-11-22 09:59:06.590 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:59:06 compute-0 nova_compute[253661]: 2025-11-22 09:59:06.591 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:59:06 compute-0 nova_compute[253661]: 2025-11-22 09:59:06.592 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:59:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:06.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:07 compute-0 nova_compute[253661]: 2025-11-22 09:59:07.175 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:07 compute-0 nova_compute[253661]: 2025-11-22 09:59:07.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:07 compute-0 nova_compute[253661]: 2025-11-22 09:59:07.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:07 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:07 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4267733888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 09:59:07 compute-0 ceph-mon[75021]: pgmap v2923: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:07 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:07.915+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:08 compute-0 nova_compute[253661]: 2025-11-22 09:59:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:08 compute-0 nova_compute[253661]: 2025-11-22 09:59:08.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:08.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:09 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:09.894+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:10 compute-0 ceph-mon[75021]: pgmap v2924: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:10 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:10.927+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:11 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:11 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:11.897+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:12 compute-0 nova_compute[253661]: 2025-11-22 09:59:12.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 09:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:59:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 09:59:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:59:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:12 compute-0 ceph-mon[75021]: pgmap v2925: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:12 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 09:59:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 09:59:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:12.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:13 compute-0 nova_compute[253661]: 2025-11-22 09:59:13.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:13 compute-0 nova_compute[253661]: 2025-11-22 09:59:13.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:59:13 compute-0 nova_compute[253661]: 2025-11-22 09:59:13.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:13 compute-0 ceph-mon[75021]: pgmap v2926: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:13 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:13.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 526 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:14 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:14 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 526 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:14.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:15.975+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:16 compute-0 ceph-mon[75021]: pgmap v2927: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:16 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:16.969+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:17 compute-0 nova_compute[253661]: 2025-11-22 09:59:17.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:17 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:17 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:18.006+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:18 compute-0 nova_compute[253661]: 2025-11-22 09:59:18.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:18 compute-0 ceph-mon[75021]: pgmap v2928: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:18 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:18.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 536 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:19 compute-0 ceph-mon[75021]: pgmap v2929: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:19 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:19 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 536 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:19.995+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:20 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:21.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:21 compute-0 ceph-mon[75021]: pgmap v2930: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:21 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:22.080+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:22 compute-0 nova_compute[253661]: 2025-11-22 09:59:22.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:22 compute-0 podman[417594]: 2025-11-22 09:59:22.368680703 +0000 UTC m=+0.055103026 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 09:59:22 compute-0 podman[417593]: 2025-11-22 09:59:22.392307704 +0000 UTC m=+0.082451949 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:23.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:23 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:23 compute-0 nova_compute[253661]: 2025-11-22 09:59:23.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:24.149+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:24 compute-0 ceph-mon[75021]: pgmap v2931: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:24 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:25.183+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:25 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 541 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:25 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:26.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:26 compute-0 ceph-mon[75021]: pgmap v2932: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:26 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:26 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 541 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:27 compute-0 nova_compute[253661]: 2025-11-22 09:59:27.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:27.253+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:27 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:27 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.002 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:59:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.003 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:59:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.003 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:59:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:28.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:28 compute-0 ceph-mon[75021]: pgmap v2933: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:28 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:28 compute-0 podman[417631]: 2025-11-22 09:59:28.456859339 +0000 UTC m=+0.144044884 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:59:28 compute-0 nova_compute[253661]: 2025-11-22 09:59:28.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:29.245+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:29 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:30.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:30 compute-0 ceph-mon[75021]: pgmap v2934: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:30 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:31.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:31 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:32 compute-0 nova_compute[253661]: 2025-11-22 09:59:32.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:32.287+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:32 compute-0 ceph-mon[75021]: pgmap v2935: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:32 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:33.333+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:33 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:33 compute-0 nova_compute[253661]: 2025-11-22 09:59:33.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 546 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:34.320+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:34 compute-0 ceph-mon[75021]: pgmap v2936: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:34 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 546 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:34 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:35.336+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:35 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:36.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:36 compute-0 ceph-mon[75021]: pgmap v2937: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:37 compute-0 nova_compute[253661]: 2025-11-22 09:59:37.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:37.425+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:37 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:38 compute-0 ceph-mon[75021]: pgmap v2938: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:38 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:38.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:38 compute-0 nova_compute[253661]: 2025-11-22 09:59:38.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 551 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:39.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:39 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:39 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 551 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:40.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:40 compute-0 ceph-mon[75021]: pgmap v2939: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:40 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:41.479+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:41 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:42 compute-0 nova_compute[253661]: 2025-11-22 09:59:42.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:42.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:42 compute-0 ceph-mon[75021]: pgmap v2940: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:42 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:43.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:43 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:43 compute-0 nova_compute[253661]: 2025-11-22 09:59:43.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 556 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:44.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:44 compute-0 ceph-mon[75021]: pgmap v2941: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:44 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:44 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 556 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:45 compute-0 nova_compute[253661]: 2025-11-22 09:59:45.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:45 compute-0 nova_compute[253661]: 2025-11-22 09:59:45.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:59:45 compute-0 nova_compute[253661]: 2025-11-22 09:59:45.258 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:59:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:45.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:45 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:46.448+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:46 compute-0 ceph-mon[75021]: pgmap v2942: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:46 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:47 compute-0 nova_compute[253661]: 2025-11-22 09:59:47.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:47.450+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:47 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:48.441+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:48 compute-0 ceph-mon[75021]: pgmap v2943: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:48 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:48 compute-0 nova_compute[253661]: 2025-11-22 09:59:48.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 561 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:49.399+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:49 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:49 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 561 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:50.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:50 compute-0 ceph-mon[75021]: pgmap v2944: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:50 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:51.419+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:51 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:51 compute-0 ceph-mon[75021]: pgmap v2945: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:52 compute-0 nova_compute[253661]: 2025-11-22 09:59:52.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:59:52
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 09:59:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:52.453+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:52 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 09:59:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 09:59:53 compute-0 podman[417658]: 2025-11-22 09:59:53.362122054 +0000 UTC m=+0.051713973 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:59:53 compute-0 podman[417659]: 2025-11-22 09:59:53.369036264 +0000 UTC m=+0.053495756 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 22 09:59:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:53.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:53 compute-0 nova_compute[253661]: 2025-11-22 09:59:53.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:53 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:53 compute-0 ceph-mon[75021]: pgmap v2946: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 566 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:54.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:54 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:54 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 566 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:55.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:55 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:55 compute-0 ceph-mon[75021]: pgmap v2947: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 09:59:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:59:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:59:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:59:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:59:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:56.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:56 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:57 compute-0 nova_compute[253661]: 2025-11-22 09:59:57.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:57 compute-0 nova_compute[253661]: 2025-11-22 09:59:57.237 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:57 compute-0 nova_compute[253661]: 2025-11-22 09:59:57.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:59:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:57.426+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 09:59:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 09:59:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 09:59:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 09:59:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 09:59:57 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:57 compute-0 ceph-mon[75021]: pgmap v2948: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:58.474+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:58 compute-0 nova_compute[253661]: 2025-11-22 09:59:58.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:59:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:58 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:59 compute-0 sudo[417697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:59:59 compute-0 sudo[417697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:59:59 compute-0 sudo[417697]: pam_unix(sudo:session): session closed for user root
Nov 22 09:59:59 compute-0 sudo[417728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 09:59:59 compute-0 sudo[417728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:59:59 compute-0 sudo[417728]: pam_unix(sudo:session): session closed for user root
Nov 22 09:59:59 compute-0 podman[417721]: 2025-11-22 09:59:59.109872266 +0000 UTC m=+0.096875103 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 09:59:59 compute-0 sudo[417768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 09:59:59 compute-0 sudo[417768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:59:59 compute-0 sudo[417768]: pam_unix(sudo:session): session closed for user root
Nov 22 09:59:59 compute-0 sudo[417798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 09:59:59 compute-0 sudo[417798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 09:59:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 576 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 09:59:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:59.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 09:59:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:59 compute-0 podman[417896]: 2025-11-22 09:59:59.835970684 +0000 UTC m=+0.093253855 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 22 09:59:59 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 09:59:59 compute-0 ceph-mon[75021]: pgmap v2949: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 09:59:59 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 576 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 09:59:59 compute-0 podman[417896]: 2025-11-22 09:59:59.987356267 +0000 UTC m=+0.244639418 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 10:00:00 compute-0 nova_compute[253661]: 2025-11-22 10:00:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:00 compute-0 nova_compute[253661]: 2025-11-22 10:00:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:00:00 compute-0 nova_compute[253661]: 2025-11-22 10:00:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:00:00 compute-0 nova_compute[253661]: 2025-11-22 10:00:00.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:00:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:00.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:00 compute-0 sudo[417798]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:00:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:00:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:00 compute-0 sudo[418057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:00 compute-0 sudo[418057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:00 compute-0 sudo[418057]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:00 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:00 compute-0 sudo[418082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:00:00 compute-0 sudo[418082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:00 compute-0 sudo[418082]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:01 compute-0 sudo[418107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:01 compute-0 sudo[418107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:01 compute-0 sudo[418107]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:01 compute-0 sudo[418132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:00:01 compute-0 sudo[418132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:01 compute-0 nova_compute[253661]: 2025-11-22 10:00:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:01 compute-0 nova_compute[253661]: 2025-11-22 10:00:01.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:00:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:01.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:01 compute-0 sudo[418132]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3b6478a8-886a-4444-b2bd-d3d56268310a does not exist
Nov 22 10:00:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d5d9bc11-da27-47f6-b276-8491c1020729 does not exist
Nov 22 10:00:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 68891905-0d9d-4af8-9e98-24e2cc03d3e6 does not exist
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:00:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:00:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:00:01 compute-0 sudo[418188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:01 compute-0 sudo[418188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:01 compute-0 sudo[418188]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:01 compute-0 sudo[418213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:00:01 compute-0 sudo[418213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:01 compute-0 sudo[418213]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:01 compute-0 sudo[418238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:01 compute-0 sudo[418238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:01 compute-0 sudo[418238]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:02 compute-0 sudo[418263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:00:02 compute-0 sudo[418263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:02 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:02 compute-0 ceph-mon[75021]: pgmap v2950: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:00:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:00:02 compute-0 nova_compute[253661]: 2025-11-22 10:00:02.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.367795483 +0000 UTC m=+0.046737600 container create d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:00:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:02.402+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:02 compute-0 systemd[1]: Started libpod-conmon-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope.
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.34651213 +0000 UTC m=+0.025454277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.465933887 +0000 UTC m=+0.144876004 container init d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.481708895 +0000 UTC m=+0.160651012 container start d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.485542819 +0000 UTC m=+0.164484936 container attach d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:00:02 compute-0 inspiring_euclid[418344]: 167 167
Nov 22 10:00:02 compute-0 systemd[1]: libpod-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope: Deactivated successfully.
Nov 22 10:00:02 compute-0 conmon[418344]: conmon d030a9ac915a14735241 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope/container/memory.events
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.48963806 +0000 UTC m=+0.168580177 container died d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 10:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b537fefaa098117ee6d2a785d2b301c57794800a81744576fdc76f9183aaccd6-merged.mount: Deactivated successfully.
Nov 22 10:00:02 compute-0 podman[418328]: 2025-11-22 10:00:02.531663933 +0000 UTC m=+0.210606040 container remove d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 10:00:02 compute-0 systemd[1]: libpod-conmon-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope: Deactivated successfully.
Nov 22 10:00:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:02 compute-0 podman[418367]: 2025-11-22 10:00:02.692197552 +0000 UTC m=+0.041121493 container create 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:00:02 compute-0 systemd[1]: Started libpod-conmon-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope.
Nov 22 10:00:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:02 compute-0 podman[418367]: 2025-11-22 10:00:02.675385299 +0000 UTC m=+0.024309270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:02 compute-0 podman[418367]: 2025-11-22 10:00:02.778110985 +0000 UTC m=+0.127034966 container init 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:02 compute-0 podman[418367]: 2025-11-22 10:00:02.785408044 +0000 UTC m=+0.134331985 container start 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:00:02 compute-0 podman[418367]: 2025-11-22 10:00:02.789872234 +0000 UTC m=+0.138796185 container attach 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:00:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:00:03 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:03.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:03 compute-0 nova_compute[253661]: 2025-11-22 10:00:03.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:03 compute-0 frosty_payne[418383]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:00:03 compute-0 frosty_payne[418383]: --> relative data size: 1.0
Nov 22 10:00:03 compute-0 frosty_payne[418383]: --> All data devices are unavailable
Nov 22 10:00:03 compute-0 systemd[1]: libpod-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Deactivated successfully.
Nov 22 10:00:03 compute-0 systemd[1]: libpod-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Consumed 1.034s CPU time.
Nov 22 10:00:03 compute-0 podman[418367]: 2025-11-22 10:00:03.894334477 +0000 UTC m=+1.243258418 container died 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739-merged.mount: Deactivated successfully.
Nov 22 10:00:03 compute-0 podman[418367]: 2025-11-22 10:00:03.957116872 +0000 UTC m=+1.306040803 container remove 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:00:03 compute-0 systemd[1]: libpod-conmon-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Deactivated successfully.
Nov 22 10:00:03 compute-0 sudo[418263]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:04 compute-0 sudo[418423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:04 compute-0 sudo[418423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:04 compute-0 sudo[418423]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:04 compute-0 sudo[418448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:00:04 compute-0 sudo[418448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:04 compute-0 sudo[418448]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:04 compute-0 sudo[418473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:04 compute-0 sudo[418473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:04 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:04 compute-0 ceph-mon[75021]: pgmap v2951: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:04 compute-0 sudo[418473]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:04 compute-0 nova_compute[253661]: 2025-11-22 10:00:04.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:04 compute-0 sudo[418498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:00:04 compute-0 sudo[418498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:04.369+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.743748899 +0000 UTC m=+0.053226431 container create 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 10:00:04 compute-0 systemd[1]: Started libpod-conmon-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope.
Nov 22 10:00:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.722056825 +0000 UTC m=+0.031534367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.829079937 +0000 UTC m=+0.138557549 container init 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.838605152 +0000 UTC m=+0.148082674 container start 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.84220406 +0000 UTC m=+0.151681622 container attach 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 10:00:04 compute-0 hopeful_blackwell[418577]: 167 167
Nov 22 10:00:04 compute-0 systemd[1]: libpod-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope: Deactivated successfully.
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.847653234 +0000 UTC m=+0.157130826 container died 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e441218bf327d6770faea2f2354d9b069853742344a600b2ce8950edaa90b60-merged.mount: Deactivated successfully.
Nov 22 10:00:04 compute-0 podman[418561]: 2025-11-22 10:00:04.898661019 +0000 UTC m=+0.208138551 container remove 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:00:04 compute-0 systemd[1]: libpod-conmon-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope: Deactivated successfully.
Nov 22 10:00:05 compute-0 podman[418601]: 2025-11-22 10:00:05.090958998 +0000 UTC m=+0.049724774 container create abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:05 compute-0 systemd[1]: Started libpod-conmon-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope.
Nov 22 10:00:05 compute-0 podman[418601]: 2025-11-22 10:00:05.067254045 +0000 UTC m=+0.026019861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:05 compute-0 podman[418601]: 2025-11-22 10:00:05.187267857 +0000 UTC m=+0.146033723 container init abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:00:05 compute-0 podman[418601]: 2025-11-22 10:00:05.200190144 +0000 UTC m=+0.158955960 container start abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:00:05 compute-0 podman[418601]: 2025-11-22 10:00:05.20897329 +0000 UTC m=+0.167739096 container attach abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:00:05 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 581 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:05 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:05 compute-0 nova_compute[253661]: 2025-11-22 10:00:05.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:05.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]: {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     "0": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "devices": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "/dev/loop3"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             ],
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_name": "ceph_lv0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_size": "21470642176",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "name": "ceph_lv0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "tags": {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_name": "ceph",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.crush_device_class": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.encrypted": "0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_id": "0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.vdo": "0"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             },
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "vg_name": "ceph_vg0"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         }
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     ],
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     "1": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "devices": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "/dev/loop4"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             ],
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_name": "ceph_lv1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_size": "21470642176",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "name": "ceph_lv1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "tags": {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_name": "ceph",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.crush_device_class": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.encrypted": "0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_id": "1",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.vdo": "0"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             },
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "vg_name": "ceph_vg1"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         }
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     ],
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     "2": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "devices": [
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "/dev/loop5"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             ],
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_name": "ceph_lv2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_size": "21470642176",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "name": "ceph_lv2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "tags": {
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.cluster_name": "ceph",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.crush_device_class": "",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.encrypted": "0",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osd_id": "2",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:                 "ceph.vdo": "0"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             },
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "type": "block",
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:             "vg_name": "ceph_vg2"
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:         }
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]:     ]
Nov 22 10:00:06 compute-0 festive_chebyshev[418617]: }
Nov 22 10:00:06 compute-0 systemd[1]: libpod-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope: Deactivated successfully.
Nov 22 10:00:06 compute-0 podman[418601]: 2025-11-22 10:00:06.053138802 +0000 UTC m=+1.011904578 container died abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c-merged.mount: Deactivated successfully.
Nov 22 10:00:06 compute-0 podman[418601]: 2025-11-22 10:00:06.11892361 +0000 UTC m=+1.077689386 container remove abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:06 compute-0 systemd[1]: libpod-conmon-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope: Deactivated successfully.
Nov 22 10:00:06 compute-0 sudo[418498]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:06 compute-0 nova_compute[253661]: 2025-11-22 10:00:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:06 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:06 compute-0 ceph-mon[75021]: pgmap v2952: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:06 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 581 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:06 compute-0 sudo[418638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:06 compute-0 sudo[418638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:06 compute-0 sudo[418638]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:06 compute-0 sudo[418663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:00:06 compute-0 sudo[418663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:06 compute-0 sudo[418663]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:06 compute-0 sudo[418688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:06 compute-0 sudo[418688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:06 compute-0 sudo[418688]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:06.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:06 compute-0 sudo[418713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:00:06 compute-0 sudo[418713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.761021522 +0000 UTC m=+0.041034791 container create 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 10:00:06 compute-0 systemd[1]: Started libpod-conmon-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope.
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.742099226 +0000 UTC m=+0.022112495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.867137221 +0000 UTC m=+0.147150480 container init 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.873259993 +0000 UTC m=+0.153273252 container start 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.87682527 +0000 UTC m=+0.156838549 container attach 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 10:00:06 compute-0 mystifying_mcclintock[418797]: 167 167
Nov 22 10:00:06 compute-0 systemd[1]: libpod-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope: Deactivated successfully.
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.880266535 +0000 UTC m=+0.160279804 container died 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 10:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c185d7b8c4fa91be9879b6c0e20b0490b0f769965a949b7ba0a822423ba297-merged.mount: Deactivated successfully.
Nov 22 10:00:06 compute-0 podman[418781]: 2025-11-22 10:00:06.921378036 +0000 UTC m=+0.201391295 container remove 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:00:06 compute-0 systemd[1]: libpod-conmon-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope: Deactivated successfully.
Nov 22 10:00:07 compute-0 podman[418824]: 2025-11-22 10:00:07.123014695 +0000 UTC m=+0.046229428 container create ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:00:07 compute-0 systemd[1]: Started libpod-conmon-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope.
Nov 22 10:00:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:00:07 compute-0 podman[418824]: 2025-11-22 10:00:07.104072719 +0000 UTC m=+0.027287452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:00:07 compute-0 podman[418824]: 2025-11-22 10:00:07.215992462 +0000 UTC m=+0.139207215 container init ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:07 compute-0 podman[418824]: 2025-11-22 10:00:07.231059762 +0000 UTC m=+0.154274505 container start ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:00:07 compute-0 podman[418824]: 2025-11-22 10:00:07.235758358 +0000 UTC m=+0.158973111 container attach ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:00:07 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:00:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:07.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:00:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465728619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.674 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.840 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.915 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.916 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:00:07 compute-0 nova_compute[253661]: 2025-11-22 10:00:07.932 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:00:08 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:08 compute-0 ceph-mon[75021]: pgmap v2953: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/465728619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:00:08 compute-0 trusting_fermi[418841]: {
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_id": 1,
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "type": "bluestore"
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     },
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_id": 0,
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "type": "bluestore"
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     },
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_id": 2,
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:         "type": "bluestore"
Nov 22 10:00:08 compute-0 trusting_fermi[418841]:     }
Nov 22 10:00:08 compute-0 trusting_fermi[418841]: }
Nov 22 10:00:08 compute-0 systemd[1]: libpod-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Deactivated successfully.
Nov 22 10:00:08 compute-0 systemd[1]: libpod-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Consumed 1.045s CPU time.
Nov 22 10:00:08 compute-0 podman[418824]: 2025-11-22 10:00:08.283335083 +0000 UTC m=+1.206549826 container died ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb-merged.mount: Deactivated successfully.
Nov 22 10:00:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:08.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:08 compute-0 podman[418824]: 2025-11-22 10:00:08.349805927 +0000 UTC m=+1.273020660 container remove ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 10:00:08 compute-0 systemd[1]: libpod-conmon-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Deactivated successfully.
Nov 22 10:00:08 compute-0 sudo[418713]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:00:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:00:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312017244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:00:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:00:08 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6b2cca4f-b5bb-410c-9a7c-85a192240ec0 does not exist
Nov 22 10:00:08 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fc4db60e-7048-421d-98b9-204e5efa0693 does not exist
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.418 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.426 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.446 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.448 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.448 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:00:08 compute-0 sudo[418929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:00:08 compute-0 sudo[418929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:08 compute-0 sudo[418929]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:08 compute-0 nova_compute[253661]: 2025-11-22 10:00:08.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:08 compute-0 sudo[418954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:00:08 compute-0 sudo[418954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:00:08 compute-0 sudo[418954]: pam_unix(sudo:session): session closed for user root
Nov 22 10:00:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:09 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:09 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3312017244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:00:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:00:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:09.315+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:10 compute-0 ceph-mon[75021]: pgmap v2954: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:10 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:10.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:10 compute-0 nova_compute[253661]: 2025-11-22 10:00:10.450 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:11 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:11.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:12 compute-0 nova_compute[253661]: 2025-11-22 10:00:12.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:12 compute-0 ceph-mon[75021]: pgmap v2955: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:12.386+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:00:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:00:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:00:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:00:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:13 compute-0 nova_compute[253661]: 2025-11-22 10:00:13.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:13 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:00:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:00:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:13.374+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:13 compute-0 nova_compute[253661]: 2025-11-22 10:00:13.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 586 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:14 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:14 compute-0 ceph-mon[75021]: pgmap v2956: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:14 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 586 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:14.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:15 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:15.358+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:16 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:16 compute-0 ceph-mon[75021]: pgmap v2957: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:16 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:16.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:17 compute-0 nova_compute[253661]: 2025-11-22 10:00:17.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:17.302+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:17 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:18.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:18 compute-0 ceph-mon[75021]: pgmap v2958: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:18 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:18 compute-0 nova_compute[253661]: 2025-11-22 10:00:18.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 591 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:19.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:19 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:19 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 591 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:20.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:20 compute-0 ceph-mon[75021]: pgmap v2959: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:20 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:21.174+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:21 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:22.143+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:22 compute-0 nova_compute[253661]: 2025-11-22 10:00:22.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:22 compute-0 ceph-mon[75021]: pgmap v2960: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:22 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:23.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:23 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:23 compute-0 nova_compute[253661]: 2025-11-22 10:00:23.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:24.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 596 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:24 compute-0 ceph-mon[75021]: pgmap v2961: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:24 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:24 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 596 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:24 compute-0 podman[418979]: 2025-11-22 10:00:24.390268706 +0000 UTC m=+0.069004139 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 10:00:24 compute-0 podman[418980]: 2025-11-22 10:00:24.398046717 +0000 UTC m=+0.077414615 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 10:00:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:25.180+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:25 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:25 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:26.149+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:26 compute-0 ceph-mon[75021]: pgmap v2962: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:26 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:27.112+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:27 compute-0 nova_compute[253661]: 2025-11-22 10:00:27.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:27 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.004 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:00:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.005 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:00:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.005 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:00:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:28.087+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:28 compute-0 ceph-mon[75021]: pgmap v2963: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:28 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:28 compute-0 nova_compute[253661]: 2025-11-22 10:00:28.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:29.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 602 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:29 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:29 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 602 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:29 compute-0 podman[419016]: 2025-11-22 10:00:29.426123789 +0000 UTC m=+0.123543039 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:00:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:30.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:30 compute-0 ceph-mon[75021]: pgmap v2964: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:30 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:31.035+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:31 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:32.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:32 compute-0 nova_compute[253661]: 2025-11-22 10:00:32.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:32 compute-0 ceph-mon[75021]: pgmap v2965: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:32 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:33.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:33 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:33 compute-0 nova_compute[253661]: 2025-11-22 10:00:33.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:34.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 606 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:34 compute-0 ceph-mon[75021]: pgmap v2966: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:34 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:34 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 606 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:35.009+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:35 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:36.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:36 compute-0 ceph-mon[75021]: pgmap v2967: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:36 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:37.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:37 compute-0 nova_compute[253661]: 2025-11-22 10:00:37.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:37 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:38.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:38 compute-0 ceph-mon[75021]: pgmap v2968: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:38 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:38 compute-0 nova_compute[253661]: 2025-11-22 10:00:38.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:39.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 611 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:39 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:39 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 611 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:40.141+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:40 compute-0 ceph-mon[75021]: pgmap v2969: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:40 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:41.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:41 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:42.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:42 compute-0 nova_compute[253661]: 2025-11-22 10:00:42.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:42 compute-0 ceph-mon[75021]: pgmap v2970: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:42 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:43.243+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:43 compute-0 ceph-mon[75021]: pgmap v2971: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:43 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:43 compute-0 nova_compute[253661]: 2025-11-22 10:00:43.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 616 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:44.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:44 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 616 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:44 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:45.245+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:45 compute-0 ceph-mon[75021]: pgmap v2972: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:45 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:46.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:46 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:47.194+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:47 compute-0 nova_compute[253661]: 2025-11-22 10:00:47.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:47 compute-0 ceph-mon[75021]: pgmap v2973: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:47 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:48.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:48 compute-0 nova_compute[253661]: 2025-11-22 10:00:48.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:48 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:49.111+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 627 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:49 compute-0 ceph-mon[75021]: pgmap v2974: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:49 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:49 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 627 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:50.090+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:50 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:51.130+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:51 compute-0 ceph-mon[75021]: pgmap v2975: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:51 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:52.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:52 compute-0 nova_compute[253661]: 2025-11-22 10:00:52.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:00:52
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:52 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:00:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:00:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:53.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:53 compute-0 nova_compute[253661]: 2025-11-22 10:00:53.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:53 compute-0 ceph-mon[75021]: pgmap v2976: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:53 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:54.057+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 631 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:54 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:55.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:55 compute-0 podman[419043]: 2025-11-22 10:00:55.372048251 +0000 UTC m=+0.063366891 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 10:00:55 compute-0 podman[419044]: 2025-11-22 10:00:55.375468256 +0000 UTC m=+0.062993482 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:00:55 compute-0 ceph-mon[75021]: pgmap v2977: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:55 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 631 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:00:55 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:00:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:00:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:00:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:00:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:00:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:56.071+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:56 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:57.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:57 compute-0 nova_compute[253661]: 2025-11-22 10:00:57.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:00:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:00:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:00:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:00:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:00:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:00:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 63K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1772 writes, 8277 keys, 1772 commit groups, 1.0 writes per commit group, ingest: 9.43 MB, 0.02 MB/s
                                           Interval WAL: 1772 writes, 1772 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.1      1.81              0.26        44    0.041       0      0       0.0       0.0
                                             L6      1/0    8.04 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   5.0     95.1     80.8      4.39              1.09        43    0.102    279K    23K       0.0       0.0
                                            Sum      1/0    8.04 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.0     67.4     68.7      6.20              1.35        87    0.071    279K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     56.3     56.2      1.10              0.20        12    0.092     51K   3063       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     95.1     80.8      4.39              1.09        43    0.102    279K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     39.2      1.80              0.26        43    0.042       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.069, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 6.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 47.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000397 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3097,45.58 MB,14.9942%) FilterBlock(88,792.30 KB,0.254516%) IndexBlock(88,1.25 MB,0.412524%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 10:00:57 compute-0 ceph-mon[75021]: pgmap v2978: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:57 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:57.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:58 compute-0 nova_compute[253661]: 2025-11-22 10:00:58.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:58 compute-0 nova_compute[253661]: 2025-11-22 10:00:58.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:00:58 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:58.955+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:59 compute-0 nova_compute[253661]: 2025-11-22 10:00:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:00:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:00:59 compute-0 ceph-mon[75021]: pgmap v2979: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:00:59 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:00:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:59.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:00:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:00 compute-0 podman[419082]: 2025-11-22 10:01:00.390383328 +0000 UTC m=+0.089246468 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 10:01:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:00 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:00.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:01.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:01 compute-0 ceph-mon[75021]: pgmap v2980: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:01 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:02 compute-0 CROND[419109]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 10:01:02 compute-0 run-parts[419112]: (/etc/cron.hourly) starting 0anacron
Nov 22 10:01:02 compute-0 run-parts[419118]: (/etc/cron.hourly) finished 0anacron
Nov 22 10:01:02 compute-0 CROND[419108]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 10:01:02 compute-0 nova_compute[253661]: 2025-11-22 10:01:02.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:02 compute-0 nova_compute[253661]: 2025-11-22 10:01:02.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:01:02 compute-0 nova_compute[253661]: 2025-11-22 10:01:02.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:01:02 compute-0 nova_compute[253661]: 2025-11-22 10:01:02.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:02 compute-0 nova_compute[253661]: 2025-11-22 10:01:02.906 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:01:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:02.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:02 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:01:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:01:03 compute-0 nova_compute[253661]: 2025-11-22 10:01:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:03 compute-0 nova_compute[253661]: 2025-11-22 10:01:03.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:01:03 compute-0 nova_compute[253661]: 2025-11-22 10:01:03.637 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:03.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:04 compute-0 ceph-mon[75021]: pgmap v2981: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:04 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:04 compute-0 nova_compute[253661]: 2025-11-22 10:01:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 636 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:04.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:05 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:05 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 636 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:05.865+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:06 compute-0 ceph-mon[75021]: pgmap v2982: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:06 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:06 compute-0 nova_compute[253661]: 2025-11-22 10:01:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:06.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:07 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:01:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604114902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.727 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.895 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:01:07 compute-0 nova_compute[253661]: 2025-11-22 10:01:07.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:01:08 compute-0 ceph-mon[75021]: pgmap v2983: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:08 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:01:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1604114902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.063 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.063 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.169 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:01:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:01:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778119789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:01:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.633 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.640 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:08 compute-0 sudo[419161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:08 compute-0 sudo[419161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.659 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:01:08 compute-0 sudo[419161]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.661 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:01:08 compute-0 nova_compute[253661]: 2025-11-22 10:01:08.662 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:01:08 compute-0 sudo[419188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:01:08 compute-0 sudo[419188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:08 compute-0 sudo[419188]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:08 compute-0 sudo[419213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:08 compute-0 sudo[419213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:08 compute-0 sudo[419213]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:08 compute-0 sudo[419238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:01:08 compute-0 sudo[419238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3778119789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 646 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:09 compute-0 sudo[419238]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:09 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 423f8840-60a5-49a9-9229-09a8e3e8058e does not exist
Nov 22 10:01:09 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 20793c4a-8c75-48bb-b226-8bf09083fec8 does not exist
Nov 22 10:01:09 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b35fe79a-86a6-4607-8ea7-3dacdaf81cbb does not exist
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:01:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:01:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:01:09 compute-0 sudo[419294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:09 compute-0 sudo[419294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:09 compute-0 sudo[419294]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:09 compute-0 sudo[419319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:01:09 compute-0 sudo[419319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:09 compute-0 sudo[419319]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:09 compute-0 sudo[419344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:09 compute-0 sudo[419344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:09 compute-0 sudo[419344]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:09 compute-0 sudo[419369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:01:09 compute-0 sudo[419369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:10 compute-0 ceph-mon[75021]: pgmap v2984: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:10 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 646 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:01:10 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.172224924 +0000 UTC m=+0.037749290 container create f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:01:10 compute-0 systemd[1]: Started libpod-conmon-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope.
Nov 22 10:01:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.154818375 +0000 UTC m=+0.020342781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.262563408 +0000 UTC m=+0.128087794 container init f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.270729149 +0000 UTC m=+0.136253525 container start f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.275723882 +0000 UTC m=+0.141248248 container attach f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:01:10 compute-0 systemd[1]: libpod-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope: Deactivated successfully.
Nov 22 10:01:10 compute-0 priceless_volhard[419451]: 167 167
Nov 22 10:01:10 compute-0 conmon[419451]: conmon f11ad7681db75fc2dedd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope/container/memory.events
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.27887622 +0000 UTC m=+0.144400606 container died f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e80bef68984375350d58472030ffa4f381a18f11ff3b5066d98bbfa84cc81d-merged.mount: Deactivated successfully.
Nov 22 10:01:10 compute-0 podman[419434]: 2025-11-22 10:01:10.316275151 +0000 UTC m=+0.181799507 container remove f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:01:10 compute-0 systemd[1]: libpod-conmon-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope: Deactivated successfully.
Nov 22 10:01:10 compute-0 podman[419474]: 2025-11-22 10:01:10.485160167 +0000 UTC m=+0.044060605 container create 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:01:10 compute-0 systemd[1]: Started libpod-conmon-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope.
Nov 22 10:01:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:10 compute-0 podman[419474]: 2025-11-22 10:01:10.469016741 +0000 UTC m=+0.027917209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:10 compute-0 podman[419474]: 2025-11-22 10:01:10.586352019 +0000 UTC m=+0.145252507 container init 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:01:10 compute-0 podman[419474]: 2025-11-22 10:01:10.594710075 +0000 UTC m=+0.153610513 container start 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:01:10 compute-0 podman[419474]: 2025-11-22 10:01:10.598627761 +0000 UTC m=+0.157528199 container attach 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:01:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:10 compute-0 nova_compute[253661]: 2025-11-22 10:01:10.663 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:01:11 compute-0 objective_shirley[419490]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:01:11 compute-0 objective_shirley[419490]: --> relative data size: 1.0
Nov 22 10:01:11 compute-0 objective_shirley[419490]: --> All data devices are unavailable
Nov 22 10:01:11 compute-0 systemd[1]: libpod-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Deactivated successfully.
Nov 22 10:01:11 compute-0 podman[419474]: 2025-11-22 10:01:11.679838887 +0000 UTC m=+1.238739325 container died 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:01:11 compute-0 systemd[1]: libpod-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Consumed 1.028s CPU time.
Nov 22 10:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb-merged.mount: Deactivated successfully.
Nov 22 10:01:11 compute-0 podman[419474]: 2025-11-22 10:01:11.738216654 +0000 UTC m=+1.297117092 container remove 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 10:01:11 compute-0 systemd[1]: libpod-conmon-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Deactivated successfully.
Nov 22 10:01:11 compute-0 sudo[419369]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:11 compute-0 sudo[419532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:11 compute-0 sudo[419532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:11 compute-0 sudo[419532]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:11 compute-0 sudo[419557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:01:11 compute-0 sudo[419557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:11 compute-0 sudo[419557]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:12 compute-0 sudo[419582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:12 compute-0 sudo[419582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:12 compute-0 sudo[419582]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:12 compute-0 sudo[419607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:01:12 compute-0 sudo[419607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:12 compute-0 ceph-mon[75021]: pgmap v2985: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:12 compute-0 nova_compute[253661]: 2025-11-22 10:01:12.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:01:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:01:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:01:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.466661906 +0000 UTC m=+0.055221901 container create 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:01:12 compute-0 systemd[1]: Started libpod-conmon-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope.
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.437293964 +0000 UTC m=+0.025853999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.558426235 +0000 UTC m=+0.146986250 container init 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.565061318 +0000 UTC m=+0.153621303 container start 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.568985965 +0000 UTC m=+0.157545950 container attach 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:01:12 compute-0 systemd[1]: libpod-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope: Deactivated successfully.
Nov 22 10:01:12 compute-0 gifted_nash[419688]: 167 167
Nov 22 10:01:12 compute-0 conmon[419688]: conmon 5be5866da4bfde19c912 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope/container/memory.events
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.573275381 +0000 UTC m=+0.161835376 container died 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ff346a89dc72178075d54a817eeba0b2d870bb845c69b54fd267055f9b960b1-merged.mount: Deactivated successfully.
Nov 22 10:01:12 compute-0 podman[419672]: 2025-11-22 10:01:12.618635737 +0000 UTC m=+0.207195722 container remove 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:01:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:12 compute-0 systemd[1]: libpod-conmon-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope: Deactivated successfully.
Nov 22 10:01:12 compute-0 podman[419712]: 2025-11-22 10:01:12.796192698 +0000 UTC m=+0.048699760 container create 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:01:12 compute-0 systemd[1]: Started libpod-conmon-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope.
Nov 22 10:01:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:12 compute-0 podman[419712]: 2025-11-22 10:01:12.779710512 +0000 UTC m=+0.032217594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:12 compute-0 podman[419712]: 2025-11-22 10:01:12.89339667 +0000 UTC m=+0.145903762 container init 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 10:01:12 compute-0 podman[419712]: 2025-11-22 10:01:12.900521276 +0000 UTC m=+0.153028338 container start 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 10:01:12 compute-0 podman[419712]: 2025-11-22 10:01:12.904014322 +0000 UTC m=+0.156521384 container attach 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 22 10:01:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:01:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:01:13 compute-0 nova_compute[253661]: 2025-11-22 10:01:13.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]: {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     "0": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "devices": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "/dev/loop3"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             ],
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_name": "ceph_lv0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_size": "21470642176",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "name": "ceph_lv0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "tags": {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_name": "ceph",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.crush_device_class": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.encrypted": "0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_id": "0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.vdo": "0"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             },
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "vg_name": "ceph_vg0"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         }
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     ],
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     "1": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "devices": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "/dev/loop4"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             ],
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_name": "ceph_lv1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_size": "21470642176",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "name": "ceph_lv1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "tags": {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_name": "ceph",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.crush_device_class": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.encrypted": "0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_id": "1",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.vdo": "0"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             },
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "vg_name": "ceph_vg1"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         }
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     ],
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     "2": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "devices": [
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "/dev/loop5"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             ],
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_name": "ceph_lv2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_size": "21470642176",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "name": "ceph_lv2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "tags": {
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.cluster_name": "ceph",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.crush_device_class": "",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.encrypted": "0",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osd_id": "2",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:                 "ceph.vdo": "0"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             },
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "type": "block",
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:             "vg_name": "ceph_vg2"
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:         }
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]:     ]
Nov 22 10:01:13 compute-0 distracted_mccarthy[419729]: }
Nov 22 10:01:13 compute-0 systemd[1]: libpod-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope: Deactivated successfully.
Nov 22 10:01:13 compute-0 podman[419712]: 2025-11-22 10:01:13.686938375 +0000 UTC m=+0.939445437 container died 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06-merged.mount: Deactivated successfully.
Nov 22 10:01:13 compute-0 podman[419712]: 2025-11-22 10:01:13.747478135 +0000 UTC m=+0.999985197 container remove 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:01:13 compute-0 systemd[1]: libpod-conmon-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope: Deactivated successfully.
Nov 22 10:01:13 compute-0 sudo[419607]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:13 compute-0 sudo[419749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:13 compute-0 sudo[419749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:13 compute-0 sudo[419749]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:13 compute-0 sudo[419774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:01:13 compute-0 sudo[419774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:13 compute-0 sudo[419774]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:13 compute-0 sudo[419799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:13 compute-0 sudo[419799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:13 compute-0 sudo[419799]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:14 compute-0 sudo[419824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:01:14 compute-0 sudo[419824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:14 compute-0 ceph-mon[75021]: pgmap v2986: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.40533938 +0000 UTC m=+0.043030321 container create 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 10:01:14 compute-0 systemd[1]: Started libpod-conmon-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope.
Nov 22 10:01:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.478430529 +0000 UTC m=+0.116121470 container init 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.387670765 +0000 UTC m=+0.025361726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.486676252 +0000 UTC m=+0.124367203 container start 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.490493366 +0000 UTC m=+0.128184427 container attach 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:01:14 compute-0 gracious_tesla[419905]: 167 167
Nov 22 10:01:14 compute-0 systemd[1]: libpod-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope: Deactivated successfully.
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.492368892 +0000 UTC m=+0.130059843 container died 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-225d00ea74ea5ff9a8b24dd95211489ca179dc2254e3358c7916f09ccfeb1680-merged.mount: Deactivated successfully.
Nov 22 10:01:14 compute-0 podman[419889]: 2025-11-22 10:01:14.529584739 +0000 UTC m=+0.167275680 container remove 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 10:01:14 compute-0 systemd[1]: libpod-conmon-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope: Deactivated successfully.
Nov 22 10:01:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:14 compute-0 podman[419928]: 2025-11-22 10:01:14.67835122 +0000 UTC m=+0.039890192 container create 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:01:14 compute-0 systemd[1]: Started libpod-conmon-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope.
Nov 22 10:01:14 compute-0 podman[419928]: 2025-11-22 10:01:14.660700787 +0000 UTC m=+0.022239779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:01:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:01:14 compute-0 podman[419928]: 2025-11-22 10:01:14.780842303 +0000 UTC m=+0.142381285 container init 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:01:14 compute-0 podman[419928]: 2025-11-22 10:01:14.787517798 +0000 UTC m=+0.149056770 container start 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:01:14 compute-0 podman[419928]: 2025-11-22 10:01:14.790112432 +0000 UTC m=+0.151651404 container attach 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:01:15 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 651 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.106923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675106957, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2156, "num_deletes": 251, "total_data_size": 2822160, "memory_usage": 2867696, "flush_reason": "Manual Compaction"}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675122278, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2756079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61911, "largest_seqno": 64066, "table_properties": {"data_size": 2747251, "index_size": 5065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22571, "raw_average_key_size": 21, "raw_value_size": 2727553, "raw_average_value_size": 2558, "num_data_blocks": 222, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805504, "oldest_key_time": 1763805504, "file_creation_time": 1763805675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 15409 microseconds, and 5843 cpu microseconds.
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.122327) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2756079 bytes OK
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.122349) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124700) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124713) EVENT_LOG_v1 {"time_micros": 1763805675124709, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2812841, prev total WAL file size 2812841, number of live WAL files 2.
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.125462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2691KB)], [146(8230KB)]
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675125528, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11184256, "oldest_snapshot_seqno": -1}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8895 keys, 9739112 bytes, temperature: kUnknown
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675172603, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 9739112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9683965, "index_size": 31802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 233885, "raw_average_key_size": 26, "raw_value_size": 9529379, "raw_average_value_size": 1071, "num_data_blocks": 1216, "num_entries": 8895, "num_filter_entries": 8895, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:01:15 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 651 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.172889) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 9739112 bytes
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.174248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.1 rd, 206.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 9409, records dropped: 514 output_compression: NoCompression
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.174268) EVENT_LOG_v1 {"time_micros": 1763805675174258, "job": 90, "event": "compaction_finished", "compaction_time_micros": 47169, "compaction_time_cpu_micros": 25902, "output_level": 6, "num_output_files": 1, "total_output_size": 9739112, "num_input_records": 9409, "num_output_records": 8895, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675174901, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675176268, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.125371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]: {
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_id": 1,
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "type": "bluestore"
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     },
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_id": 0,
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "type": "bluestore"
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     },
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_id": 2,
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:         "type": "bluestore"
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]:     }
Nov 22 10:01:15 compute-0 xenodochial_mclean[419944]: }
Nov 22 10:01:15 compute-0 systemd[1]: libpod-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope: Deactivated successfully.
Nov 22 10:01:15 compute-0 podman[419928]: 2025-11-22 10:01:15.763556505 +0000 UTC m=+1.125095477 container died 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55-merged.mount: Deactivated successfully.
Nov 22 10:01:15 compute-0 podman[419928]: 2025-11-22 10:01:15.821501782 +0000 UTC m=+1.183040764 container remove 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:01:15 compute-0 systemd[1]: libpod-conmon-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope: Deactivated successfully.
Nov 22 10:01:15 compute-0 sudo[419824]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:01:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:01:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d0c3f236-5029-4b54-a1ee-b570f66c2ba4 does not exist
Nov 22 10:01:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 031491c6-905f-4353-aaa8-57cf6eec0774 does not exist
Nov 22 10:01:15 compute-0 sudo[419992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:01:15 compute-0 sudo[419992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:15 compute-0 sudo[419992]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:16 compute-0 sudo[420017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:01:16 compute-0 sudo[420017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:01:16 compute-0 sudo[420017]: pam_unix(sudo:session): session closed for user root
Nov 22 10:01:16 compute-0 ceph-mon[75021]: pgmap v2987: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:01:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:17 compute-0 nova_compute[253661]: 2025-11-22 10:01:17.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:18 compute-0 ceph-mon[75021]: pgmap v2988: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:18 compute-0 nova_compute[253661]: 2025-11-22 10:01:18.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:20 compute-0 ceph-mon[75021]: pgmap v2989: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:01:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:22 compute-0 ceph-mon[75021]: pgmap v2990: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:22 compute-0 nova_compute[253661]: 2025-11-22 10:01:22.308 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:23 compute-0 nova_compute[253661]: 2025-11-22 10:01:23.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:24 compute-0 ceph-mon[75021]: pgmap v2991: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 656 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:25 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 656 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:26 compute-0 ceph-mon[75021]: pgmap v2992: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:26 compute-0 podman[420043]: 2025-11-22 10:01:26.392229139 +0000 UTC m=+0.073526951 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 10:01:26 compute-0 podman[420042]: 2025-11-22 10:01:26.41783259 +0000 UTC m=+0.094895608 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:01:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:27 compute-0 nova_compute[253661]: 2025-11-22 10:01:27.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.006 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:01:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.007 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:01:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.007 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:01:28 compute-0 ceph-mon[75021]: pgmap v2993: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:28 compute-0 nova_compute[253661]: 2025-11-22 10:01:28.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 666 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:30 compute-0 ceph-mon[75021]: pgmap v2994: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:30 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 666 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:31 compute-0 podman[420081]: 2025-11-22 10:01:31.417786413 +0000 UTC m=+0.106187505 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 10:01:32 compute-0 ceph-mon[75021]: pgmap v2995: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:32 compute-0 nova_compute[253661]: 2025-11-22 10:01:32.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:33 compute-0 nova_compute[253661]: 2025-11-22 10:01:33.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:34 compute-0 ceph-mon[75021]: pgmap v2996: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:35 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 671 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:35 compute-0 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 671 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:36 compute-0 ceph-mon[75021]: pgmap v2997: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:37 compute-0 nova_compute[253661]: 2025-11-22 10:01:37.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:37.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:38 compute-0 ceph-mon[75021]: pgmap v2998: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:38 compute-0 nova_compute[253661]: 2025-11-22 10:01:38.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:38.888+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:39 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:39.868+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:40 compute-0 ceph-mon[75021]: pgmap v2999: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:40 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:40.914+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:41 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:41.868+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:42 compute-0 nova_compute[253661]: 2025-11-22 10:01:42.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:42 compute-0 ceph-mon[75021]: pgmap v3000: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:42 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:42.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:43 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:43 compute-0 nova_compute[253661]: 2025-11-22 10:01:43.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:43.936+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 676 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.266038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704266084, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 517, "num_deletes": 255, "total_data_size": 467334, "memory_usage": 477616, "flush_reason": "Manual Compaction"}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704271408, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 461638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64067, "largest_seqno": 64583, "table_properties": {"data_size": 458789, "index_size": 820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6710, "raw_average_key_size": 18, "raw_value_size": 453056, "raw_average_value_size": 1248, "num_data_blocks": 36, "num_entries": 363, "num_filter_entries": 363, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805675, "oldest_key_time": 1763805675, "file_creation_time": 1763805704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 5405 microseconds, and 2355 cpu microseconds.
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.271443) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 461638 bytes OK
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.271465) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273204) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273219) EVENT_LOG_v1 {"time_micros": 1763805704273214, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 464348, prev total WAL file size 464348, number of live WAL files 2.
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273655) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373630' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(450KB)], [149(9510KB)]
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704273706, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 10200750, "oldest_snapshot_seqno": -1}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8738 keys, 10069186 bytes, temperature: kUnknown
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704321557, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10069186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10014249, "index_size": 31976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 231560, "raw_average_key_size": 26, "raw_value_size": 9861606, "raw_average_value_size": 1128, "num_data_blocks": 1222, "num_entries": 8738, "num_filter_entries": 8738, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.321886) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10069186 bytes
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.323123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.7 rd, 209.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(43.9) write-amplify(21.8) OK, records in: 9258, records dropped: 520 output_compression: NoCompression
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.323147) EVENT_LOG_v1 {"time_micros": 1763805704323135, "job": 92, "event": "compaction_finished", "compaction_time_micros": 47969, "compaction_time_cpu_micros": 24521, "output_level": 6, "num_output_files": 1, "total_output_size": 10069186, "num_input_records": 9258, "num_output_records": 8738, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704323577, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704326046, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:01:44 compute-0 ceph-mon[75021]: pgmap v3001: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:44 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:44 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 676 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:44.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:45 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:45.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:46 compute-0 ceph-mon[75021]: pgmap v3002: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:46 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:46 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:46.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:47 compute-0 nova_compute[253661]: 2025-11-22 10:01:47.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:47 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:47 compute-0 sshd-session[420108]: Invalid user ubuntu from 92.118.39.92 port 37142
Nov 22 10:01:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:47.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:47 compute-0 sshd-session[420108]: Connection closed by invalid user ubuntu 92.118.39.92 port 37142 [preauth]
Nov 22 10:01:48 compute-0 ceph-mon[75021]: pgmap v3003: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:48 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:48 compute-0 nova_compute[253661]: 2025-11-22 10:01:48.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:48.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 681 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:49 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:49 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 681 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:49.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:50 compute-0 ceph-mon[75021]: pgmap v3004: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:50 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:50.919+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:51 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:51.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:52 compute-0 nova_compute[253661]: 2025-11-22 10:01:52.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:01:52
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', '.rgw.root']
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:01:52 compute-0 ceph-mon[75021]: pgmap v3005: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:52 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:01:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:01:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:52.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:53 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:53 compute-0 nova_compute[253661]: 2025-11-22 10:01:53.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:53.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 686 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:54 compute-0 ceph-mon[75021]: pgmap v3006: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:54 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:54 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 686 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:54.997+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:55 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:01:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:01:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:01:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:01:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:01:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:55.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:56 compute-0 ceph-mon[75021]: pgmap v3007: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:56 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:56.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:57 compute-0 nova_compute[253661]: 2025-11-22 10:01:57.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:01:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:01:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:57.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:01:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:58.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:01:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 691 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:01:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:01:59 compute-0 nova_compute[253661]: 2025-11-22 10:01:59.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:01:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:01:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:01:59 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:01:59 compute-0 podman[420110]: 2025-11-22 10:01:59.364427624 +0000 UTC m=+0.053917228 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 10:01:59 compute-0 podman[420111]: 2025-11-22 10:01:59.393459878 +0000 UTC m=+0.072530445 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 10:01:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:59.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:01:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:00 compute-0 nova_compute[253661]: 2025-11-22 10:02:00.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:00 compute-0 ceph-mon[75021]: pgmap v3008: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:00 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:00 compute-0 ceph-mon[75021]: pgmap v3009: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:00 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:00 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 691 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:00.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:01 compute-0 nova_compute[253661]: 2025-11-22 10:02:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:01 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:01.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:02 compute-0 ceph-mon[75021]: pgmap v3010: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:02 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:02 compute-0 nova_compute[253661]: 2025-11-22 10:02:02.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:02 compute-0 podman[420150]: 2025-11-22 10:02:02.423647833 +0000 UTC m=+0.104993766 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:02:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:02.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:02:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:02:03 compute-0 nova_compute[253661]: 2025-11-22 10:02:03.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:03 compute-0 nova_compute[253661]: 2025-11-22 10:02:03.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:02:03 compute-0 nova_compute[253661]: 2025-11-22 10:02:03.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:02:03 compute-0 nova_compute[253661]: 2025-11-22 10:02:03.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:02:03 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:03.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:04 compute-0 nova_compute[253661]: 2025-11-22 10:02:04.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:04 compute-0 nova_compute[253661]: 2025-11-22 10:02:04.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:04 compute-0 nova_compute[253661]: 2025-11-22 10:02:04.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:02:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 696 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:04 compute-0 nova_compute[253661]: 2025-11-22 10:02:04.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:04 compute-0 ceph-mon[75021]: pgmap v3011: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:04 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:04 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 696 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:04.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:05 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:05.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:06 compute-0 ceph-mon[75021]: pgmap v3012: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:06 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:06.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.262 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:07 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:02:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108095252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:02:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:07.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.970 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.971 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:02:07 compute-0 nova_compute[253661]: 2025-11-22 10:02:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.027 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.027 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.090 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.112 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.113 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.130 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.151 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:02:08 compute-0 ceph-mon[75021]: pgmap v3013: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:08 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:08 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1108095252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:02:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:02:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391868096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.722 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.732 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.749 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.752 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:02:08 compute-0 nova_compute[253661]: 2025-11-22 10:02:08.752 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:02:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:08.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 701 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:09 compute-0 nova_compute[253661]: 2025-11-22 10:02:09.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:09 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2391868096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:02:09 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 701 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:09 compute-0 nova_compute[253661]: 2025-11-22 10:02:09.754 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:09 compute-0 nova_compute[253661]: 2025-11-22 10:02:09.755 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:09.930+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:10 compute-0 ceph-mon[75021]: pgmap v3014: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:10 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:10.923+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:11 compute-0 nova_compute[253661]: 2025-11-22 10:02:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:11 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:11.913+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:12 compute-0 nova_compute[253661]: 2025-11-22 10:02:12.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:12 compute-0 ceph-mon[75021]: pgmap v3015: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:12 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:02:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:02:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:02:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:02:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:12.865+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:13 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:02:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:02:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:13.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 706 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:14 compute-0 nova_compute[253661]: 2025-11-22 10:02:14.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:14 compute-0 ceph-mon[75021]: pgmap v3016: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:14 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:14 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:14 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 706 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:14.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:15 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:02:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:15.912+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:16 compute-0 sudo[420221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:16 compute-0 sudo[420221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:16 compute-0 sudo[420221]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:16 compute-0 sudo[420246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:02:16 compute-0 sudo[420246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:16 compute-0 sudo[420246]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:16 compute-0 sudo[420271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:16 compute-0 sudo[420271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:16 compute-0 sudo[420271]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:16 compute-0 sudo[420296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:02:16 compute-0 sudo[420296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:16 compute-0 ceph-mon[75021]: pgmap v3017: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:16 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:16 compute-0 sudo[420296]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:02:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:16.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4abd0143-4b88-47ac-9225-f334262fe12a does not exist
Nov 22 10:02:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 451fe9f6-50b3-4be6-b89b-e9eb88b9243c does not exist
Nov 22 10:02:16 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3f5872e3-a200-417a-af93-33a1623b0c8b does not exist
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:02:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:02:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:02:17 compute-0 sudo[420352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:17 compute-0 sudo[420352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:17 compute-0 sudo[420352]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:17 compute-0 sudo[420377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:02:17 compute-0 sudo[420377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:17 compute-0 sudo[420377]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:17 compute-0 sudo[420402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:17 compute-0 sudo[420402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:17 compute-0 sudo[420402]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:17 compute-0 nova_compute[253661]: 2025-11-22 10:02:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:02:17 compute-0 sudo[420427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:02:17 compute-0 sudo[420427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:17 compute-0 nova_compute[253661]: 2025-11-22 10:02:17.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:02:17 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:02:17 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:02:17 compute-0 podman[420491]: 2025-11-22 10:02:17.716831705 +0000 UTC m=+0.028264757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:17 compute-0 podman[420491]: 2025-11-22 10:02:17.91810823 +0000 UTC m=+0.229541262 container create 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:17.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:18 compute-0 systemd[1]: Started libpod-conmon-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope.
Nov 22 10:02:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:18 compute-0 podman[420491]: 2025-11-22 10:02:18.205224298 +0000 UTC m=+0.516657350 container init 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 10:02:18 compute-0 podman[420491]: 2025-11-22 10:02:18.214882906 +0000 UTC m=+0.526315938 container start 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 10:02:18 compute-0 sweet_mendeleev[420507]: 167 167
Nov 22 10:02:18 compute-0 systemd[1]: libpod-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope: Deactivated successfully.
Nov 22 10:02:18 compute-0 podman[420491]: 2025-11-22 10:02:18.281831504 +0000 UTC m=+0.593264536 container attach 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:02:18 compute-0 podman[420491]: 2025-11-22 10:02:18.285513124 +0000 UTC m=+0.596946176 container died 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c6ec78a89bc3720527e7af28bbc3ad13b7ebd7a6c4277cdfb4e0304cc6a7ba-merged.mount: Deactivated successfully.
Nov 22 10:02:18 compute-0 podman[420491]: 2025-11-22 10:02:18.543587488 +0000 UTC m=+0.855020550 container remove 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:02:18 compute-0 ceph-mon[75021]: pgmap v3018: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:18 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:18 compute-0 systemd[1]: libpod-conmon-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope: Deactivated successfully.
Nov 22 10:02:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:18 compute-0 podman[420531]: 2025-11-22 10:02:18.746526264 +0000 UTC m=+0.046022674 container create dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 10:02:18 compute-0 systemd[1]: Started libpod-conmon-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope.
Nov 22 10:02:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:18 compute-0 podman[420531]: 2025-11-22 10:02:18.7280943 +0000 UTC m=+0.027590720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:18 compute-0 podman[420531]: 2025-11-22 10:02:18.845101711 +0000 UTC m=+0.144598141 container init dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:02:18 compute-0 podman[420531]: 2025-11-22 10:02:18.853542828 +0000 UTC m=+0.153039228 container start dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:02:18 compute-0 podman[420531]: 2025-11-22 10:02:18.858366467 +0000 UTC m=+0.157862927 container attach dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:02:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:18.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 711 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:19 compute-0 nova_compute[253661]: 2025-11-22 10:02:19.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:19 compute-0 ceph-mon[75021]: pgmap v3019: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:19 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:19 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 711 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:19.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:20 compute-0 lucid_volhard[420548]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:02:20 compute-0 lucid_volhard[420548]: --> relative data size: 1.0
Nov 22 10:02:20 compute-0 lucid_volhard[420548]: --> All data devices are unavailable
Nov 22 10:02:20 compute-0 systemd[1]: libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Deactivated successfully.
Nov 22 10:02:20 compute-0 systemd[1]: libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Consumed 1.200s CPU time.
Nov 22 10:02:20 compute-0 conmon[420548]: conmon dbc66aea6068412ecbce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope/container/memory.events
Nov 22 10:02:20 compute-0 podman[420531]: 2025-11-22 10:02:20.099131431 +0000 UTC m=+1.398627871 container died dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b-merged.mount: Deactivated successfully.
Nov 22 10:02:20 compute-0 podman[420531]: 2025-11-22 10:02:20.173110692 +0000 UTC m=+1.472607102 container remove dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:20 compute-0 systemd[1]: libpod-conmon-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Deactivated successfully.
Nov 22 10:02:20 compute-0 sudo[420427]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:20 compute-0 sudo[420589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:20 compute-0 sudo[420589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:20 compute-0 sudo[420589]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:20 compute-0 sudo[420614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:02:20 compute-0 sudo[420614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:20 compute-0 sudo[420614]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:20 compute-0 sudo[420639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:20 compute-0 sudo[420639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:20 compute-0 sudo[420639]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:20 compute-0 sudo[420664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:02:20 compute-0 sudo[420664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:20 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:20.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:20 compute-0 podman[420729]: 2025-11-22 10:02:20.970728387 +0000 UTC m=+0.052804731 container create 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:02:21 compute-0 systemd[1]: Started libpod-conmon-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope.
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:20.950183321 +0000 UTC m=+0.032259715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:21.067813156 +0000 UTC m=+0.149889520 container init 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:21.075833974 +0000 UTC m=+0.157910318 container start 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:21.079179977 +0000 UTC m=+0.161256341 container attach 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 10:02:21 compute-0 eloquent_fermat[420745]: 167 167
Nov 22 10:02:21 compute-0 systemd[1]: libpod-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope: Deactivated successfully.
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:21.082643822 +0000 UTC m=+0.164720166 container died 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-466cffd2d80954eb467e1c398a7ff69f3c883dcbf62fdbaaceaf614a18f044e2-merged.mount: Deactivated successfully.
Nov 22 10:02:21 compute-0 podman[420729]: 2025-11-22 10:02:21.168815324 +0000 UTC m=+0.250891668 container remove 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:02:21 compute-0 systemd[1]: libpod-conmon-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope: Deactivated successfully.
Nov 22 10:02:21 compute-0 podman[420769]: 2025-11-22 10:02:21.380282349 +0000 UTC m=+0.053425427 container create 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:02:21 compute-0 systemd[1]: Started libpod-conmon-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope.
Nov 22 10:02:21 compute-0 podman[420769]: 2025-11-22 10:02:21.356603606 +0000 UTC m=+0.029746704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:21 compute-0 podman[420769]: 2025-11-22 10:02:21.480597968 +0000 UTC m=+0.153741066 container init 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:02:21 compute-0 podman[420769]: 2025-11-22 10:02:21.488164315 +0000 UTC m=+0.161307383 container start 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:21 compute-0 podman[420769]: 2025-11-22 10:02:21.491801364 +0000 UTC m=+0.164944772 container attach 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:02:21 compute-0 ceph-mon[75021]: pgmap v3020: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:21 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:21.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]: {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     "0": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "devices": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "/dev/loop3"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             ],
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_name": "ceph_lv0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_size": "21470642176",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "name": "ceph_lv0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "tags": {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_name": "ceph",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.crush_device_class": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.encrypted": "0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_id": "0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.vdo": "0"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             },
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "vg_name": "ceph_vg0"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         }
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     ],
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     "1": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "devices": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "/dev/loop4"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             ],
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_name": "ceph_lv1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_size": "21470642176",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "name": "ceph_lv1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "tags": {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_name": "ceph",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.crush_device_class": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.encrypted": "0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_id": "1",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.vdo": "0"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             },
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "vg_name": "ceph_vg1"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         }
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     ],
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     "2": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "devices": [
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "/dev/loop5"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             ],
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_name": "ceph_lv2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_size": "21470642176",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "name": "ceph_lv2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "tags": {
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.cluster_name": "ceph",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.crush_device_class": "",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.encrypted": "0",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osd_id": "2",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:                 "ceph.vdo": "0"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             },
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "type": "block",
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:             "vg_name": "ceph_vg2"
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:         }
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]:     ]
Nov 22 10:02:22 compute-0 eloquent_lumiere[420786]: }
Nov 22 10:02:22 compute-0 systemd[1]: libpod-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope: Deactivated successfully.
Nov 22 10:02:22 compute-0 podman[420769]: 2025-11-22 10:02:22.353710442 +0000 UTC m=+1.026853520 container died 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:22 compute-0 nova_compute[253661]: 2025-11-22 10:02:22.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671-merged.mount: Deactivated successfully.
Nov 22 10:02:22 compute-0 podman[420769]: 2025-11-22 10:02:22.415953234 +0000 UTC m=+1.089096312 container remove 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:22 compute-0 systemd[1]: libpod-conmon-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope: Deactivated successfully.
Nov 22 10:02:22 compute-0 sudo[420664]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:22 compute-0 sudo[420809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:22 compute-0 sudo[420809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:22 compute-0 sudo[420809]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:22 compute-0 sudo[420834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:02:22 compute-0 sudo[420834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:22 compute-0 sudo[420834]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:22 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:22 compute-0 sudo[420859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:22 compute-0 sudo[420859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:22 compute-0 sudo[420859]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:22 compute-0 sudo[420884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:02:22 compute-0 sudo[420884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:22.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.169667228 +0000 UTC m=+0.046787053 container create a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:02:23 compute-0 systemd[1]: Started libpod-conmon-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope.
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.147960223 +0000 UTC m=+0.025080078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.264482032 +0000 UTC m=+0.141601867 container init a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.277635846 +0000 UTC m=+0.154755671 container start a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.281718937 +0000 UTC m=+0.158838742 container attach a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:02:23 compute-0 nervous_borg[420965]: 167 167
Nov 22 10:02:23 compute-0 systemd[1]: libpod-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope: Deactivated successfully.
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.287724564 +0000 UTC m=+0.164844419 container died a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f1e1db2f7740b89efefc7e59c9d2946c7e1ee365620ece45eef74bdd019df1-merged.mount: Deactivated successfully.
Nov 22 10:02:23 compute-0 podman[420949]: 2025-11-22 10:02:23.341482197 +0000 UTC m=+0.218602022 container remove a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 10:02:23 compute-0 systemd[1]: libpod-conmon-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope: Deactivated successfully.
Nov 22 10:02:23 compute-0 podman[420989]: 2025-11-22 10:02:23.564104437 +0000 UTC m=+0.072963757 container create 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:02:23 compute-0 systemd[1]: Started libpod-conmon-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope.
Nov 22 10:02:23 compute-0 podman[420989]: 2025-11-22 10:02:23.536732124 +0000 UTC m=+0.045591524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:02:23 compute-0 ceph-mon[75021]: pgmap v3021: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:23 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:02:23 compute-0 podman[420989]: 2025-11-22 10:02:23.672182769 +0000 UTC m=+0.181042089 container init 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:02:23 compute-0 podman[420989]: 2025-11-22 10:02:23.686660265 +0000 UTC m=+0.195519625 container start 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:02:23 compute-0 podman[420989]: 2025-11-22 10:02:23.691156225 +0000 UTC m=+0.200015585 container attach 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 10:02:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:24.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 716 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:24 compute-0 nova_compute[253661]: 2025-11-22 10:02:24.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:24 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:24 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 716 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]: {
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_id": 1,
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "type": "bluestore"
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     },
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_id": 0,
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "type": "bluestore"
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     },
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_id": 2,
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:         "type": "bluestore"
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]:     }
Nov 22 10:02:24 compute-0 laughing_grothendieck[421005]: }
Nov 22 10:02:24 compute-0 systemd[1]: libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Deactivated successfully.
Nov 22 10:02:24 compute-0 systemd[1]: libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Consumed 1.131s CPU time.
Nov 22 10:02:24 compute-0 conmon[421005]: conmon 642ba035e5b6ddbe31b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope/container/memory.events
Nov 22 10:02:24 compute-0 podman[420989]: 2025-11-22 10:02:24.813543655 +0000 UTC m=+1.322402975 container died 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064-merged.mount: Deactivated successfully.
Nov 22 10:02:24 compute-0 podman[420989]: 2025-11-22 10:02:24.880205036 +0000 UTC m=+1.389064356 container remove 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 10:02:24 compute-0 systemd[1]: libpod-conmon-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Deactivated successfully.
Nov 22 10:02:24 compute-0 sudo[420884]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:02:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:02:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8461952b-e900-4efd-ab4f-9ed7c438ad9f does not exist
Nov 22 10:02:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev aca971f1-23e5-4494-9002-a98de3bbf982 does not exist
Nov 22 10:02:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:24.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:25 compute-0 sudo[421053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:02:25 compute-0 sudo[421053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:25 compute-0 sudo[421053]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:25 compute-0 sudo[421078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:02:25 compute-0 sudo[421078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:02:25 compute-0 sudo[421078]: pam_unix(sudo:session): session closed for user root
Nov 22 10:02:25 compute-0 ceph-mon[75021]: pgmap v3022: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:02:25 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:25.968+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:26.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:26 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:27 compute-0 nova_compute[253661]: 2025-11-22 10:02:27.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:27 compute-0 ceph-mon[75021]: pgmap v3023: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:27 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:27.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.008 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:02:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.009 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:02:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.009 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:02:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:28.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:28 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 726 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:29 compute-0 nova_compute[253661]: 2025-11-22 10:02:29.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:29.955+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:29 compute-0 ceph-mon[75021]: pgmap v3024: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:02:29 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:29 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 726 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:30 compute-0 podman[421103]: 2025-11-22 10:02:30.422051778 +0000 UTC m=+0.094246360 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:02:30 compute-0 podman[421104]: 2025-11-22 10:02:30.457251165 +0000 UTC m=+0.129201802 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 10:02:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:30 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:31.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:31 compute-0 ceph-mon[75021]: pgmap v3025: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:31 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:32.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:32 compute-0 nova_compute[253661]: 2025-11-22 10:02:32.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:32.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:33 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:33 compute-0 podman[421141]: 2025-11-22 10:02:33.426177381 +0000 UTC m=+0.112462839 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 22 10:02:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:34.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:34 compute-0 ceph-mon[75021]: pgmap v3026: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:34 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:34 compute-0 nova_compute[253661]: 2025-11-22 10:02:34.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:35 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 731 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:35 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:35.052+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:36 compute-0 ceph-mon[75021]: pgmap v3027: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:36 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 731 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:36 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:36.055+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:37.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:37 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:37 compute-0 nova_compute[253661]: 2025-11-22 10:02:37.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:37.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:38 compute-0 ceph-mon[75021]: pgmap v3028: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:38 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:39.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:39 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:39 compute-0 nova_compute[253661]: 2025-11-22 10:02:39.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:39.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:40 compute-0 ceph-mon[75021]: pgmap v3029: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:40 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:40.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:41 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:41.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:42 compute-0 ceph-mon[75021]: pgmap v3030: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:42 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:42 compute-0 nova_compute[253661]: 2025-11-22 10:02:42.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:43.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:43 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:44.005+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 736 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:44 compute-0 ceph-mon[75021]: pgmap v3031: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:44 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:44 compute-0 nova_compute[253661]: 2025-11-22 10:02:44.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:45.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:45 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:45 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 736 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:46.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:46 compute-0 ceph-mon[75021]: pgmap v3032: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:46 compute-0 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 10:02:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:47.033+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:47 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:47 compute-0 nova_compute[253661]: 2025-11-22 10:02:47.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:48.025+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:48 compute-0 ceph-mon[75021]: pgmap v3033: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:48 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:48.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 741 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:49 compute-0 nova_compute[253661]: 2025-11-22 10:02:49.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:49 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:49 compute-0 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 741 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:50.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:50 compute-0 ceph-mon[75021]: pgmap v3034: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:50 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:51.036+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:51 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:52.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:02:52
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'images', '.rgw.root']
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:02:52 compute-0 nova_compute[253661]: 2025-11-22 10:02:52.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:52 compute-0 ceph-mon[75021]: pgmap v3035: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:52 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:02:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:02:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:53.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:53 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:54.045+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 746 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:54 compute-0 nova_compute[253661]: 2025-11-22 10:02:54.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:54 compute-0 ceph-mon[75021]: pgmap v3036: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:54 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:54 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 746 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:55.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:55 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:02:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:02:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:02:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:02:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:02:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:56.072+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:56 compute-0 ceph-mon[75021]: pgmap v3037: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:56 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:57.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:57 compute-0 nova_compute[253661]: 2025-11-22 10:02:57.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:57 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:57.996+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:58 compute-0 ceph-mon[75021]: pgmap v3038: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:58 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:02:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:59.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:02:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:02:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:02:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:02:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:02:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 751 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:02:59 compute-0 nova_compute[253661]: 2025-11-22 10:02:59.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:02:59 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:59 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:02:59 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 751 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:02:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:59.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:02:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:00 compute-0 ceph-mon[75021]: pgmap v3039: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:00 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:00.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:01 compute-0 nova_compute[253661]: 2025-11-22 10:03:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:01 compute-0 nova_compute[253661]: 2025-11-22 10:03:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:01 compute-0 podman[421167]: 2025-11-22 10:03:01.389869653 +0000 UTC m=+0.075023259 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 10:03:01 compute-0 podman[421168]: 2025-11-22 10:03:01.390921239 +0000 UTC m=+0.073666605 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 22 10:03:01 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:02.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:02 compute-0 nova_compute[253661]: 2025-11-22 10:03:02.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:02 compute-0 ceph-mon[75021]: pgmap v3040: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:02 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:03.040+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:03:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:03:03 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:04.057+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 756 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:04 compute-0 nova_compute[253661]: 2025-11-22 10:03:04.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:04 compute-0 podman[421205]: 2025-11-22 10:03:04.398753062 +0000 UTC m=+0.089236478 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 10:03:04 compute-0 ceph-mon[75021]: pgmap v3041: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:04 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:04 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 756 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:05.033+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:05 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:06.039+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:06 compute-0 nova_compute[253661]: 2025-11-22 10:03:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:06 compute-0 nova_compute[253661]: 2025-11-22 10:03:06.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:03:06 compute-0 ceph-mon[75021]: pgmap v3042: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:06 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:07.012+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:07 compute-0 nova_compute[253661]: 2025-11-22 10:03:07.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:07 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:07.990+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.275 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.277 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.278 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.278 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.279 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:03:08 compute-0 ceph-mon[75021]: pgmap v3043: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:08 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:08 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:03:08 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463145236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.902 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.903 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3530MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.904 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.904 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.973 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.973 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:03:08 compute-0 nova_compute[253661]: 2025-11-22 10:03:08.998 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:03:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:09.009+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 761 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:03:09 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569418396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.431 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.439 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.452 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.453 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:03:09 compute-0 nova_compute[253661]: 2025-11-22 10:03:09.454 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:03:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3463145236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:03:09 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:09 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 761 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:09 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/569418396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:03:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:10.020+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:10 compute-0 nova_compute[253661]: 2025-11-22 10:03:10.454 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:10 compute-0 nova_compute[253661]: 2025-11-22 10:03:10.454 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:10 compute-0 ceph-mon[75021]: pgmap v3044: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:10 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:10.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:11 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:11.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:12 compute-0 nova_compute[253661]: 2025-11-22 10:03:12.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:03:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:03:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:03:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:03:12 compute-0 ceph-mon[75021]: pgmap v3045: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:12 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:03:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:03:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:12.912+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:13 compute-0 nova_compute[253661]: 2025-11-22 10:03:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:03:13 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:13.953+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 766 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:14 compute-0 nova_compute[253661]: 2025-11-22 10:03:14.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:14 compute-0 ceph-mon[75021]: pgmap v3046: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:14 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:14 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 766 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:14.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:15 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:15.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:16 compute-0 ceph-mon[75021]: pgmap v3047: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:16 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:16.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:17 compute-0 nova_compute[253661]: 2025-11-22 10:03:17.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:17 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:17.918+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:18 compute-0 ceph-mon[75021]: pgmap v3048: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:18 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:18.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 771 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:19 compute-0 nova_compute[253661]: 2025-11-22 10:03:19.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:19 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:19 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 771 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:19.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:20 compute-0 ceph-mon[75021]: pgmap v3049: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:20 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:20.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:21 compute-0 ceph-mon[75021]: pgmap v3050: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:21 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:21.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:22 compute-0 nova_compute[253661]: 2025-11-22 10:03:22.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:22 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:22.936+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:23 compute-0 ceph-mon[75021]: pgmap v3051: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:03:23 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:23.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 776 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:24 compute-0 nova_compute[253661]: 2025-11-22 10:03:24.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:24 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:24 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 776 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:24.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:25 compute-0 sudo[421275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:25 compute-0 sudo[421275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 sudo[421275]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:25 compute-0 sudo[421300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:03:25 compute-0 sudo[421300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 sudo[421300]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:25 compute-0 sudo[421325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:25 compute-0 sudo[421325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 sudo[421325]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:25 compute-0 sudo[421350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:03:25 compute-0 sudo[421350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 ceph-mon[75021]: pgmap v3052: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:25 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:25 compute-0 sudo[421350]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:25 compute-0 sudo[421406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:25 compute-0 sudo[421406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 sudo[421406]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:25.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:25 compute-0 sudo[421431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:03:25 compute-0 sudo[421431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:25 compute-0 sudo[421431]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 sudo[421456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:26 compute-0 sudo[421456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 sudo[421456]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 sudo[421481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 10:03:26 compute-0 sudo[421481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 sudo[421481]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8a4532a2-fa9b-474a-8abc-c756609af5f2 does not exist
Nov 22 10:03:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a98ccef8-3a78-4584-9383-a84e5cd1ddbc does not exist
Nov 22 10:03:26 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e320c87d-f995-4f0d-8a0e-ad703158ca70 does not exist
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:03:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:03:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:03:26 compute-0 sudo[421525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:26 compute-0 sudo[421525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 sudo[421525]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 sudo[421550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:03:26 compute-0 sudo[421550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 sudo[421550]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 sudo[421575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:26 compute-0 sudo[421575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 sudo[421575]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:26 compute-0 sudo[421600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:03:26 compute-0 sudo[421600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:26 compute-0 podman[421665]: 2025-11-22 10:03:26.950986936 +0000 UTC m=+0.047346747 container create 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:03:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:26.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:26 compute-0 systemd[1]: Started libpod-conmon-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope.
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:26.927572209 +0000 UTC m=+0.023932060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:27.042205701 +0000 UTC m=+0.138565532 container init 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:27.053799317 +0000 UTC m=+0.150159128 container start 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:27.057218771 +0000 UTC m=+0.153578662 container attach 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 10:03:27 compute-0 vigorous_mirzakhani[421681]: 167 167
Nov 22 10:03:27 compute-0 systemd[1]: libpod-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope: Deactivated successfully.
Nov 22 10:03:27 compute-0 conmon[421681]: conmon 643eda9b2ce3a919ef9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope/container/memory.events
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:27.063737412 +0000 UTC m=+0.160097233 container died 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea5a8248e87072cfc69595d38f5857c7878cfaac047711735ab80f889f761ef-merged.mount: Deactivated successfully.
Nov 22 10:03:27 compute-0 podman[421665]: 2025-11-22 10:03:27.107597781 +0000 UTC m=+0.203957592 container remove 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 10:03:27 compute-0 systemd[1]: libpod-conmon-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope: Deactivated successfully.
Nov 22 10:03:27 compute-0 podman[421706]: 2025-11-22 10:03:27.280407935 +0000 UTC m=+0.053740194 container create cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:03:27 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:03:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:03:27 compute-0 systemd[1]: Started libpod-conmon-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope.
Nov 22 10:03:27 compute-0 podman[421706]: 2025-11-22 10:03:27.25419489 +0000 UTC m=+0.027527139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:27 compute-0 podman[421706]: 2025-11-22 10:03:27.377099775 +0000 UTC m=+0.150432014 container init cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 10:03:27 compute-0 podman[421706]: 2025-11-22 10:03:27.384542559 +0000 UTC m=+0.157874838 container start cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:03:27 compute-0 podman[421706]: 2025-11-22 10:03:27.38863517 +0000 UTC m=+0.161967499 container attach cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:03:27 compute-0 nova_compute[253661]: 2025-11-22 10:03:27.414 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:27.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.010 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:03:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:03:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:03:28 compute-0 ceph-mon[75021]: pgmap v3053: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:28 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:28 compute-0 stupefied_grothendieck[421722]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:03:28 compute-0 stupefied_grothendieck[421722]: --> relative data size: 1.0
Nov 22 10:03:28 compute-0 stupefied_grothendieck[421722]: --> All data devices are unavailable
Nov 22 10:03:28 compute-0 systemd[1]: libpod-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope: Deactivated successfully.
Nov 22 10:03:28 compute-0 podman[421706]: 2025-11-22 10:03:28.417186169 +0000 UTC m=+1.190518388 container died cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad-merged.mount: Deactivated successfully.
Nov 22 10:03:28 compute-0 podman[421706]: 2025-11-22 10:03:28.486765642 +0000 UTC m=+1.260097861 container remove cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:03:28 compute-0 systemd[1]: libpod-conmon-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope: Deactivated successfully.
Nov 22 10:03:28 compute-0 sudo[421600]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:28 compute-0 sudo[421764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:28 compute-0 sudo[421764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:28 compute-0 sudo[421764]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:28 compute-0 sudo[421789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:03:28 compute-0 sudo[421789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:28 compute-0 sudo[421789]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:28 compute-0 sudo[421814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:28 compute-0 sudo[421814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:28 compute-0 sudo[421814]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:28 compute-0 sudo[421839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:03:28 compute-0 sudo[421839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:28.944+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.295615723 +0000 UTC m=+0.049792437 container create ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 10:03:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 781 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:29 compute-0 systemd[1]: Started libpod-conmon-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope.
Nov 22 10:03:29 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:29 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 781 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:29 compute-0 nova_compute[253661]: 2025-11-22 10:03:29.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.275451906 +0000 UTC m=+0.029628630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.386336486 +0000 UTC m=+0.140513220 container init ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.394481206 +0000 UTC m=+0.148657920 container start ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.401820178 +0000 UTC m=+0.155996942 container attach ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:03:29 compute-0 interesting_gates[421922]: 167 167
Nov 22 10:03:29 compute-0 systemd[1]: libpod-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope: Deactivated successfully.
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.404200426 +0000 UTC m=+0.158377180 container died ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d3c362f65c35a352fe8a893e17b5faa2fca1378f6e9bbfd48ac63cbdad8c242-merged.mount: Deactivated successfully.
Nov 22 10:03:29 compute-0 podman[421905]: 2025-11-22 10:03:29.451130391 +0000 UTC m=+0.205307125 container remove ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 10:03:29 compute-0 systemd[1]: libpod-conmon-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope: Deactivated successfully.
Nov 22 10:03:29 compute-0 podman[421946]: 2025-11-22 10:03:29.651882463 +0000 UTC m=+0.052295298 container create 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 10:03:29 compute-0 systemd[1]: Started libpod-conmon-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope.
Nov 22 10:03:29 compute-0 podman[421946]: 2025-11-22 10:03:29.631373768 +0000 UTC m=+0.031786633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:29 compute-0 podman[421946]: 2025-11-22 10:03:29.773725852 +0000 UTC m=+0.174138707 container init 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 10:03:29 compute-0 podman[421946]: 2025-11-22 10:03:29.786965229 +0000 UTC m=+0.187378094 container start 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:03:29 compute-0 podman[421946]: 2025-11-22 10:03:29.79234365 +0000 UTC m=+0.192756575 container attach 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 10:03:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:29.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:30 compute-0 ceph-mon[75021]: pgmap v3054: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:03:30 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:30 compute-0 naughty_elion[421964]: {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     "0": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "devices": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "/dev/loop3"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             ],
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_name": "ceph_lv0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_size": "21470642176",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "name": "ceph_lv0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "tags": {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_name": "ceph",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.crush_device_class": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.encrypted": "0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_id": "0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.vdo": "0"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             },
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "vg_name": "ceph_vg0"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         }
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     ],
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     "1": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "devices": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "/dev/loop4"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             ],
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_name": "ceph_lv1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_size": "21470642176",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "name": "ceph_lv1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "tags": {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_name": "ceph",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.crush_device_class": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.encrypted": "0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_id": "1",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.vdo": "0"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             },
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "vg_name": "ceph_vg1"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         }
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     ],
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     "2": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "devices": [
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "/dev/loop5"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             ],
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_name": "ceph_lv2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_size": "21470642176",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "name": "ceph_lv2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "tags": {
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.cluster_name": "ceph",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.crush_device_class": "",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.encrypted": "0",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osd_id": "2",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:                 "ceph.vdo": "0"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             },
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "type": "block",
Nov 22 10:03:30 compute-0 naughty_elion[421964]:             "vg_name": "ceph_vg2"
Nov 22 10:03:30 compute-0 naughty_elion[421964]:         }
Nov 22 10:03:30 compute-0 naughty_elion[421964]:     ]
Nov 22 10:03:30 compute-0 naughty_elion[421964]: }
Nov 22 10:03:30 compute-0 systemd[1]: libpod-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope: Deactivated successfully.
Nov 22 10:03:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:30 compute-0 podman[421973]: 2025-11-22 10:03:30.73274077 +0000 UTC m=+0.030767888 container died 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11-merged.mount: Deactivated successfully.
Nov 22 10:03:30 compute-0 podman[421973]: 2025-11-22 10:03:30.796833948 +0000 UTC m=+0.094861046 container remove 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:03:30 compute-0 systemd[1]: libpod-conmon-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope: Deactivated successfully.
Nov 22 10:03:30 compute-0 sudo[421839]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:30 compute-0 sudo[421988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:30.930+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:30 compute-0 sudo[421988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:30 compute-0 sudo[421988]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:31 compute-0 sudo[422013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:03:31 compute-0 sudo[422013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:31 compute-0 sudo[422013]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:31 compute-0 sudo[422038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:31 compute-0 sudo[422038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:31 compute-0 sudo[422038]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:31 compute-0 sudo[422063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:03:31 compute-0 sudo[422063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:31 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.534952938 +0000 UTC m=+0.044475296 container create 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 10:03:31 compute-0 systemd[1]: Started libpod-conmon-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope.
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.514607047 +0000 UTC m=+0.024129405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.636025396 +0000 UTC m=+0.145547764 container init 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.645807467 +0000 UTC m=+0.155329805 container start 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:03:31 compute-0 kind_bell[422145]: 167 167
Nov 22 10:03:31 compute-0 systemd[1]: libpod-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope: Deactivated successfully.
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.653330353 +0000 UTC m=+0.162852691 container attach 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.653831325 +0000 UTC m=+0.163353653 container died 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 10:03:31 compute-0 podman[422141]: 2025-11-22 10:03:31.662800535 +0000 UTC m=+0.067167394 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 10:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb6480b8b581856f94dbcfcbe8e8cb29c8ac18356713e74729b5972cae7e3ad-merged.mount: Deactivated successfully.
Nov 22 10:03:31 compute-0 podman[422127]: 2025-11-22 10:03:31.696934256 +0000 UTC m=+0.206456594 container remove 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 10:03:31 compute-0 podman[422144]: 2025-11-22 10:03:31.699132619 +0000 UTC m=+0.103235661 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 10:03:31 compute-0 systemd[1]: libpod-conmon-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope: Deactivated successfully.
Nov 22 10:03:31 compute-0 podman[422207]: 2025-11-22 10:03:31.903618523 +0000 UTC m=+0.068591359 container create e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 10:03:31 compute-0 systemd[1]: Started libpod-conmon-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope.
Nov 22 10:03:31 compute-0 podman[422207]: 2025-11-22 10:03:31.879862548 +0000 UTC m=+0.044835504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:03:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:31.975+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:03:32 compute-0 podman[422207]: 2025-11-22 10:03:32.006698181 +0000 UTC m=+0.171671097 container init e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:03:32 compute-0 podman[422207]: 2025-11-22 10:03:32.018129042 +0000 UTC m=+0.183101878 container start e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:03:32 compute-0 podman[422207]: 2025-11-22 10:03:32.022459509 +0000 UTC m=+0.187432335 container attach e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:03:32 compute-0 ceph-mon[75021]: pgmap v3055: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:32 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:32 compute-0 nova_compute[253661]: 2025-11-22 10:03:32.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:32.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:33 compute-0 blissful_curran[422224]: {
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_id": 1,
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "type": "bluestore"
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     },
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_id": 0,
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "type": "bluestore"
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     },
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_id": 2,
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:03:33 compute-0 blissful_curran[422224]:         "type": "bluestore"
Nov 22 10:03:33 compute-0 blissful_curran[422224]:     }
Nov 22 10:03:33 compute-0 blissful_curran[422224]: }
Nov 22 10:03:33 compute-0 systemd[1]: libpod-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Deactivated successfully.
Nov 22 10:03:33 compute-0 systemd[1]: libpod-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Consumed 1.238s CPU time.
Nov 22 10:03:33 compute-0 podman[422257]: 2025-11-22 10:03:33.321299392 +0000 UTC m=+0.044507346 container died e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 10:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d-merged.mount: Deactivated successfully.
Nov 22 10:03:33 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:33 compute-0 podman[422257]: 2025-11-22 10:03:33.385270437 +0000 UTC m=+0.108478351 container remove e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 10:03:33 compute-0 systemd[1]: libpod-conmon-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Deactivated successfully.
Nov 22 10:03:33 compute-0 sudo[422063]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:03:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:03:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:33 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9a12728d-ffef-486e-9701-27a8f7f1a31a does not exist
Nov 22 10:03:33 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 70abbe0c-4fd9-4314-9266-d16f03822086 does not exist
Nov 22 10:03:33 compute-0 sudo[422272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:03:33 compute-0 sudo[422272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:33 compute-0 sudo[422272]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:33 compute-0 sudo[422297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:03:33 compute-0 sudo[422297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:03:33 compute-0 sudo[422297]: pam_unix(sudo:session): session closed for user root
Nov 22 10:03:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:33.893+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 786 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:34 compute-0 nova_compute[253661]: 2025-11-22 10:03:34.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:34 compute-0 ceph-mon[75021]: pgmap v3056: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:34 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:03:34 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 786 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:34.864+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:35 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:35 compute-0 podman[422322]: 2025-11-22 10:03:35.412885161 +0000 UTC m=+0.098111286 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:03:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:35.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:36 compute-0 ceph-mon[75021]: pgmap v3057: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:03:36 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:36.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:37 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:37 compute-0 nova_compute[253661]: 2025-11-22 10:03:37.424 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:37.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:38 compute-0 ceph-mon[75021]: pgmap v3058: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:38 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:38.885+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 791 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:39 compute-0 nova_compute[253661]: 2025-11-22 10:03:39.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:39 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:39 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 791 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:39.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:40 compute-0 ceph-mon[75021]: pgmap v3059: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:40 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:40.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:41 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:41.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:42 compute-0 nova_compute[253661]: 2025-11-22 10:03:42.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:42 compute-0 ceph-mon[75021]: pgmap v3060: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:42 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:42.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:43 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:43 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:43.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 796 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:44 compute-0 nova_compute[253661]: 2025-11-22 10:03:44.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:44 compute-0 ceph-mon[75021]: pgmap v3061: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:44 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:44 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 796 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:44.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:45 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:45.938+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:46 compute-0 ceph-mon[75021]: pgmap v3062: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:46 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:46.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:47 compute-0 nova_compute[253661]: 2025-11-22 10:03:47.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:47 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:03:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:47.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:48 compute-0 ceph-mon[75021]: pgmap v3063: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:48 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:03:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.92 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 311 writes, 649 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 311 writes, 148 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:03:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:48.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 801 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:49 compute-0 nova_compute[253661]: 2025-11-22 10:03:49.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:49 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:49 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 801 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:49.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:50 compute-0 ceph-mon[75021]: pgmap v3064: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:50 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:50.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:51 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:51.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:03:52
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:03:52 compute-0 nova_compute[253661]: 2025-11-22 10:03:52.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:52 compute-0 ceph-mon[75021]: pgmap v3065: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:52 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:03:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:03:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:52.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:53 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:54.013+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 806 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:54 compute-0 nova_compute[253661]: 2025-11-22 10:03:54.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:54 compute-0 ceph-mon[75021]: pgmap v3066: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:54 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:54 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 806 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:55.021+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:03:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 178K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.83 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 560 writes, 1092 keys, 560 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                           Interval WAL: 560 writes, 273 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:03:55 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:03:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:03:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:03:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:03:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:03:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:56.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:56 compute-0 ceph-mon[75021]: pgmap v3067: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:56 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:57.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:57 compute-0 nova_compute[253661]: 2025-11-22 10:03:57.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:57 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:58.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:58 compute-0 ceph-mon[75021]: pgmap v3068: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:58 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:03:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:59.005+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:03:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:03:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:03:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:03:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:03:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 811 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:03:59 compute-0 nova_compute[253661]: 2025-11-22 10:03:59.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:03:59 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:03:59 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 811 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:03:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:59.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:03:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:04:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 417 writes, 910 keys, 417 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 417 writes, 202 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:04:00 compute-0 ceph-mon[75021]: pgmap v3069: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:00 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:00.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:01 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:01.969+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:02 compute-0 podman[422348]: 2025-11-22 10:04:02.409813909 +0000 UTC m=+0.093023361 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 10:04:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 10:04:02 compute-0 podman[422349]: 2025-11-22 10:04:02.435751248 +0000 UTC m=+0.117339340 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:04:02 compute-0 nova_compute[253661]: 2025-11-22 10:04:02.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:02 compute-0 ceph-mon[75021]: pgmap v3070: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:02 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:02.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:04:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:04:03 compute-0 nova_compute[253661]: 2025-11-22 10:04:03.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:03 compute-0 nova_compute[253661]: 2025-11-22 10:04:03.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:03 compute-0 ceph-mon[75021]: pgmap v3071: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:03 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.636494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843636547, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 251, "total_data_size": 2348220, "memory_usage": 2394816, "flush_reason": "Manual Compaction"}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843648467, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2288878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64584, "largest_seqno": 66468, "table_properties": {"data_size": 2281141, "index_size": 4230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20284, "raw_average_key_size": 21, "raw_value_size": 2263733, "raw_average_value_size": 2360, "num_data_blocks": 186, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805704, "oldest_key_time": 1763805704, "file_creation_time": 1763805843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 12069 microseconds, and 6250 cpu microseconds.
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.648560) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2288878 bytes OK
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.648598) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650606) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650637) EVENT_LOG_v1 {"time_micros": 1763805843650626, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2339910, prev total WAL file size 2339910, number of live WAL files 2.
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.652106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2235KB)], [152(9833KB)]
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843652160, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 12358064, "oldest_snapshot_seqno": -1}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9183 keys, 10949903 bytes, temperature: kUnknown
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843732378, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10949903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10891656, "index_size": 34198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 242417, "raw_average_key_size": 26, "raw_value_size": 10730543, "raw_average_value_size": 1168, "num_data_blocks": 1310, "num_entries": 9183, "num_filter_entries": 9183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.732696) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10949903 bytes
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.734310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.9 rd, 136.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.2) write-amplify(4.8) OK, records in: 9697, records dropped: 514 output_compression: NoCompression
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.734367) EVENT_LOG_v1 {"time_micros": 1763805843734359, "job": 94, "event": "compaction_finished", "compaction_time_micros": 80314, "compaction_time_cpu_micros": 58069, "output_level": 6, "num_output_files": 1, "total_output_size": 10949903, "num_input_records": 9697, "num_output_records": 9183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843734882, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843736745, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.652003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:04:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:03.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 819 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:04 compute-0 nova_compute[253661]: 2025-11-22 10:04:04.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:04 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:04 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 819 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:04.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:05 compute-0 nova_compute[253661]: 2025-11-22 10:04:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:05 compute-0 nova_compute[253661]: 2025-11-22 10:04:05.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:04:05 compute-0 nova_compute[253661]: 2025-11-22 10:04:05.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:04:05 compute-0 nova_compute[253661]: 2025-11-22 10:04:05.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:04:05 compute-0 ceph-mon[75021]: pgmap v3072: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:05 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:06.010+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:06 compute-0 nova_compute[253661]: 2025-11-22 10:04:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:06 compute-0 podman[422387]: 2025-11-22 10:04:06.471558566 +0000 UTC m=+0.142855138 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:04:06 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:07.029+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:07 compute-0 nova_compute[253661]: 2025-11-22 10:04:07.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:07 compute-0 ceph-mon[75021]: pgmap v3073: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:07 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:08.071+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:08 compute-0 nova_compute[253661]: 2025-11-22 10:04:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:08 compute-0 nova_compute[253661]: 2025-11-22 10:04:08.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:04:08 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:09.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 824 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:09 compute-0 nova_compute[253661]: 2025-11-22 10:04:09.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:09 compute-0 ceph-mon[75021]: pgmap v3074: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:09 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:09 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 824 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:10.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:04:10 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:04:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689085572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:04:10 compute-0 nova_compute[253661]: 2025-11-22 10:04:10.795 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.036 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.039 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.040 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:04:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:11.110+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.131 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.158 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:04:11 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:04:11 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210598809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.642 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.651 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.670 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.672 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:04:11 compute-0 nova_compute[253661]: 2025-11-22 10:04:11.672 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:04:11 compute-0 ceph-mon[75021]: pgmap v3075: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/689085572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:04:11 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:11 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2210598809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:04:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:12.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:12 compute-0 nova_compute[253661]: 2025-11-22 10:04:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:12 compute-0 nova_compute[253661]: 2025-11-22 10:04:12.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:04:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:04:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:04:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:04:12 compute-0 nova_compute[253661]: 2025-11-22 10:04:12.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:12 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:04:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:04:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:13.129+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:13 compute-0 ceph-mon[75021]: pgmap v3076: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:13 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:14.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 829 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:14 compute-0 nova_compute[253661]: 2025-11-22 10:04:14.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:14 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:14 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 829 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:15.118+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:15 compute-0 nova_compute[253661]: 2025-11-22 10:04:15.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:15 compute-0 ceph-mon[75021]: pgmap v3077: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:15 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:16.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:16 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:17.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:17 compute-0 nova_compute[253661]: 2025-11-22 10:04:17.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:17 compute-0 ceph-mon[75021]: pgmap v3078: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:17 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:18.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:18 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:19.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 834 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:19 compute-0 nova_compute[253661]: 2025-11-22 10:04:19.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:19 compute-0 ceph-mon[75021]: pgmap v3079: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:19 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:19 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 834 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:20.058+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:20 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:04:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:21.099+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:21 compute-0 nova_compute[253661]: 2025-11-22 10:04:21.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:21 compute-0 ceph-mon[75021]: pgmap v3080: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:21 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:22.073+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:22 compute-0 nova_compute[253661]: 2025-11-22 10:04:22.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:22 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:23.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:23 compute-0 ceph-mon[75021]: pgmap v3081: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:23 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:24.067+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 839 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:24 compute-0 nova_compute[253661]: 2025-11-22 10:04:24.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:24 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:24 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 839 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:25.078+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:25 compute-0 ceph-mon[75021]: pgmap v3082: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:25 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:26.108+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:26 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:27.106+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:27 compute-0 nova_compute[253661]: 2025-11-22 10:04:27.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:27 compute-0 nova_compute[253661]: 2025-11-22 10:04:27.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 10:04:27 compute-0 nova_compute[253661]: 2025-11-22 10:04:27.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:27 compute-0 ceph-mon[75021]: pgmap v3083: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:27 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.012 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:04:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.012 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:04:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:04:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:28.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:28 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:29.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 844 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:29 compute-0 nova_compute[253661]: 2025-11-22 10:04:29.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:29 compute-0 ceph-mon[75021]: pgmap v3084: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:29 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:29 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 844 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:30.074+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:30 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:31.055+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:31 compute-0 ceph-mon[75021]: pgmap v3085: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:31 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:32.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:32 compute-0 nova_compute[253661]: 2025-11-22 10:04:32.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:32 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:32.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:33 compute-0 podman[422457]: 2025-11-22 10:04:33.381086582 +0000 UTC m=+0.071716946 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 10:04:33 compute-0 podman[422458]: 2025-11-22 10:04:33.381086592 +0000 UTC m=+0.063384631 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 10:04:33 compute-0 sudo[422497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:33 compute-0 sudo[422497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:33 compute-0 sudo[422497]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:33 compute-0 sudo[422522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:04:33 compute-0 sudo[422522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:33 compute-0 sudo[422522]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:33 compute-0 sudo[422547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:33 compute-0 sudo[422547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:33 compute-0 sudo[422547]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:33 compute-0 ceph-mon[75021]: pgmap v3086: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:33 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:33 compute-0 sudo[422572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:04:33 compute-0 sudo[422572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:34.019+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 849 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:34 compute-0 nova_compute[253661]: 2025-11-22 10:04:34.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:34 compute-0 sudo[422572]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 17eb6a2e-09f3-4315-8e62-36770a79d6b9 does not exist
Nov 22 10:04:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 20afb727-dfa4-4562-ae27-67f359ce1a11 does not exist
Nov 22 10:04:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8a45c9ae-f1d5-4960-9731-95866f1d9d11 does not exist
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:04:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:04:34 compute-0 sudo[422628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:34 compute-0 sudo[422628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:34 compute-0 sudo[422628]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:34 compute-0 sudo[422653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:04:34 compute-0 sudo[422653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:34 compute-0 sudo[422653]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:34 compute-0 sudo[422678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:34 compute-0 sudo[422678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:34 compute-0 sudo[422678]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:34 compute-0 sudo[422703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:04:34 compute-0 sudo[422703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:34 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:34 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 849 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:04:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:34.980+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.301586249 +0000 UTC m=+0.059758002 container create e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 22 10:04:35 compute-0 systemd[1]: Started libpod-conmon-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope.
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.273974469 +0000 UTC m=+0.032146322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.401510349 +0000 UTC m=+0.159682152 container init e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.415500993 +0000 UTC m=+0.173672766 container start e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.419375819 +0000 UTC m=+0.177547622 container attach e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:35 compute-0 cool_williams[422783]: 167 167
Nov 22 10:04:35 compute-0 systemd[1]: libpod-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope: Deactivated successfully.
Nov 22 10:04:35 compute-0 conmon[422783]: conmon e811f682b76cb0777c96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope/container/memory.events
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.425233633 +0000 UTC m=+0.183405386 container died e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef8774ea0fa5d734d03cc0470e5330f51fa5255c77df897726ea4253bd6a5d5-merged.mount: Deactivated successfully.
Nov 22 10:04:35 compute-0 podman[422767]: 2025-11-22 10:04:35.469150084 +0000 UTC m=+0.227321857 container remove e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:35 compute-0 systemd[1]: libpod-conmon-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope: Deactivated successfully.
Nov 22 10:04:35 compute-0 podman[422806]: 2025-11-22 10:04:35.639425275 +0000 UTC m=+0.046425393 container create da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:04:35 compute-0 systemd[1]: Started libpod-conmon-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope.
Nov 22 10:04:35 compute-0 podman[422806]: 2025-11-22 10:04:35.620605112 +0000 UTC m=+0.027605260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:35 compute-0 podman[422806]: 2025-11-22 10:04:35.756199691 +0000 UTC m=+0.163199889 container init da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 10:04:35 compute-0 podman[422806]: 2025-11-22 10:04:35.770053892 +0000 UTC m=+0.177054050 container start da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 10:04:35 compute-0 podman[422806]: 2025-11-22 10:04:35.776628483 +0000 UTC m=+0.183628641 container attach da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:04:35 compute-0 ceph-mon[75021]: pgmap v3087: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:04:35 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:35.993+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:36 compute-0 nice_rubin[422823]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:04:36 compute-0 nice_rubin[422823]: --> relative data size: 1.0
Nov 22 10:04:36 compute-0 nice_rubin[422823]: --> All data devices are unavailable
Nov 22 10:04:36 compute-0 systemd[1]: libpod-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Deactivated successfully.
Nov 22 10:04:36 compute-0 podman[422806]: 2025-11-22 10:04:36.907712007 +0000 UTC m=+1.314712135 container died da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:04:36 compute-0 systemd[1]: libpod-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Consumed 1.097s CPU time.
Nov 22 10:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3-merged.mount: Deactivated successfully.
Nov 22 10:04:36 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:36 compute-0 podman[422806]: 2025-11-22 10:04:36.97851746 +0000 UTC m=+1.385517568 container remove da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:04:36 compute-0 systemd[1]: libpod-conmon-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Deactivated successfully.
Nov 22 10:04:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:37.000+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:37 compute-0 sudo[422703]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:37 compute-0 podman[422853]: 2025-11-22 10:04:37.064740732 +0000 UTC m=+0.117117823 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 10:04:37 compute-0 sudo[422886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:37 compute-0 sudo[422886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:37 compute-0 sudo[422886]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:37 compute-0 sudo[422914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:04:37 compute-0 sudo[422914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:37 compute-0 sudo[422914]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:37 compute-0 sudo[422939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:37 compute-0 sudo[422939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:37 compute-0 sudo[422939]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:37 compute-0 sudo[422964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:04:37 compute-0 sudo[422964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:37 compute-0 nova_compute[253661]: 2025-11-22 10:04:37.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.813698999 +0000 UTC m=+0.076801532 container create 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 10:04:37 compute-0 systemd[1]: Started libpod-conmon-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope.
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.783921286 +0000 UTC m=+0.047023679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.909854006 +0000 UTC m=+0.172956319 container init 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.919649258 +0000 UTC m=+0.182751571 container start 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.923740128 +0000 UTC m=+0.186842461 container attach 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:04:37 compute-0 quizzical_allen[423045]: 167 167
Nov 22 10:04:37 compute-0 systemd[1]: libpod-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope: Deactivated successfully.
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.926942477 +0000 UTC m=+0.190044810 container died 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 10:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a804b735849b59e60f48e31111c7d97947bad9c1f5385ff84b0ecea8eb7dbcc2-merged.mount: Deactivated successfully.
Nov 22 10:04:37 compute-0 podman[423029]: 2025-11-22 10:04:37.967394693 +0000 UTC m=+0.230497006 container remove 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:04:37 compute-0 systemd[1]: libpod-conmon-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope: Deactivated successfully.
Nov 22 10:04:37 compute-0 ceph-mon[75021]: pgmap v3088: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:37 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:38.037+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:38 compute-0 podman[423068]: 2025-11-22 10:04:38.140385842 +0000 UTC m=+0.048431464 container create 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:04:38 compute-0 systemd[1]: Started libpod-conmon-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope.
Nov 22 10:04:38 compute-0 podman[423068]: 2025-11-22 10:04:38.118364709 +0000 UTC m=+0.026410361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:38 compute-0 podman[423068]: 2025-11-22 10:04:38.268021673 +0000 UTC m=+0.176067305 container init 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:04:38 compute-0 podman[423068]: 2025-11-22 10:04:38.277822235 +0000 UTC m=+0.185867887 container start 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 10:04:38 compute-0 podman[423068]: 2025-11-22 10:04:38.311273408 +0000 UTC m=+0.219319070 container attach 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 10:04:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:39 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:39.068+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:39 compute-0 amazing_wing[423084]: {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     "0": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "devices": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "/dev/loop3"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             ],
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_name": "ceph_lv0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_size": "21470642176",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "name": "ceph_lv0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "tags": {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_name": "ceph",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.crush_device_class": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.encrypted": "0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_id": "0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.vdo": "0"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             },
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "vg_name": "ceph_vg0"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         }
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     ],
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     "1": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "devices": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "/dev/loop4"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             ],
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_name": "ceph_lv1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_size": "21470642176",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "name": "ceph_lv1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "tags": {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_name": "ceph",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.crush_device_class": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.encrypted": "0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_id": "1",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.vdo": "0"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             },
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "vg_name": "ceph_vg1"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         }
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     ],
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     "2": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "devices": [
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "/dev/loop5"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             ],
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_name": "ceph_lv2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_size": "21470642176",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "name": "ceph_lv2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "tags": {
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.cluster_name": "ceph",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.crush_device_class": "",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.encrypted": "0",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osd_id": "2",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:                 "ceph.vdo": "0"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             },
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "type": "block",
Nov 22 10:04:39 compute-0 amazing_wing[423084]:             "vg_name": "ceph_vg2"
Nov 22 10:04:39 compute-0 amazing_wing[423084]:         }
Nov 22 10:04:39 compute-0 amazing_wing[423084]:     ]
Nov 22 10:04:39 compute-0 amazing_wing[423084]: }
Nov 22 10:04:39 compute-0 systemd[1]: libpod-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope: Deactivated successfully.
Nov 22 10:04:39 compute-0 conmon[423084]: conmon 1afdb66dd2f5d7aa0f02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope/container/memory.events
Nov 22 10:04:39 compute-0 podman[423068]: 2025-11-22 10:04:39.148265922 +0000 UTC m=+1.056311544 container died 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f-merged.mount: Deactivated successfully.
Nov 22 10:04:39 compute-0 podman[423068]: 2025-11-22 10:04:39.226920369 +0000 UTC m=+1.134966001 container remove 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:39 compute-0 systemd[1]: libpod-conmon-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope: Deactivated successfully.
Nov 22 10:04:39 compute-0 sudo[422964]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 854 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:39 compute-0 sudo[423105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:39 compute-0 sudo[423105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:39 compute-0 sudo[423105]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:39 compute-0 sudo[423130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:04:39 compute-0 sudo[423130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:39 compute-0 sudo[423130]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:39 compute-0 nova_compute[253661]: 2025-11-22 10:04:39.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:39 compute-0 sudo[423155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:39 compute-0 sudo[423155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:39 compute-0 sudo[423155]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:39 compute-0 sudo[423180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:04:39 compute-0 sudo[423180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:39 compute-0 podman[423246]: 2025-11-22 10:04:39.985933563 +0000 UTC m=+0.047691675 container create 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 10:04:40 compute-0 ceph-mon[75021]: pgmap v3089: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:40 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 854 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:40 compute-0 systemd[1]: Started libpod-conmon-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope.
Nov 22 10:04:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:40.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:39.96552959 +0000 UTC m=+0.027287742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:40.079413243 +0000 UTC m=+0.141171385 container init 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:40.086844697 +0000 UTC m=+0.148602819 container start 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:40.09066609 +0000 UTC m=+0.152424192 container attach 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 10:04:40 compute-0 elastic_swartz[423263]: 167 167
Nov 22 10:04:40 compute-0 systemd[1]: libpod-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope: Deactivated successfully.
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:40.094976657 +0000 UTC m=+0.156734759 container died 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0884ddbff43dff30790bc9e701e2948ad498573846f40b04021b8a6c4f01d92-merged.mount: Deactivated successfully.
Nov 22 10:04:40 compute-0 podman[423246]: 2025-11-22 10:04:40.133748021 +0000 UTC m=+0.195506123 container remove 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:04:40 compute-0 systemd[1]: libpod-conmon-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope: Deactivated successfully.
Nov 22 10:04:40 compute-0 podman[423286]: 2025-11-22 10:04:40.328798303 +0000 UTC m=+0.059804993 container create 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:04:40 compute-0 systemd[1]: Started libpod-conmon-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope.
Nov 22 10:04:40 compute-0 podman[423286]: 2025-11-22 10:04:40.296914308 +0000 UTC m=+0.027921108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:04:40 compute-0 podman[423286]: 2025-11-22 10:04:40.43634552 +0000 UTC m=+0.167352220 container init 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:04:40 compute-0 podman[423286]: 2025-11-22 10:04:40.450423476 +0000 UTC m=+0.181430176 container start 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:04:40 compute-0 podman[423286]: 2025-11-22 10:04:40.453209775 +0000 UTC m=+0.184216465 container attach 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 10:04:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:40.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:41 compute-0 kind_almeida[423302]: {
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_id": 1,
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "type": "bluestore"
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     },
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_id": 0,
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "type": "bluestore"
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     },
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_id": 2,
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:04:41 compute-0 kind_almeida[423302]:         "type": "bluestore"
Nov 22 10:04:41 compute-0 kind_almeida[423302]:     }
Nov 22 10:04:41 compute-0 kind_almeida[423302]: }
Nov 22 10:04:41 compute-0 systemd[1]: libpod-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Deactivated successfully.
Nov 22 10:04:41 compute-0 systemd[1]: libpod-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Consumed 1.055s CPU time.
Nov 22 10:04:41 compute-0 podman[423286]: 2025-11-22 10:04:41.498929957 +0000 UTC m=+1.229936667 container died 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a-merged.mount: Deactivated successfully.
Nov 22 10:04:41 compute-0 podman[423286]: 2025-11-22 10:04:41.566171513 +0000 UTC m=+1.297178203 container remove 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 10:04:41 compute-0 systemd[1]: libpod-conmon-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Deactivated successfully.
Nov 22 10:04:41 compute-0 sudo[423180]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:04:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:04:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2dff942a-1001-44b6-839c-65288f05eeab does not exist
Nov 22 10:04:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 13d8cda4-786d-4104-bf70-d582f771ac3d does not exist
Nov 22 10:04:41 compute-0 sudo[423349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:04:41 compute-0 sudo[423349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:41 compute-0 sudo[423349]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:41 compute-0 sudo[423374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:04:41 compute-0 sudo[423374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:04:41 compute-0 sudo[423374]: pam_unix(sudo:session): session closed for user root
Nov 22 10:04:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:42.001+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:42 compute-0 ceph-mon[75021]: pgmap v3090: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:04:42 compute-0 nova_compute[253661]: 2025-11-22 10:04:42.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:43.047+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:44 compute-0 ceph-mon[75021]: pgmap v3091: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:44.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 859 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:44 compute-0 nova_compute[253661]: 2025-11-22 10:04:44.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:45 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 859 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:45.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:46 compute-0 ceph-mon[75021]: pgmap v3092: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:46.152+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:47.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:47 compute-0 nova_compute[253661]: 2025-11-22 10:04:47.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:48.088+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:48 compute-0 ceph-mon[75021]: pgmap v3093: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:49.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 865 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:49 compute-0 nova_compute[253661]: 2025-11-22 10:04:49.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:50.108+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:50 compute-0 ceph-mon[75021]: pgmap v3094: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:50 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 865 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:04:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:51.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:52 compute-0 ceph-mon[75021]: pgmap v3095: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:52 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:52.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:04:52
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:04:52 compute-0 nova_compute[253661]: 2025-11-22 10:04:52.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:04:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:04:53 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:53.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:54 compute-0 ceph-mon[75021]: pgmap v3096: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:54 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:54.233+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:54 compute-0 nova_compute[253661]: 2025-11-22 10:04:54.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:55 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:55 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:55.202+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:55 compute-0 nova_compute[253661]: 2025-11-22 10:04:55.339 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:04:55 compute-0 nova_compute[253661]: 2025-11-22 10:04:55.340 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 10:04:55 compute-0 nova_compute[253661]: 2025-11-22 10:04:55.354 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 10:04:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:04:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:04:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:04:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:04:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:04:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:56.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:56 compute-0 ceph-mon[75021]: pgmap v3097: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:56 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:57.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:57 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:57 compute-0 nova_compute[253661]: 2025-11-22 10:04:57.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:04:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:58.139+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:58 compute-0 ceph-mon[75021]: pgmap v3098: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:58 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:04:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:59.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:04:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:59 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:04:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:04:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:04:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:04:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:04:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:04:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:04:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:04:59 compute-0 nova_compute[253661]: 2025-11-22 10:04:59.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:00.164+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:00 compute-0 ceph-mon[75021]: pgmap v3099: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:00 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:00 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:01.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:01 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:02 compute-0 ceph-mon[75021]: pgmap v3100: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:02 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:02.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:02 compute-0 nova_compute[253661]: 2025-11-22 10:05:02.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:03.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:05:03 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:05:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:05:03 compute-0 nova_compute[253661]: 2025-11-22 10:05:03.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:04.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:04 compute-0 nova_compute[253661]: 2025-11-22 10:05:04.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:04 compute-0 ceph-mon[75021]: pgmap v3101: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:04 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:04 compute-0 sshd-session[423399]: Invalid user sol from 92.118.39.92 port 56538
Nov 22 10:05:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:04 compute-0 podman[423401]: 2025-11-22 10:05:04.391795119 +0000 UTC m=+0.075707975 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 10:05:04 compute-0 podman[423402]: 2025-11-22 10:05:04.397828777 +0000 UTC m=+0.082249796 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 10:05:04 compute-0 sshd-session[423399]: Connection closed by invalid user sol 92.118.39.92 port 56538 [preauth]
Nov 22 10:05:04 compute-0 nova_compute[253661]: 2025-11-22 10:05:04.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:05.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:05 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:05 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:06.221+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:06 compute-0 ceph-mon[75021]: pgmap v3102: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:06 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:05:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:07.237+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:07 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:07 compute-0 podman[423441]: 2025-11-22 10:05:07.418155678 +0000 UTC m=+0.110884701 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 10:05:07 compute-0 nova_compute[253661]: 2025-11-22 10:05:07.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:08 compute-0 nova_compute[253661]: 2025-11-22 10:05:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:08 compute-0 nova_compute[253661]: 2025-11-22 10:05:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:05:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:08.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:08 compute-0 ceph-mon[75021]: pgmap v3103: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:08 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:09.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:09 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 885 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:09 compute-0 nova_compute[253661]: 2025-11-22 10:05:09.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:10 compute-0 ceph-mon[75021]: pgmap v3104: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:10 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:10 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 885 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:10.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:11 compute-0 nova_compute[253661]: 2025-11-22 10:05:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:11.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:11 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.264 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:05:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:12.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:12 compute-0 ceph-mon[75021]: pgmap v3105: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:12 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:05:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:05:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:05:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:05:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173368490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.791 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.941 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.943 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:05:12 compute-0 nova_compute[253661]: 2025-11-22 10:05:12.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.004 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.030 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:05:13 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:05:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:05:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1173368490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:05:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:13.320+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:05:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425683252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.481 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.490 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.513 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.515 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:05:13 compute-0 nova_compute[253661]: 2025-11-22 10:05:13.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:05:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:14.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 890 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:14 compute-0 ceph-mon[75021]: pgmap v3106: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:14 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3425683252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:05:14 compute-0 nova_compute[253661]: 2025-11-22 10:05:14.516 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:14 compute-0 nova_compute[253661]: 2025-11-22 10:05:14.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:15 compute-0 nova_compute[253661]: 2025-11-22 10:05:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:15.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:15 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:15 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 890 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:16.333+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:16 compute-0 ceph-mon[75021]: pgmap v3107: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:16 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:17.355+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:17 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:17 compute-0 nova_compute[253661]: 2025-11-22 10:05:17.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:18 compute-0 ceph-mon[75021]: pgmap v3108: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:18 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:18.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:18 compute-0 nova_compute[253661]: 2025-11-22 10:05:18.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:05:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 895 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:19 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:19 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 895 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:19.435+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:19 compute-0 nova_compute[253661]: 2025-11-22 10:05:19.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:20 compute-0 ceph-mon[75021]: pgmap v3109: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:20 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:20.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:21.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:21 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:22.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:22 compute-0 ceph-mon[75021]: pgmap v3110: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:22 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:22 compute-0 nova_compute[253661]: 2025-11-22 10:05:22.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:23.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:05:23 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 900 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.334719) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924334756, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1186, "num_deletes": 252, "total_data_size": 1317340, "memory_usage": 1339896, "flush_reason": "Manual Compaction"}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924343580, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 842992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66469, "largest_seqno": 67654, "table_properties": {"data_size": 838661, "index_size": 1662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13514, "raw_average_key_size": 21, "raw_value_size": 828378, "raw_average_value_size": 1321, "num_data_blocks": 74, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805844, "oldest_key_time": 1763805844, "file_creation_time": 1763805924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8908 microseconds, and 2962 cpu microseconds.
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.343624) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 842992 bytes OK
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.343647) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344916) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344934) EVENT_LOG_v1 {"time_micros": 1763805924344928, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1311738, prev total WAL file size 1311738, number of live WAL files 2.
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.345643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353036' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(823KB)], [155(10MB)]
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924345668, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11792895, "oldest_snapshot_seqno": -1}
Nov 22 10:05:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:24.393+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9328 keys, 8875500 bytes, temperature: kUnknown
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924395808, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 8875500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8820452, "index_size": 30615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 246138, "raw_average_key_size": 26, "raw_value_size": 8660814, "raw_average_value_size": 928, "num_data_blocks": 1162, "num_entries": 9328, "num_filter_entries": 9328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.396061) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 8875500 bytes
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.397277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.8 rd, 176.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(24.5) write-amplify(10.5) OK, records in: 9810, records dropped: 482 output_compression: NoCompression
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.397292) EVENT_LOG_v1 {"time_micros": 1763805924397285, "job": 96, "event": "compaction_finished", "compaction_time_micros": 50231, "compaction_time_cpu_micros": 25337, "output_level": 6, "num_output_files": 1, "total_output_size": 8875500, "num_input_records": 9810, "num_output_records": 9328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924397535, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924399569, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.345579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:05:24 compute-0 ceph-mon[75021]: pgmap v3111: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 10:05:24 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:05:24 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 900 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:24 compute-0 nova_compute[253661]: 2025-11-22 10:05:24.569 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 22 10:05:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:25.417+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:25 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:26.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:26 compute-0 ceph-mon[75021]: pgmap v3112: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 22 10:05:26 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 10:05:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:27.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:27 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:27 compute-0 nova_compute[253661]: 2025-11-22 10:05:27.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:05:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:05:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:05:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:28.352+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:28 compute-0 ceph-mon[75021]: pgmap v3113: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 10:05:28 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 905 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:29.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:29 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:29 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 905 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:29 compute-0 nova_compute[253661]: 2025-11-22 10:05:29.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:30.358+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:30 compute-0 ceph-mon[75021]: pgmap v3114: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:30 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:31.353+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:31 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:32.323+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:32 compute-0 ceph-mon[75021]: pgmap v3115: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:32 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:32 compute-0 nova_compute[253661]: 2025-11-22 10:05:32.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:33.313+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:33 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:34.288+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 910 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:34 compute-0 ceph-mon[75021]: pgmap v3116: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:05:34 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:34 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 910 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:34 compute-0 nova_compute[253661]: 2025-11-22 10:05:34.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 10:05:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:35.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:35 compute-0 podman[423511]: 2025-11-22 10:05:35.399394518 +0000 UTC m=+0.078029482 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 10:05:35 compute-0 podman[423512]: 2025-11-22 10:05:35.418285293 +0000 UTC m=+0.087895465 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:05:35 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:36.293+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:36 compute-0 ceph-mon[75021]: pgmap v3117: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 10:05:36 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 10:05:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:37.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:37 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:37 compute-0 nova_compute[253661]: 2025-11-22 10:05:37.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:38.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:38 compute-0 podman[423551]: 2025-11-22 10:05:38.429397917 +0000 UTC m=+0.112151442 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 10:05:38 compute-0 ceph-mon[75021]: pgmap v3118: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 10:05:38 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 22 10:05:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 915 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:39.340+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:39 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:39 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 915 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:39 compute-0 nova_compute[253661]: 2025-11-22 10:05:39.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:40.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:40 compute-0 ceph-mon[75021]: pgmap v3119: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 22 10:05:40 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:40 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:41.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:41 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:41 compute-0 sudo[423578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:41 compute-0 sudo[423578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:41 compute-0 sudo[423578]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:05:42 compute-0 sudo[423603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 sudo[423603]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:42 compute-0 sudo[423628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 sudo[423628]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:05:42 compute-0 sudo[423653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:42.432+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:42 compute-0 ceph-mon[75021]: pgmap v3120: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:42 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:42 compute-0 sudo[423653]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 nova_compute[253661]: 2025-11-22 10:05:42.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:42 compute-0 sudo[423710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:42 compute-0 sudo[423710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 sudo[423710]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:05:42 compute-0 sudo[423735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 sudo[423735]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:42 compute-0 sudo[423760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:42 compute-0 sudo[423760]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:42 compute-0 sudo[423785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- inventory --format=json-pretty --filter-for-batch
Nov 22 10:05:42 compute-0 sudo[423785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.336743741 +0000 UTC m=+0.045340848 container create 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 22 10:05:43 compute-0 systemd[1]: Started libpod-conmon-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope.
Nov 22 10:05:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:43.389+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.318146852 +0000 UTC m=+0.026743979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.430576121 +0000 UTC m=+0.139173238 container init 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.440814052 +0000 UTC m=+0.149411149 container start 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.447483797 +0000 UTC m=+0.156080904 container attach 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:05:43 compute-0 dazzling_borg[423865]: 167 167
Nov 22 10:05:43 compute-0 systemd[1]: libpod-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope: Deactivated successfully.
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.450978182 +0000 UTC m=+0.159575289 container died 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62671ae6e76e613a2929e42567dc6956b66c0df16fd477486afb0ea7980705f-merged.mount: Deactivated successfully.
Nov 22 10:05:43 compute-0 podman[423849]: 2025-11-22 10:05:43.499393684 +0000 UTC m=+0.207990781 container remove 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 10:05:43 compute-0 systemd[1]: libpod-conmon-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope: Deactivated successfully.
Nov 22 10:05:43 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:43 compute-0 podman[423891]: 2025-11-22 10:05:43.677942229 +0000 UTC m=+0.041198324 container create cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:05:43 compute-0 systemd[1]: Started libpod-conmon-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope.
Nov 22 10:05:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:43 compute-0 podman[423891]: 2025-11-22 10:05:43.660617273 +0000 UTC m=+0.023873388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:43 compute-0 podman[423891]: 2025-11-22 10:05:43.76774031 +0000 UTC m=+0.130996435 container init cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:05:43 compute-0 podman[423891]: 2025-11-22 10:05:43.780820652 +0000 UTC m=+0.144076777 container start cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:05:43 compute-0 podman[423891]: 2025-11-22 10:05:43.785570009 +0000 UTC m=+0.148826134 container attach cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 10:05:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 920 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:44.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:44 compute-0 ceph-mon[75021]: pgmap v3121: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:44 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 920 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:44 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:44 compute-0 nova_compute[253661]: 2025-11-22 10:05:44.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]: [
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:     {
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "available": false,
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "ceph_device": false,
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "lsm_data": {},
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "lvs": [],
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "path": "/dev/sr0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "rejected_reasons": [
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "Has a FileSystem",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "Insufficient space (<5GB)"
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         ],
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         "sys_api": {
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "actuators": null,
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "device_nodes": "sr0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "devname": "sr0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "human_readable_size": "482.00 KB",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "id_bus": "ata",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "model": "QEMU DVD-ROM",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "nr_requests": "2",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "parent": "/dev/sr0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "partitions": {},
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "path": "/dev/sr0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "removable": "1",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "rev": "2.5+",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "ro": "0",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "rotational": "1",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "sas_address": "",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "sas_device_handle": "",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "scheduler_mode": "mq-deadline",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "sectors": 0,
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "sectorsize": "2048",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "size": 493568.0,
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "support_discard": "2048",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "type": "disk",
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:             "vendor": "QEMU"
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:         }
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]:     }
Nov 22 10:05:45 compute-0 pensive_nightingale[423908]: ]
Nov 22 10:05:45 compute-0 systemd[1]: libpod-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Deactivated successfully.
Nov 22 10:05:45 compute-0 systemd[1]: libpod-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Consumed 1.537s CPU time.
Nov 22 10:05:45 compute-0 podman[423891]: 2025-11-22 10:05:45.279214088 +0000 UTC m=+1.642470203 container died cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752-merged.mount: Deactivated successfully.
Nov 22 10:05:45 compute-0 podman[423891]: 2025-11-22 10:05:45.344282779 +0000 UTC m=+1.707538864 container remove cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 10:05:45 compute-0 systemd[1]: libpod-conmon-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Deactivated successfully.
Nov 22 10:05:45 compute-0 sudo[423785]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6febcea1-14fd-47c1-b3a8-ff7510c17371 does not exist
Nov 22 10:05:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9f3d8736-da6a-4946-addc-7019b02f4685 does not exist
Nov 22 10:05:45 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 291fede7-c742-41d9-b647-58707cc8dd9e does not exist
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:05:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:05:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:05:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:45.460+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:45 compute-0 sudo[425746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:45 compute-0 sudo[425746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:45 compute-0 sudo[425746]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:45 compute-0 sudo[425771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:05:45 compute-0 sudo[425771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:45 compute-0 sudo[425771]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:45 compute-0 sudo[425796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:45 compute-0 sudo[425796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:45 compute-0 sudo[425796]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:45 compute-0 sudo[425821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:05:45 compute-0 sudo[425821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.023480709 +0000 UTC m=+0.042673912 container create 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:05:46 compute-0 systemd[1]: Started libpod-conmon-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope.
Nov 22 10:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.005838005 +0000 UTC m=+0.025031238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.112370947 +0000 UTC m=+0.131564150 container init 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.119568014 +0000 UTC m=+0.138761217 container start 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.122917657 +0000 UTC m=+0.142110970 container attach 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:05:46 compute-0 laughing_shtern[425902]: 167 167
Nov 22 10:05:46 compute-0 systemd[1]: libpod-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope: Deactivated successfully.
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.126846853 +0000 UTC m=+0.146040046 container died 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9a9639fb932e8eaab9391452108b3bf81333a9e2a1827ef3f41dd09b0c24fc-merged.mount: Deactivated successfully.
Nov 22 10:05:46 compute-0 podman[425886]: 2025-11-22 10:05:46.164030479 +0000 UTC m=+0.183223682 container remove 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 10:05:46 compute-0 systemd[1]: libpod-conmon-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope: Deactivated successfully.
Nov 22 10:05:46 compute-0 podman[425924]: 2025-11-22 10:05:46.312720909 +0000 UTC m=+0.038861908 container create d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 10:05:46 compute-0 systemd[1]: Started libpod-conmon-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope.
Nov 22 10:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:46 compute-0 podman[425924]: 2025-11-22 10:05:46.295621618 +0000 UTC m=+0.021762637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:46 compute-0 ceph-mon[75021]: pgmap v3122: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:05:46 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:05:46 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:46 compute-0 podman[425924]: 2025-11-22 10:05:46.414748451 +0000 UTC m=+0.140889470 container init d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:05:46 compute-0 podman[425924]: 2025-11-22 10:05:46.424091961 +0000 UTC m=+0.150232960 container start d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 10:05:46 compute-0 podman[425924]: 2025-11-22 10:05:46.42733145 +0000 UTC m=+0.153472589 container attach d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 10:05:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:46.494+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:47 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:47 compute-0 gracious_elgamal[425940]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:05:47 compute-0 gracious_elgamal[425940]: --> relative data size: 1.0
Nov 22 10:05:47 compute-0 gracious_elgamal[425940]: --> All data devices are unavailable
Nov 22 10:05:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:47.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:47 compute-0 systemd[1]: libpod-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope: Deactivated successfully.
Nov 22 10:05:47 compute-0 podman[425924]: 2025-11-22 10:05:47.485831797 +0000 UTC m=+1.211972826 container died d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9-merged.mount: Deactivated successfully.
Nov 22 10:05:47 compute-0 podman[425924]: 2025-11-22 10:05:47.547403333 +0000 UTC m=+1.273544332 container remove d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 10:05:47 compute-0 systemd[1]: libpod-conmon-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope: Deactivated successfully.
Nov 22 10:05:47 compute-0 sudo[425821]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:47 compute-0 sudo[425980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:47 compute-0 sudo[425980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:47 compute-0 sudo[425980]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:47 compute-0 nova_compute[253661]: 2025-11-22 10:05:47.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:47 compute-0 sudo[426005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:05:47 compute-0 sudo[426005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:47 compute-0 sudo[426005]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:47 compute-0 sudo[426030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:47 compute-0 sudo[426030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:47 compute-0 sudo[426030]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:47 compute-0 sudo[426055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:05:47 compute-0 sudo[426055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.239009479 +0000 UTC m=+0.086209644 container create 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.176400547 +0000 UTC m=+0.023600732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:48 compute-0 systemd[1]: Started libpod-conmon-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope.
Nov 22 10:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.393363558 +0000 UTC m=+0.240563723 container init 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.39994474 +0000 UTC m=+0.247144905 container start 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:05:48 compute-0 dreamy_lederberg[426137]: 167 167
Nov 22 10:05:48 compute-0 systemd[1]: libpod-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope: Deactivated successfully.
Nov 22 10:05:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:48.420+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.456942963 +0000 UTC m=+0.304143128 container attach 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.457815594 +0000 UTC m=+0.305015759 container died 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 10:05:48 compute-0 ceph-mon[75021]: pgmap v3123: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:48 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac25471a14ebb6cb467bf8b2463e6abde7dce624930efd1f14e1a3b123f4a65-merged.mount: Deactivated successfully.
Nov 22 10:05:48 compute-0 podman[426120]: 2025-11-22 10:05:48.585738134 +0000 UTC m=+0.432938299 container remove 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 10:05:48 compute-0 systemd[1]: libpod-conmon-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope: Deactivated successfully.
Nov 22 10:05:48 compute-0 podman[426164]: 2025-11-22 10:05:48.74646121 +0000 UTC m=+0.042063217 container create 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 10:05:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:48 compute-0 systemd[1]: Started libpod-conmon-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope.
Nov 22 10:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:48 compute-0 podman[426164]: 2025-11-22 10:05:48.728357824 +0000 UTC m=+0.023959861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:48 compute-0 podman[426164]: 2025-11-22 10:05:48.842260318 +0000 UTC m=+0.137862325 container init 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:05:48 compute-0 podman[426164]: 2025-11-22 10:05:48.852012348 +0000 UTC m=+0.147614355 container start 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:05:48 compute-0 podman[426164]: 2025-11-22 10:05:48.857090503 +0000 UTC m=+0.152692540 container attach 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:05:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 925 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:49.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:49 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:49 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 925 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:49 compute-0 nova_compute[253661]: 2025-11-22 10:05:49.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]: {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     "0": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "devices": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "/dev/loop3"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             ],
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_name": "ceph_lv0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_size": "21470642176",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "name": "ceph_lv0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "tags": {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_name": "ceph",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.crush_device_class": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.encrypted": "0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_id": "0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.vdo": "0"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             },
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "vg_name": "ceph_vg0"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         }
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     ],
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     "1": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "devices": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "/dev/loop4"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             ],
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_name": "ceph_lv1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_size": "21470642176",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "name": "ceph_lv1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "tags": {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_name": "ceph",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.crush_device_class": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.encrypted": "0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_id": "1",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.vdo": "0"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             },
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "vg_name": "ceph_vg1"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         }
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     ],
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     "2": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "devices": [
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "/dev/loop5"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             ],
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_name": "ceph_lv2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_size": "21470642176",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "name": "ceph_lv2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "tags": {
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.cluster_name": "ceph",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.crush_device_class": "",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.encrypted": "0",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osd_id": "2",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:                 "ceph.vdo": "0"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             },
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "type": "block",
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:             "vg_name": "ceph_vg2"
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:         }
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]:     ]
Nov 22 10:05:49 compute-0 adoring_goldwasser[426181]: }
Nov 22 10:05:49 compute-0 systemd[1]: libpod-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope: Deactivated successfully.
Nov 22 10:05:49 compute-0 podman[426164]: 2025-11-22 10:05:49.655913088 +0000 UTC m=+0.951515095 container died 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0-merged.mount: Deactivated successfully.
Nov 22 10:05:49 compute-0 podman[426164]: 2025-11-22 10:05:49.710777658 +0000 UTC m=+1.006379665 container remove 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:05:49 compute-0 systemd[1]: libpod-conmon-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope: Deactivated successfully.
Nov 22 10:05:49 compute-0 sudo[426055]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:49 compute-0 sudo[426201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:49 compute-0 sudo[426201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:49 compute-0 sudo[426201]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:49 compute-0 sudo[426226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:05:49 compute-0 sudo[426226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:49 compute-0 sudo[426226]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:49 compute-0 sudo[426251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:49 compute-0 sudo[426251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:49 compute-0 sudo[426251]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:50 compute-0 sudo[426276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:05:50 compute-0 sudo[426276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.337929957 +0000 UTC m=+0.043995504 container create b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 10:05:50 compute-0 systemd[1]: Started libpod-conmon-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope.
Nov 22 10:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.405358487 +0000 UTC m=+0.111424024 container init b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.412123003 +0000 UTC m=+0.118188550 container start b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.317209586 +0000 UTC m=+0.023275153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.41561272 +0000 UTC m=+0.121678267 container attach b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:05:50 compute-0 lucid_grothendieck[426358]: 167 167
Nov 22 10:05:50 compute-0 systemd[1]: libpod-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope: Deactivated successfully.
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.417768602 +0000 UTC m=+0.123834149 container died b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa0cd8fb525f658d934742423177e912c87bc989137502fea840d1a3a89415a2-merged.mount: Deactivated successfully.
Nov 22 10:05:50 compute-0 podman[426341]: 2025-11-22 10:05:50.459905489 +0000 UTC m=+0.165971036 container remove b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 10:05:50 compute-0 systemd[1]: libpod-conmon-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope: Deactivated successfully.
Nov 22 10:05:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:50.478+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:50 compute-0 ceph-mon[75021]: pgmap v3124: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:50 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:50 compute-0 podman[426381]: 2025-11-22 10:05:50.650976063 +0000 UTC m=+0.044649130 container create a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:05:50 compute-0 systemd[1]: Started libpod-conmon-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope.
Nov 22 10:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:50 compute-0 podman[426381]: 2025-11-22 10:05:50.631458973 +0000 UTC m=+0.025132060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:05:50 compute-0 podman[426381]: 2025-11-22 10:05:50.740430805 +0000 UTC m=+0.134103902 container init a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:05:50 compute-0 podman[426381]: 2025-11-22 10:05:50.750743299 +0000 UTC m=+0.144416366 container start a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:05:50 compute-0 podman[426381]: 2025-11-22 10:05:50.754996644 +0000 UTC m=+0.148669731 container attach a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 10:05:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:51.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:51 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]: {
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_id": 1,
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "type": "bluestore"
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     },
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_id": 0,
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "type": "bluestore"
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     },
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_id": 2,
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:         "type": "bluestore"
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]:     }
Nov 22 10:05:51 compute-0 compassionate_kepler[426397]: }
Nov 22 10:05:51 compute-0 systemd[1]: libpod-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Deactivated successfully.
Nov 22 10:05:51 compute-0 systemd[1]: libpod-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Consumed 1.099s CPU time.
Nov 22 10:05:51 compute-0 podman[426381]: 2025-11-22 10:05:51.844199517 +0000 UTC m=+1.237872584 container died a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 10:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec-merged.mount: Deactivated successfully.
Nov 22 10:05:51 compute-0 podman[426381]: 2025-11-22 10:05:51.904249135 +0000 UTC m=+1.297922202 container remove a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:05:51 compute-0 systemd[1]: libpod-conmon-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Deactivated successfully.
Nov 22 10:05:51 compute-0 sudo[426276]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:05:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:05:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b6f93535-4bbe-438c-89bc-616cf734242e does not exist
Nov 22 10:05:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 12555245-e955-4221-9c17-2549d19ac16a does not exist
Nov 22 10:05:52 compute-0 sudo[426446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:05:52 compute-0 sudo[426446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:52 compute-0 sudo[426446]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:52 compute-0 sudo[426471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:05:52 compute-0 sudo[426471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:05:52 compute-0 sudo[426471]: pam_unix(sudo:session): session closed for user root
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:05:52
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:05:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:52.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:52 compute-0 ceph-mon[75021]: pgmap v3125: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:52 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:05:52 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:52 compute-0 nova_compute[253661]: 2025-11-22 10:05:52.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:05:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:05:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:53.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:53 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 930 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:54.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:54 compute-0 nova_compute[253661]: 2025-11-22 10:05:54.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:54 compute-0 ceph-mon[75021]: pgmap v3126: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:54 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 930 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:54 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:55.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:05:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:05:55 compute-0 ceph-mon[75021]: pgmap v3127: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:05:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:05:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:05:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:56.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:56 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:56 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:57.487+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:57 compute-0 nova_compute[253661]: 2025-11-22 10:05:57.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:57 compute-0 ceph-mon[75021]: pgmap v3128: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:57 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:58.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:58 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:05:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:05:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:05:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:05:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:05:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:05:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:59.449+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:05:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:05:59 compute-0 nova_compute[253661]: 2025-11-22 10:05:59.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:05:59 compute-0 ceph-mon[75021]: pgmap v3129: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:05:59 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:05:59 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:00.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:00 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:01.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:01 compute-0 ceph-mon[75021]: pgmap v3130: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:01 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:02.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:02 compute-0 nova_compute[253661]: 2025-11-22 10:06:02.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:02 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:06:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:06:03 compute-0 nova_compute[253661]: 2025-11-22 10:06:03.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:03.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:03 compute-0 ceph-mon[75021]: pgmap v3131: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:03 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:04 compute-0 nova_compute[253661]: 2025-11-22 10:06:04.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 940 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.346220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964346264, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 753, "num_deletes": 259, "total_data_size": 733849, "memory_usage": 749176, "flush_reason": "Manual Compaction"}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964353743, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 712199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67655, "largest_seqno": 68407, "table_properties": {"data_size": 708598, "index_size": 1316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9397, "raw_average_key_size": 19, "raw_value_size": 700787, "raw_average_value_size": 1463, "num_data_blocks": 58, "num_entries": 479, "num_filter_entries": 479, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805924, "oldest_key_time": 1763805924, "file_creation_time": 1763805964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 7559 microseconds, and 2843 cpu microseconds.
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.353777) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 712199 bytes OK
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.353795) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356419) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356431) EVENT_LOG_v1 {"time_micros": 1763805964356426, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 729879, prev total WAL file size 729879, number of live WAL files 2.
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356794) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323635' seq:0, type:0; will stop at (end)
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(695KB)], [158(8667KB)]
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964356816, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 9587699, "oldest_snapshot_seqno": -1}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9278 keys, 9438621 bytes, temperature: kUnknown
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964420583, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 9438621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9383045, "index_size": 31271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 246313, "raw_average_key_size": 26, "raw_value_size": 9223423, "raw_average_value_size": 994, "num_data_blocks": 1186, "num_entries": 9278, "num_filter_entries": 9278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.420953) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 9438621 bytes
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.422875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.9 rd, 147.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(26.7) write-amplify(13.3) OK, records in: 9807, records dropped: 529 output_compression: NoCompression
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.422891) EVENT_LOG_v1 {"time_micros": 1763805964422883, "job": 98, "event": "compaction_finished", "compaction_time_micros": 63969, "compaction_time_cpu_micros": 25808, "output_level": 6, "num_output_files": 1, "total_output_size": 9438621, "num_input_records": 9807, "num_output_records": 9278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964423487, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964425569, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:04.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:04 compute-0 nova_compute[253661]: 2025-11-22 10:06:04.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:05 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 940 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:05 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:05.456+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:06 compute-0 ceph-mon[75021]: pgmap v3132: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:06 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:06 compute-0 podman[426496]: 2025-11-22 10:06:06.372854826 +0000 UTC m=+0.064376136 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 10:06:06 compute-0 podman[426497]: 2025-11-22 10:06:06.386520653 +0000 UTC m=+0.077301984 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 22 10:06:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:06.445+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:07 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:07.405+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:07 compute-0 nova_compute[253661]: 2025-11-22 10:06:07.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:08.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:08 compute-0 ceph-mon[75021]: pgmap v3133: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:08 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 945 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:09 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:09 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 945 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:09.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:09 compute-0 podman[426532]: 2025-11-22 10:06:09.429743897 +0000 UTC m=+0.119987665 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 10:06:09 compute-0 nova_compute[253661]: 2025-11-22 10:06:09.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:10 compute-0 nova_compute[253661]: 2025-11-22 10:06:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:10 compute-0 nova_compute[253661]: 2025-11-22 10:06:10.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:06:10 compute-0 ceph-mon[75021]: pgmap v3134: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:10 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:10.407+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:11.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:11 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:06:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:12.404+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:12 compute-0 ceph-mon[75021]: pgmap v3135: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:12 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:06:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:06:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952283727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.747 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:06:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.887 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.889 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:06:12 compute-0 nova_compute[253661]: 2025-11-22 10:06:12.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.090 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.091 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.109 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:06:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:13.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:13 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:06:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2952283727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:06:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:06:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157286242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.569 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.574 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.587 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.589 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:06:13 compute-0 nova_compute[253661]: 2025-11-22 10:06:13.589 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:06:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 950 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:14.419+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:14 compute-0 ceph-mon[75021]: pgmap v3136: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:14 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3157286242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:06:14 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 950 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:14 compute-0 nova_compute[253661]: 2025-11-22 10:06:14.589 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:14 compute-0 nova_compute[253661]: 2025-11-22 10:06:14.590 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:14 compute-0 nova_compute[253661]: 2025-11-22 10:06:14.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:15 compute-0 nova_compute[253661]: 2025-11-22 10:06:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:15.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:15 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:16.325+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:16 compute-0 ceph-mon[75021]: pgmap v3137: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:16 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:17.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:17 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:17 compute-0 nova_compute[253661]: 2025-11-22 10:06:17.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:18.275+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:18 compute-0 ceph-mon[75021]: pgmap v3138: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:18 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:19.258+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 955 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:19 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:19 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 955 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:19 compute-0 nova_compute[253661]: 2025-11-22 10:06:19.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:20.274+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:20 compute-0 ceph-mon[75021]: pgmap v3139: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:20 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:21.240+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:21 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:22.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:22 compute-0 nova_compute[253661]: 2025-11-22 10:06:22.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:06:22 compute-0 ceph-mon[75021]: pgmap v3140: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:22 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:22 compute-0 nova_compute[253661]: 2025-11-22 10:06:22.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:23.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:23 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:24.174+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 960 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:24 compute-0 ceph-mon[75021]: pgmap v3141: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:24 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:24 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 960 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:24 compute-0 nova_compute[253661]: 2025-11-22 10:06:24.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:25.214+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:25 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:25 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:26.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:26 compute-0 ceph-mon[75021]: pgmap v3142: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:26 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:27.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:27 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:27 compute-0 nova_compute[253661]: 2025-11-22 10:06:27.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:06:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:06:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:06:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:28.285+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:28 compute-0 ceph-mon[75021]: pgmap v3143: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:28 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:29.296+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 965 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:29 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:29 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 965 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:29 compute-0 nova_compute[253661]: 2025-11-22 10:06:29.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:30.261+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:30 compute-0 ceph-mon[75021]: pgmap v3144: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:30 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:31.222+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:31 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:32.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:32 compute-0 ceph-mon[75021]: pgmap v3145: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:32 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:32 compute-0 nova_compute[253661]: 2025-11-22 10:06:32.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:33.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:33 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:34.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 970 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:34 compute-0 ceph-mon[75021]: pgmap v3146: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:34 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:34 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 970 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:34 compute-0 nova_compute[253661]: 2025-11-22 10:06:34.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:35.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:35 compute-0 ceph-mon[75021]: pgmap v3147: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:35 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:36.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:36 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:37.226+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:37 compute-0 podman[426603]: 2025-11-22 10:06:37.380150913 +0000 UTC m=+0.067315828 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 10:06:37 compute-0 podman[426604]: 2025-11-22 10:06:37.429887088 +0000 UTC m=+0.107759434 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:06:37 compute-0 ceph-mon[75021]: pgmap v3148: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:37 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:37 compute-0 nova_compute[253661]: 2025-11-22 10:06:37.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:38.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:38 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:39.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 975 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:39 compute-0 nova_compute[253661]: 2025-11-22 10:06:39.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:39 compute-0 ceph-mon[75021]: pgmap v3149: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:39 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:39 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 975 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:40.223+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:40 compute-0 podman[426642]: 2025-11-22 10:06:40.444664332 +0000 UTC m=+0.128103965 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:06:40 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:41.241+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:41 compute-0 ceph-mon[75021]: pgmap v3150: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:41 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:42.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:42 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:42 compute-0 nova_compute[253661]: 2025-11-22 10:06:42.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:43.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:43 compute-0 ceph-mon[75021]: pgmap v3151: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:43 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:44.184+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 980 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:44 compute-0 nova_compute[253661]: 2025-11-22 10:06:44.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:44 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:44 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 980 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:45.231+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:45 compute-0 ceph-mon[75021]: pgmap v3152: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:45 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.771373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005771432, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 251, "total_data_size": 669735, "memory_usage": 683176, "flush_reason": "Manual Compaction"}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005781386, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 658992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68408, "largest_seqno": 69134, "table_properties": {"data_size": 655471, "index_size": 1236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9194, "raw_average_key_size": 19, "raw_value_size": 648007, "raw_average_value_size": 1405, "num_data_blocks": 55, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805964, "oldest_key_time": 1763805964, "file_creation_time": 1763806005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 10070 microseconds, and 3801 cpu microseconds.
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.781436) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 658992 bytes OK
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.781463) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783792) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783864) EVENT_LOG_v1 {"time_micros": 1763806005783850, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783900) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 665907, prev total WAL file size 665907, number of live WAL files 2.
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.784700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(643KB)], [161(9217KB)]
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005784739, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 10097613, "oldest_snapshot_seqno": -1}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9229 keys, 8678948 bytes, temperature: kUnknown
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005849076, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 8678948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8624322, "index_size": 30449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 246264, "raw_average_key_size": 26, "raw_value_size": 8466086, "raw_average_value_size": 917, "num_data_blocks": 1145, "num_entries": 9229, "num_filter_entries": 9229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.849621) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 8678948 bytes
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.851035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.6 rd, 134.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(28.5) write-amplify(13.2) OK, records in: 9739, records dropped: 510 output_compression: NoCompression
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.851064) EVENT_LOG_v1 {"time_micros": 1763806005851050, "job": 100, "event": "compaction_finished", "compaction_time_micros": 64498, "compaction_time_cpu_micros": 25906, "output_level": 6, "num_output_files": 1, "total_output_size": 8678948, "num_input_records": 9739, "num_output_records": 9229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005851387, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005853635, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.784606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:45 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:06:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:46.279+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:46 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:47.291+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:47 compute-0 ceph-mon[75021]: pgmap v3153: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:47 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:47 compute-0 nova_compute[253661]: 2025-11-22 10:06:47.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:48.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:48 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:49.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 985 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:49 compute-0 nova_compute[253661]: 2025-11-22 10:06:49.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:49 compute-0 ceph-mon[75021]: pgmap v3154: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:49 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:49 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 985 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:50.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:50 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:51.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:51 compute-0 ceph-mon[75021]: pgmap v3155: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:51 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:52 compute-0 sudo[426668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:52 compute-0 sudo[426668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:52 compute-0 sudo[426668]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:52 compute-0 sudo[426693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:06:52 compute-0 sudo[426693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:52 compute-0 sudo[426693]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:52 compute-0 sudo[426718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:52 compute-0 sudo[426718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:52 compute-0 sudo[426718]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:06:52
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms']
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:06:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:52.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:52 compute-0 sudo[426743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:06:52 compute-0 sudo[426743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:52 compute-0 sudo[426743]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:52 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:52 compute-0 nova_compute[253661]: 2025-11-22 10:06:52.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ba4af6e4-fcb0-433a-91cc-f9e723842f42 does not exist
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c0bf4a41-8b64-4aba-b1bc-ed34d2ef7344 does not exist
Nov 22 10:06:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 698c475a-5774-4c7b-a865-2823cb7cfd46 does not exist
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:06:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:06:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:06:52 compute-0 sudo[426799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:52 compute-0 sudo[426799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:52 compute-0 sudo[426799]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:53 compute-0 sudo[426824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:06:53 compute-0 sudo[426824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:53 compute-0 sudo[426824]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:53 compute-0 sudo[426849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:53 compute-0 sudo[426849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:53 compute-0 sudo[426849]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:53 compute-0 sudo[426874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:06:53 compute-0 sudo[426874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:53.309+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.52142981 +0000 UTC m=+0.044465476 container create 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 10:06:53 compute-0 systemd[1]: Started libpod-conmon-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope.
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.502338059 +0000 UTC m=+0.025373745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.633916039 +0000 UTC m=+0.156951735 container init 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.646840386 +0000 UTC m=+0.169876052 container start 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.650590589 +0000 UTC m=+0.173626285 container attach 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:06:53 compute-0 xenodochial_kilby[426954]: 167 167
Nov 22 10:06:53 compute-0 systemd[1]: libpod-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope: Deactivated successfully.
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.656016963 +0000 UTC m=+0.179052659 container died 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 10:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e39e854f3cdf55342d8b2ac8f14805e7fe864c9172e63bbb91c26c5dab24c4d-merged.mount: Deactivated successfully.
Nov 22 10:06:53 compute-0 podman[426938]: 2025-11-22 10:06:53.706995267 +0000 UTC m=+0.230030933 container remove 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:06:53 compute-0 systemd[1]: libpod-conmon-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope: Deactivated successfully.
Nov 22 10:06:53 compute-0 ceph-mon[75021]: pgmap v3156: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:06:53 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:06:53 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:53 compute-0 podman[426978]: 2025-11-22 10:06:53.884762473 +0000 UTC m=+0.046172227 container create 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:06:53 compute-0 systemd[1]: Started libpod-conmon-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope.
Nov 22 10:06:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:53 compute-0 podman[426978]: 2025-11-22 10:06:53.864858594 +0000 UTC m=+0.026268518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:53 compute-0 podman[426978]: 2025-11-22 10:06:53.964391804 +0000 UTC m=+0.125801608 container init 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:06:53 compute-0 podman[426978]: 2025-11-22 10:06:53.97198149 +0000 UTC m=+0.133391244 container start 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:06:53 compute-0 podman[426978]: 2025-11-22 10:06:53.975470947 +0000 UTC m=+0.136880771 container attach 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 10:06:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:54.335+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 990 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:54 compute-0 nova_compute[253661]: 2025-11-22 10:06:54.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:54 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:54 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 990 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:55 compute-0 practical_keller[426994]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:06:55 compute-0 practical_keller[426994]: --> relative data size: 1.0
Nov 22 10:06:55 compute-0 practical_keller[426994]: --> All data devices are unavailable
Nov 22 10:06:55 compute-0 systemd[1]: libpod-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Deactivated successfully.
Nov 22 10:06:55 compute-0 systemd[1]: libpod-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Consumed 1.033s CPU time.
Nov 22 10:06:55 compute-0 podman[426978]: 2025-11-22 10:06:55.067915329 +0000 UTC m=+1.229325093 container died 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 10:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7-merged.mount: Deactivated successfully.
Nov 22 10:06:55 compute-0 podman[426978]: 2025-11-22 10:06:55.127590738 +0000 UTC m=+1.289000492 container remove 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:06:55 compute-0 systemd[1]: libpod-conmon-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Deactivated successfully.
Nov 22 10:06:55 compute-0 sudo[426874]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:55 compute-0 sudo[427038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:55 compute-0 sudo[427038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:55 compute-0 sudo[427038]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:55 compute-0 sudo[427063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:06:55 compute-0 sudo[427063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:55 compute-0 sudo[427063]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:55.328+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:55 compute-0 sudo[427088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:55 compute-0 sudo[427088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:55 compute-0 sudo[427088]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:55 compute-0 sudo[427113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:06:55 compute-0 sudo[427113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:55 compute-0 podman[427179]: 2025-11-22 10:06:55.728773797 +0000 UTC m=+0.025103579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:06:55 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:06:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:56.324+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:06:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:06:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.552425452 +0000 UTC m=+0.848755224 container create 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:06:56 compute-0 ceph-mon[75021]: pgmap v3157: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:56 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:56 compute-0 systemd[1]: Started libpod-conmon-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope.
Nov 22 10:06:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.635726733 +0000 UTC m=+0.932056555 container init 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.64333602 +0000 UTC m=+0.939665782 container start 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.646072798 +0000 UTC m=+0.942402630 container attach 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 10:06:56 compute-0 sweet_archimedes[427195]: 167 167
Nov 22 10:06:56 compute-0 systemd[1]: libpod-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope: Deactivated successfully.
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.647974735 +0000 UTC m=+0.944304507 container died 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8034d4d72cd866a75abb1ffe6d72c290c55bde232559887d957310bf88f36539-merged.mount: Deactivated successfully.
Nov 22 10:06:56 compute-0 podman[427179]: 2025-11-22 10:06:56.687842386 +0000 UTC m=+0.984172148 container remove 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:06:56 compute-0 systemd[1]: libpod-conmon-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope: Deactivated successfully.
Nov 22 10:06:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:56 compute-0 podman[427218]: 2025-11-22 10:06:56.878590261 +0000 UTC m=+0.050582006 container create f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 10:06:56 compute-0 systemd[1]: Started libpod-conmon-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope.
Nov 22 10:06:56 compute-0 podman[427218]: 2025-11-22 10:06:56.859073461 +0000 UTC m=+0.031065206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:56 compute-0 podman[427218]: 2025-11-22 10:06:56.977265561 +0000 UTC m=+0.149257306 container init f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:06:56 compute-0 podman[427218]: 2025-11-22 10:06:56.983996767 +0000 UTC m=+0.155988502 container start f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 10:06:56 compute-0 podman[427218]: 2025-11-22 10:06:56.988420395 +0000 UTC m=+0.160412110 container attach f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:06:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:57.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:57 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:57 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]: {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     "0": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "devices": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "/dev/loop3"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             ],
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_name": "ceph_lv0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_size": "21470642176",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "name": "ceph_lv0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "tags": {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_name": "ceph",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.crush_device_class": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.encrypted": "0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_id": "0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.vdo": "0"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             },
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "vg_name": "ceph_vg0"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         }
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     ],
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     "1": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "devices": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "/dev/loop4"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             ],
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_name": "ceph_lv1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_size": "21470642176",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "name": "ceph_lv1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "tags": {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_name": "ceph",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.crush_device_class": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.encrypted": "0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_id": "1",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.vdo": "0"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             },
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "vg_name": "ceph_vg1"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         }
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     ],
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     "2": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "devices": [
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "/dev/loop5"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             ],
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_name": "ceph_lv2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_size": "21470642176",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "name": "ceph_lv2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "tags": {
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.cluster_name": "ceph",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.crush_device_class": "",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.encrypted": "0",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osd_id": "2",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:                 "ceph.vdo": "0"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             },
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "type": "block",
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:             "vg_name": "ceph_vg2"
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:         }
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]:     ]
Nov 22 10:06:57 compute-0 nostalgic_tesla[427235]: }
Nov 22 10:06:57 compute-0 systemd[1]: libpod-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope: Deactivated successfully.
Nov 22 10:06:57 compute-0 podman[427244]: 2025-11-22 10:06:57.855628133 +0000 UTC m=+0.032508201 container died f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:06:57 compute-0 nova_compute[253661]: 2025-11-22 10:06:57.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7-merged.mount: Deactivated successfully.
Nov 22 10:06:57 compute-0 podman[427244]: 2025-11-22 10:06:57.925429561 +0000 UTC m=+0.102309619 container remove f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:06:57 compute-0 systemd[1]: libpod-conmon-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope: Deactivated successfully.
Nov 22 10:06:57 compute-0 sudo[427113]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:58 compute-0 sudo[427259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:58 compute-0 sudo[427259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:58 compute-0 sudo[427259]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:58 compute-0 sudo[427284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:06:58 compute-0 sudo[427284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:58 compute-0 sudo[427284]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:58 compute-0 sudo[427309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:06:58 compute-0 sudo[427309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:58 compute-0 sudo[427309]: pam_unix(sudo:session): session closed for user root
Nov 22 10:06:58 compute-0 sudo[427334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:06:58 compute-0 sudo[427334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:06:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:58.353+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:58 compute-0 ceph-mon[75021]: pgmap v3158: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:58 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.692707749 +0000 UTC m=+0.052827371 container create 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:06:58 compute-0 systemd[1]: Started libpod-conmon-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope.
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.669829457 +0000 UTC m=+0.029949119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.797234233 +0000 UTC m=+0.157353905 container init 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.807935766 +0000 UTC m=+0.168055428 container start 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.812301473 +0000 UTC m=+0.172421135 container attach 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:06:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:06:58 compute-0 awesome_benz[427416]: 167 167
Nov 22 10:06:58 compute-0 systemd[1]: libpod-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope: Deactivated successfully.
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.815173444 +0000 UTC m=+0.175293106 container died 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0231d54be356b52de63b794f819bac75da8d75513dbd49852d38405667b6b9-merged.mount: Deactivated successfully.
Nov 22 10:06:58 compute-0 podman[427400]: 2025-11-22 10:06:58.867203935 +0000 UTC m=+0.227323577 container remove 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:06:58 compute-0 systemd[1]: libpod-conmon-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope: Deactivated successfully.
Nov 22 10:06:59 compute-0 podman[427439]: 2025-11-22 10:06:59.070349946 +0000 UTC m=+0.046070315 container create 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:06:59 compute-0 systemd[1]: Started libpod-conmon-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope.
Nov 22 10:06:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:06:59 compute-0 podman[427439]: 2025-11-22 10:06:59.052797693 +0000 UTC m=+0.028518082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:06:59 compute-0 podman[427439]: 2025-11-22 10:06:59.167143059 +0000 UTC m=+0.142863458 container init 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:06:59 compute-0 podman[427439]: 2025-11-22 10:06:59.174265884 +0000 UTC m=+0.149986253 container start 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:06:59 compute-0 podman[427439]: 2025-11-22 10:06:59.177625586 +0000 UTC m=+0.153345955 container attach 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:06:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 995 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:06:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:59.368+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:06:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:06:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:06:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:06:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:06:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:06:59 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 995 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:06:59 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:06:59 compute-0 nova_compute[253661]: 2025-11-22 10:06:59.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:00 compute-0 infallible_edison[427456]: {
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_id": 1,
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "type": "bluestore"
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     },
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_id": 0,
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "type": "bluestore"
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     },
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_id": 2,
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:07:00 compute-0 infallible_edison[427456]:         "type": "bluestore"
Nov 22 10:07:00 compute-0 infallible_edison[427456]:     }
Nov 22 10:07:00 compute-0 infallible_edison[427456]: }
Nov 22 10:07:00 compute-0 systemd[1]: libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Deactivated successfully.
Nov 22 10:07:00 compute-0 systemd[1]: libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Consumed 1.036s CPU time.
Nov 22 10:07:00 compute-0 conmon[427456]: conmon 87fa04487078b713604f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope/container/memory.events
Nov 22 10:07:00 compute-0 podman[427489]: 2025-11-22 10:07:00.250158238 +0000 UTC m=+0.026086532 container died 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa-merged.mount: Deactivated successfully.
Nov 22 10:07:00 compute-0 podman[427489]: 2025-11-22 10:07:00.304263241 +0000 UTC m=+0.080191515 container remove 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:07:00 compute-0 systemd[1]: libpod-conmon-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Deactivated successfully.
Nov 22 10:07:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:00.329+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:00 compute-0 sudo[427334]: pam_unix(sudo:session): session closed for user root
Nov 22 10:07:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:07:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:07:00 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:07:00 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:07:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c539ce19-c914-4a12-89ef-3756d46bdfbc does not exist
Nov 22 10:07:00 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3df6099a-63e7-4431-9464-f212f6fc73ce does not exist
Nov 22 10:07:00 compute-0 sudo[427504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:07:00 compute-0 sudo[427504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:07:00 compute-0 sudo[427504]: pam_unix(sudo:session): session closed for user root
Nov 22 10:07:00 compute-0 sudo[427529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:07:00 compute-0 sudo[427529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:07:00 compute-0 sudo[427529]: pam_unix(sudo:session): session closed for user root
Nov 22 10:07:00 compute-0 ceph-mon[75021]: pgmap v3159: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:00 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:07:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:07:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:01.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:01 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:02.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:02 compute-0 ceph-mon[75021]: pgmap v3160: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:02 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:02 compute-0 nova_compute[253661]: 2025-11-22 10:07:02.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:07:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:07:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:03.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:03 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:04 compute-0 nova_compute[253661]: 2025-11-22 10:07:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:04.359+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1000 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:04 compute-0 ceph-mon[75021]: pgmap v3161: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:04 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:04 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1000 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:04 compute-0 nova_compute[253661]: 2025-11-22 10:07:04.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:05.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:05 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:06 compute-0 nova_compute[253661]: 2025-11-22 10:07:06.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:06.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:06 compute-0 ceph-mon[75021]: pgmap v3162: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:06 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:07 compute-0 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:07 compute-0 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:07:07 compute-0 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:07:07 compute-0 nova_compute[253661]: 2025-11-22 10:07:07.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:07:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:07.342+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:07 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:07 compute-0 nova_compute[253661]: 2025-11-22 10:07:07.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:08.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:08 compute-0 podman[427555]: 2025-11-22 10:07:08.38411038 +0000 UTC m=+0.065306699 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 10:07:08 compute-0 podman[427554]: 2025-11-22 10:07:08.392301152 +0000 UTC m=+0.072435124 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 10:07:08 compute-0 ceph-mon[75021]: pgmap v3163: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:08 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:09 compute-0 nova_compute[253661]: 2025-11-22 10:07:09.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:09.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1005 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:09 compute-0 ceph-mon[75021]: pgmap v3164: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:09 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:09 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1005 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:09 compute-0 nova_compute[253661]: 2025-11-22 10:07:09.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:10.315+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:10 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:11 compute-0 nova_compute[253661]: 2025-11-22 10:07:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:11 compute-0 nova_compute[253661]: 2025-11-22 10:07:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:07:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:11.293+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:11 compute-0 podman[427588]: 2025-11-22 10:07:11.400302228 +0000 UTC m=+0.093184025 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:07:11 compute-0 ceph-mon[75021]: pgmap v3165: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:11 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:12.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:07:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:07:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:07:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:07:12 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:07:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:07:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:12 compute-0 nova_compute[253661]: 2025-11-22 10:07:12.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:13.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.261 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:07:13 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:07:13 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313661028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.727 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:07:13 compute-0 ceph-mon[75021]: pgmap v3166: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:13 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/313661028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.917 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.992 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:07:13 compute-0 nova_compute[253661]: 2025-11-22 10:07:13.993 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.054 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.152 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.153 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.174 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.204 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.225 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:07:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:14.298+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1010 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:07:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793474515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.731 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.739 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:07:14 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:14 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1010 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:14 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1793474515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.756 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.758 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:07:14 compute-0 nova_compute[253661]: 2025-11-22 10:07:14.758 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:07:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:15.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:15 compute-0 ceph-mon[75021]: pgmap v3167: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:15 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:16.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:16 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:16 compute-0 nova_compute[253661]: 2025-11-22 10:07:16.759 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:16 compute-0 nova_compute[253661]: 2025-11-22 10:07:16.760 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:16 compute-0 nova_compute[253661]: 2025-11-22 10:07:16.760 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:07:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:17.349+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:17 compute-0 ceph-mon[75021]: pgmap v3168: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:17 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:17 compute-0 nova_compute[253661]: 2025-11-22 10:07:17.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:18.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:18 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:19.340+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1015 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:19 compute-0 nova_compute[253661]: 2025-11-22 10:07:19.700 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:19 compute-0 ceph-mon[75021]: pgmap v3169: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:07:19 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:19 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1015 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:20.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:20 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:21.295+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:21 compute-0 ceph-mon[75021]: pgmap v3170: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:21 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:22.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:22 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:22 compute-0 nova_compute[253661]: 2025-11-22 10:07:22.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:23.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:23 compute-0 ceph-mon[75021]: pgmap v3171: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:23 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:24.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1020 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:24 compute-0 nova_compute[253661]: 2025-11-22 10:07:24.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:24 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:24 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1020 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:25.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:25 compute-0 ceph-mon[75021]: pgmap v3172: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:25 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:26.210+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:26 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:27.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:27 compute-0 ceph-mon[75021]: pgmap v3173: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:27 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:27 compute-0 nova_compute[253661]: 2025-11-22 10:07:27.950 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:07:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:07:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:07:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:28.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:28 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:29.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1025 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:29 compute-0 nova_compute[253661]: 2025-11-22 10:07:29.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:29 compute-0 ceph-mon[75021]: pgmap v3174: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:29 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:29 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1025 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:30.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:30 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:31.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:31 compute-0 ceph-mon[75021]: pgmap v3175: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:31 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:32.219+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:32 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:32 compute-0 nova_compute[253661]: 2025-11-22 10:07:32.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:33.200+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:33 compute-0 ceph-mon[75021]: pgmap v3176: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:33 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:34.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1030 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:34 compute-0 nova_compute[253661]: 2025-11-22 10:07:34.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:34 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:34 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1030 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:35.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:35 compute-0 ceph-mon[75021]: pgmap v3177: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:35 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:36.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:36 compute-0 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 10:07:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:37.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:37 compute-0 ceph-mon[75021]: pgmap v3178: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:37 compute-0 nova_compute[253661]: 2025-11-22 10:07:37.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:38.067+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:39.047+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1034 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:39 compute-0 podman[427660]: 2025-11-22 10:07:39.424820092 +0000 UTC m=+0.098712790 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 10:07:39 compute-0 podman[427661]: 2025-11-22 10:07:39.435568917 +0000 UTC m=+0.103641292 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 10:07:39 compute-0 nova_compute[253661]: 2025-11-22 10:07:39.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:39 compute-0 ceph-mon[75021]: pgmap v3179: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:39 compute-0 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1034 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:40.034+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:41.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:41 compute-0 ceph-mon[75021]: pgmap v3180: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:41.999+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:42 compute-0 podman[427699]: 2025-11-22 10:07:42.437536625 +0000 UTC m=+0.118784205 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 22 10:07:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:42 compute-0 nova_compute[253661]: 2025-11-22 10:07:42.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:43.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:43 compute-0 ceph-mon[75021]: pgmap v3181: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:44.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1039 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:44 compute-0 nova_compute[253661]: 2025-11-22 10:07:44.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:45 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1039 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:45.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:45.997+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:46 compute-0 ceph-mon[75021]: pgmap v3182: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:46.980+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:47 compute-0 nova_compute[253661]: 2025-11-22 10:07:47.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:47.995+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:48 compute-0 ceph-mon[75021]: pgmap v3183: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:49.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1044 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:49 compute-0 nova_compute[253661]: 2025-11-22 10:07:49.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:50.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:50 compute-0 ceph-mon[75021]: pgmap v3184: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:50 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1044 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:51.062+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:52.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:52 compute-0 ceph-mon[75021]: pgmap v3185: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:07:52
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:07:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:52 compute-0 nova_compute[253661]: 2025-11-22 10:07:52.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:53.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:54.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:54 compute-0 ceph-mon[75021]: pgmap v3186: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1049 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:54 compute-0 nova_compute[253661]: 2025-11-22 10:07:54.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:55 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1049 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:55.104+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:56.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:56 compute-0 ceph-mon[75021]: pgmap v3187: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:07:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:57.078+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:57 compute-0 nova_compute[253661]: 2025-11-22 10:07:57.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:07:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:58.092+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:58 compute-0 ceph-mon[75021]: pgmap v3188: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:07:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:59.049+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:07:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:07:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1054 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:07:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:07:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:07:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:07:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:07:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:07:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:07:59 compute-0 nova_compute[253661]: 2025-11-22 10:07:59.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:00.028+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:00 compute-0 ceph-mon[75021]: pgmap v3189: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:00 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1054 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:00 compute-0 sudo[427728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:00 compute-0 sudo[427728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:00 compute-0 sudo[427728]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:00 compute-0 sudo[427753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:08:00 compute-0 sudo[427753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:00 compute-0 sudo[427753]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:00 compute-0 sudo[427778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:00 compute-0 sudo[427778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:00 compute-0 sudo[427778]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:00 compute-0 sudo[427803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 10:08:00 compute-0 sudo[427803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:01.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:01 compute-0 sudo[427803]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:01 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:01 compute-0 sudo[427848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:01 compute-0 sudo[427848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:01 compute-0 sudo[427848]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:01 compute-0 sudo[427873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:08:01 compute-0 sudo[427873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:01 compute-0 sudo[427873]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:01 compute-0 sudo[427898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:01 compute-0 sudo[427898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:01 compute-0 sudo[427898]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:01 compute-0 sudo[427923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:08:01 compute-0 sudo[427923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:01 compute-0 sudo[427923]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 93cf6347-0f18-4eea-b4d0-75699a267ab8 does not exist
Nov 22 10:08:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 571ab854-3a11-4b35-a525-8bf221ecb1ac does not exist
Nov 22 10:08:01 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 58ea6602-25b4-4bc5-b42e-76ab98c848e9 does not exist
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:08:01 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:08:01 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:08:01 compute-0 sudo[427979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:01 compute-0 sudo[427979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:02 compute-0 sudo[427979]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:02 compute-0 sudo[428004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:08:02 compute-0 sudo[428004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:02 compute-0 sudo[428004]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:02.106+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:02 compute-0 sudo[428029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:02 compute-0 sudo[428029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:02 compute-0 sudo[428029]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:02 compute-0 ceph-mon[75021]: pgmap v3190: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:08:02 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:08:02 compute-0 sudo[428054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:08:02 compute-0 sudo[428054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.57374101 +0000 UTC m=+0.048552397 container create f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 10:08:02 compute-0 systemd[1]: Started libpod-conmon-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope.
Nov 22 10:08:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.553515711 +0000 UTC m=+0.028327138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.664066803 +0000 UTC m=+0.138878250 container init f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.672816288 +0000 UTC m=+0.147627705 container start f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.67655821 +0000 UTC m=+0.151369637 container attach f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:02 compute-0 beautiful_bassi[428136]: 167 167
Nov 22 10:08:02 compute-0 systemd[1]: libpod-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope: Deactivated successfully.
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.680667471 +0000 UTC m=+0.155478868 container died f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d086bb52e3fa894aa31699d0f7026a7148b7efd7fa392a88fd78aac367b8a503-merged.mount: Deactivated successfully.
Nov 22 10:08:02 compute-0 podman[428120]: 2025-11-22 10:08:02.721951628 +0000 UTC m=+0.196763025 container remove f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 10:08:02 compute-0 systemd[1]: libpod-conmon-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope: Deactivated successfully.
Nov 22 10:08:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:02 compute-0 podman[428159]: 2025-11-22 10:08:02.892924307 +0000 UTC m=+0.052236478 container create 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:08:02 compute-0 systemd[1]: Started libpod-conmon-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope.
Nov 22 10:08:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:02 compute-0 podman[428159]: 2025-11-22 10:08:02.871714984 +0000 UTC m=+0.031027225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:02 compute-0 podman[428159]: 2025-11-22 10:08:02.973767996 +0000 UTC m=+0.133080177 container init 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:02 compute-0 nova_compute[253661]: 2025-11-22 10:08:02.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:02 compute-0 podman[428159]: 2025-11-22 10:08:02.982469881 +0000 UTC m=+0.141782032 container start 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 10:08:02 compute-0 podman[428159]: 2025-11-22 10:08:02.986423518 +0000 UTC m=+0.145735749 container attach 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:08:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:03.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:08:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:08:04 compute-0 determined_jennings[428176]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:08:04 compute-0 determined_jennings[428176]: --> relative data size: 1.0
Nov 22 10:08:04 compute-0 determined_jennings[428176]: --> All data devices are unavailable
Nov 22 10:08:04 compute-0 systemd[1]: libpod-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Deactivated successfully.
Nov 22 10:08:04 compute-0 podman[428159]: 2025-11-22 10:08:04.100187716 +0000 UTC m=+1.259499877 container died 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:04 compute-0 systemd[1]: libpod-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Consumed 1.067s CPU time.
Nov 22 10:08:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:04.118+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6-merged.mount: Deactivated successfully.
Nov 22 10:08:04 compute-0 ceph-mon[75021]: pgmap v3191: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:04 compute-0 podman[428159]: 2025-11-22 10:08:04.173708095 +0000 UTC m=+1.333020266 container remove 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:08:04 compute-0 systemd[1]: libpod-conmon-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Deactivated successfully.
Nov 22 10:08:04 compute-0 sudo[428054]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:04 compute-0 sudo[428215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:04 compute-0 sudo[428215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:04 compute-0 sudo[428215]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:04 compute-0 sudo[428240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:08:04 compute-0 sudo[428240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:04 compute-0 sudo[428240]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1059 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:04 compute-0 sudo[428265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:04 compute-0 sudo[428265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:04 compute-0 sudo[428265]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:04 compute-0 sudo[428290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:08:04 compute-0 sudo[428290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:04 compute-0 nova_compute[253661]: 2025-11-22 10:08:04.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:04 compute-0 podman[428354]: 2025-11-22 10:08:04.910109093 +0000 UTC m=+0.119599875 container create d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:08:04 compute-0 podman[428354]: 2025-11-22 10:08:04.81412116 +0000 UTC m=+0.023611942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:04 compute-0 systemd[1]: Started libpod-conmon-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope.
Nov 22 10:08:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:05 compute-0 podman[428354]: 2025-11-22 10:08:05.083737947 +0000 UTC m=+0.293228759 container init d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:05 compute-0 podman[428354]: 2025-11-22 10:08:05.092406471 +0000 UTC m=+0.301897243 container start d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:08:05 compute-0 nostalgic_hofstadter[428370]: 167 167
Nov 22 10:08:05 compute-0 systemd[1]: libpod-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope: Deactivated successfully.
Nov 22 10:08:05 compute-0 podman[428354]: 2025-11-22 10:08:05.098683925 +0000 UTC m=+0.308174727 container attach d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 10:08:05 compute-0 podman[428354]: 2025-11-22 10:08:05.099529056 +0000 UTC m=+0.309019838 container died d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:08:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:05.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:05 compute-0 nova_compute[253661]: 2025-11-22 10:08:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:05 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1059 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-83f6e18a69fd230aed3786a645b56df2827ec4a33e25b9e83c32e108ba48b150-merged.mount: Deactivated successfully.
Nov 22 10:08:05 compute-0 podman[428354]: 2025-11-22 10:08:05.444512219 +0000 UTC m=+0.654003001 container remove d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 10:08:05 compute-0 systemd[1]: libpod-conmon-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope: Deactivated successfully.
Nov 22 10:08:05 compute-0 podman[428395]: 2025-11-22 10:08:05.65381891 +0000 UTC m=+0.045055409 container create 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:08:05 compute-0 systemd[1]: Started libpod-conmon-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope.
Nov 22 10:08:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:05 compute-0 podman[428395]: 2025-11-22 10:08:05.636187287 +0000 UTC m=+0.027423806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:05 compute-0 podman[428395]: 2025-11-22 10:08:05.753976847 +0000 UTC m=+0.145213426 container init 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:08:05 compute-0 podman[428395]: 2025-11-22 10:08:05.761087542 +0000 UTC m=+0.152324031 container start 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:08:05 compute-0 podman[428395]: 2025-11-22 10:08:05.764499106 +0000 UTC m=+0.155735615 container attach 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:08:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:06.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:06 compute-0 ceph-mon[75021]: pgmap v3192: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:06 compute-0 hungry_newton[428412]: {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     "0": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "devices": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "/dev/loop3"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             ],
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_name": "ceph_lv0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_size": "21470642176",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "name": "ceph_lv0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "tags": {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_name": "ceph",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.crush_device_class": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.encrypted": "0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_id": "0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.vdo": "0"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             },
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "vg_name": "ceph_vg0"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         }
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     ],
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     "1": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "devices": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "/dev/loop4"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             ],
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_name": "ceph_lv1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_size": "21470642176",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "name": "ceph_lv1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "tags": {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_name": "ceph",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.crush_device_class": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.encrypted": "0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_id": "1",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.vdo": "0"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             },
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "vg_name": "ceph_vg1"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         }
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     ],
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     "2": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "devices": [
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "/dev/loop5"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             ],
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_name": "ceph_lv2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_size": "21470642176",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "name": "ceph_lv2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "tags": {
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.cluster_name": "ceph",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.crush_device_class": "",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.encrypted": "0",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osd_id": "2",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:                 "ceph.vdo": "0"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             },
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "type": "block",
Nov 22 10:08:06 compute-0 hungry_newton[428412]:             "vg_name": "ceph_vg2"
Nov 22 10:08:06 compute-0 hungry_newton[428412]:         }
Nov 22 10:08:06 compute-0 hungry_newton[428412]:     ]
Nov 22 10:08:06 compute-0 hungry_newton[428412]: }
Nov 22 10:08:06 compute-0 systemd[1]: libpod-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope: Deactivated successfully.
Nov 22 10:08:06 compute-0 conmon[428412]: conmon 66aced40f85fb29456ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope/container/memory.events
Nov 22 10:08:06 compute-0 podman[428395]: 2025-11-22 10:08:06.600223998 +0000 UTC m=+0.991460477 container died 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 10:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d-merged.mount: Deactivated successfully.
Nov 22 10:08:06 compute-0 podman[428395]: 2025-11-22 10:08:06.654344741 +0000 UTC m=+1.045581230 container remove 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 10:08:06 compute-0 systemd[1]: libpod-conmon-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope: Deactivated successfully.
Nov 22 10:08:06 compute-0 sudo[428290]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:06 compute-0 sudo[428435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:06 compute-0 sudo[428435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:06 compute-0 sudo[428435]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:06 compute-0 sudo[428460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:08:06 compute-0 sudo[428460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:06 compute-0 sudo[428460]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:06 compute-0 sudo[428485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:06 compute-0 sudo[428485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:06 compute-0 sudo[428485]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:06 compute-0 sudo[428510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:08:06 compute-0 sudo[428510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:07.122+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.320420658 +0000 UTC m=+0.042551019 container create bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 10:08:07 compute-0 systemd[1]: Started libpod-conmon-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope.
Nov 22 10:08:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.301078341 +0000 UTC m=+0.023208712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.397960896 +0000 UTC m=+0.120091257 container init bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.405467492 +0000 UTC m=+0.127597853 container start bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 10:08:07 compute-0 competent_germain[428593]: 167 167
Nov 22 10:08:07 compute-0 systemd[1]: libpod-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope: Deactivated successfully.
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.429446382 +0000 UTC m=+0.151576763 container attach bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.431083752 +0000 UTC m=+0.153214143 container died bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9508e82a3920cac17c415321ddc6013bd6205b78b9605fcf3092c2a07eb33d4f-merged.mount: Deactivated successfully.
Nov 22 10:08:07 compute-0 podman[428577]: 2025-11-22 10:08:07.619958352 +0000 UTC m=+0.342088723 container remove bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:08:07 compute-0 systemd[1]: libpod-conmon-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope: Deactivated successfully.
Nov 22 10:08:07 compute-0 podman[428617]: 2025-11-22 10:08:07.832954845 +0000 UTC m=+0.085062746 container create 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:08:07 compute-0 podman[428617]: 2025-11-22 10:08:07.78807476 +0000 UTC m=+0.040182691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:08:07 compute-0 systemd[1]: Started libpod-conmon-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope.
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:08:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:08.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:08 compute-0 podman[428617]: 2025-11-22 10:08:08.111265006 +0000 UTC m=+0.363372987 container init 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 10:08:08 compute-0 podman[428617]: 2025-11-22 10:08:08.117981601 +0000 UTC m=+0.370089542 container start 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:08:08 compute-0 podman[428617]: 2025-11-22 10:08:08.155973777 +0000 UTC m=+0.408082098 container attach 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:08:08 compute-0 nova_compute[253661]: 2025-11-22 10:08:08.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:08:08 compute-0 ceph-mon[75021]: pgmap v3193: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:09.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:09 compute-0 loving_mayer[428636]: {
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_id": 1,
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "type": "bluestore"
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     },
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_id": 0,
Nov 22 10:08:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:08:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "type": "bluestore"
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     },
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_id": 2,
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:08:09 compute-0 loving_mayer[428636]:         "type": "bluestore"
Nov 22 10:08:09 compute-0 loving_mayer[428636]:     }
Nov 22 10:08:09 compute-0 loving_mayer[428636]: }
Nov 22 10:08:09 compute-0 systemd[1]: libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Deactivated successfully.
Nov 22 10:08:09 compute-0 systemd[1]: libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Consumed 1.037s CPU time.
Nov 22 10:08:09 compute-0 conmon[428636]: conmon 0c0c05a573da376701d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope/container/memory.events
Nov 22 10:08:09 compute-0 podman[428617]: 2025-11-22 10:08:09.161416878 +0000 UTC m=+1.413524799 container died 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064-merged.mount: Deactivated successfully.
Nov 22 10:08:09 compute-0 podman[428617]: 2025-11-22 10:08:09.222993613 +0000 UTC m=+1.475101514 container remove 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:08:09 compute-0 systemd[1]: libpod-conmon-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Deactivated successfully.
Nov 22 10:08:09 compute-0 sudo[428510]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:08:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:08:09 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:09 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5646cf36-dfaf-4332-a99f-8bf538161c10 does not exist
Nov 22 10:08:09 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 382699f5-e949-4023-98c8-7fa703d9973c does not exist
Nov 22 10:08:09 compute-0 sudo[428683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:08:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:09 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:08:09 compute-0 sudo[428683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:09 compute-0 sudo[428683]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1065 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:09 compute-0 sudo[428708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:08:09 compute-0 sudo[428708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:08:09 compute-0 sudo[428708]: pam_unix(sudo:session): session closed for user root
Nov 22 10:08:09 compute-0 nova_compute[253661]: 2025-11-22 10:08:09.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:10.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:10 compute-0 nova_compute[253661]: 2025-11-22 10:08:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:10 compute-0 ceph-mon[75021]: pgmap v3194: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:10 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1065 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:10 compute-0 podman[428733]: 2025-11-22 10:08:10.373181396 +0000 UTC m=+0.066153859 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 10:08:10 compute-0 podman[428734]: 2025-11-22 10:08:10.381253695 +0000 UTC m=+0.073973032 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 22 10:08:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:11.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:12.156+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:12 compute-0 nova_compute[253661]: 2025-11-22 10:08:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:12 compute-0 nova_compute[253661]: 2025-11-22 10:08:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:08:12 compute-0 ceph-mon[75021]: pgmap v3195: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:08:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:08:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:08:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:08:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:13 compute-0 nova_compute[253661]: 2025-11-22 10:08:13.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:13.141+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:08:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:08:13 compute-0 podman[428771]: 2025-11-22 10:08:13.395337493 +0000 UTC m=+0.088852759 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 10:08:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:14.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:08:14 compute-0 ceph-mon[75021]: pgmap v3196: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1069 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:08:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082360209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.885 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.886 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.886 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.887 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:08:14 compute-0 nova_compute[253661]: 2025-11-22 10:08:14.979 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:08:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:15.120+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:15 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1069 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2082360209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:08:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:08:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414982244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:08:15 compute-0 nova_compute[253661]: 2025-11-22 10:08:15.459 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:08:15 compute-0 nova_compute[253661]: 2025-11-22 10:08:15.467 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:08:15 compute-0 nova_compute[253661]: 2025-11-22 10:08:15.485 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:08:15 compute-0 nova_compute[253661]: 2025-11-22 10:08:15.487 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:08:15 compute-0 nova_compute[253661]: 2025-11-22 10:08:15.487 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:08:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:16.088+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:16 compute-0 ceph-mon[75021]: pgmap v3197: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3414982244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:08:16 compute-0 nova_compute[253661]: 2025-11-22 10:08:16.488 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:16 compute-0 nova_compute[253661]: 2025-11-22 10:08:16.489 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:17.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:17 compute-0 nova_compute[253661]: 2025-11-22 10:08:17.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:18.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:18 compute-0 nova_compute[253661]: 2025-11-22 10:08:18.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:18 compute-0 ceph-mon[75021]: pgmap v3198: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:19.015+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1074 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:19 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1074 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:19 compute-0 nova_compute[253661]: 2025-11-22 10:08:19.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:20.037+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:20 compute-0 ceph-mon[75021]: pgmap v3199: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:21.053+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:22.039+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:22 compute-0 ceph-mon[75021]: pgmap v3200: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:23.017+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:23 compute-0 nova_compute[253661]: 2025-11-22 10:08:23.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:24.004+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1079 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:24 compute-0 ceph-mon[75021]: pgmap v3201: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:24 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1079 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:24 compute-0 sshd-session[428841]: Invalid user solana from 92.118.39.92 port 47690
Nov 22 10:08:24 compute-0 sshd-session[428841]: Connection closed by invalid user solana 92.118.39.92 port 47690 [preauth]
Nov 22 10:08:24 compute-0 nova_compute[253661]: 2025-11-22 10:08:24.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:25.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:25 compute-0 nova_compute[253661]: 2025-11-22 10:08:25.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:08:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:26.021+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:26 compute-0 ceph-mon[75021]: pgmap v3202: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:27.052+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:08:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:08:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:08:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:28.034+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:28 compute-0 nova_compute[253661]: 2025-11-22 10:08:28.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:28 compute-0 ceph-mon[75021]: pgmap v3203: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:29.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1084 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:29 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1084 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:29 compute-0 nova_compute[253661]: 2025-11-22 10:08:29.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:29.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:30 compute-0 ceph-mon[75021]: pgmap v3204: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:30.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:31.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:32 compute-0 ceph-mon[75021]: pgmap v3205: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:32.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:33 compute-0 nova_compute[253661]: 2025-11-22 10:08:33.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:33.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1089 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:34 compute-0 ceph-mon[75021]: pgmap v3206: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:34 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1089 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:34 compute-0 nova_compute[253661]: 2025-11-22 10:08:34.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:34.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:35.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:36 compute-0 ceph-mon[75021]: pgmap v3207: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:36.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:37.902+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:38 compute-0 nova_compute[253661]: 2025-11-22 10:08:38.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:38 compute-0 ceph-mon[75021]: pgmap v3208: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:38.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1094 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:39 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1094 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:39 compute-0 nova_compute[253661]: 2025-11-22 10:08:39.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:39.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:40 compute-0 ceph-mon[75021]: pgmap v3209: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:40.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:41 compute-0 podman[428843]: 2025-11-22 10:08:41.378213948 +0000 UTC m=+0.067867411 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:08:41 compute-0 podman[428844]: 2025-11-22 10:08:41.383512529 +0000 UTC m=+0.071997383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:08:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:41.815+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:42 compute-0 ceph-mon[75021]: pgmap v3210: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:42.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:43 compute-0 nova_compute[253661]: 2025-11-22 10:08:43.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:43.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1099 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:44 compute-0 podman[428882]: 2025-11-22 10:08:44.411539179 +0000 UTC m=+0.098551408 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 10:08:44 compute-0 ceph-mon[75021]: pgmap v3211: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:44 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1099 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:44.787+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:44 compute-0 nova_compute[253661]: 2025-11-22 10:08:44.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:45.779+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:46 compute-0 ceph-mon[75021]: pgmap v3212: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:46.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:47.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:48 compute-0 nova_compute[253661]: 2025-11-22 10:08:48.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:48 compute-0 ceph-mon[75021]: pgmap v3213: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:48.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1104 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:49 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1104 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:49.737+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:49 compute-0 nova_compute[253661]: 2025-11-22 10:08:49.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:50 compute-0 ceph-mon[75021]: pgmap v3214: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:50.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:51.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:08:52
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:08:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:52.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:52 compute-0 ceph-mon[75021]: pgmap v3215: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:08:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:53 compute-0 nova_compute[253661]: 2025-11-22 10:08:53.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:53.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1109 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:54.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:54 compute-0 ceph-mon[75021]: pgmap v3216: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:54 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1109 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:54 compute-0 nova_compute[253661]: 2025-11-22 10:08:54.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:55.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:08:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:56.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:56 compute-0 ceph-mon[75021]: pgmap v3217: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:57.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:57 compute-0 ceph-mon[75021]: pgmap v3218: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:58 compute-0 nova_compute[253661]: 2025-11-22 10:08:58.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:08:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:58.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:08:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1114 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:08:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:08:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:08:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:59.641+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:08:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:08:59 compute-0 nova_compute[253661]: 2025-11-22 10:08:59.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:00 compute-0 ceph-mon[75021]: pgmap v3219: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:00 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1114 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:00.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:01.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:02 compute-0 ceph-mon[75021]: pgmap v3220: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:02.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:03 compute-0 nova_compute[253661]: 2025-11-22 10:09:03.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:09:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:09:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:03.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:04 compute-0 ceph-mon[75021]: pgmap v3221: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1119 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:04.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:04 compute-0 nova_compute[253661]: 2025-11-22 10:09:04.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:05 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1119 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:05 compute-0 nova_compute[253661]: 2025-11-22 10:09:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:05.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:06 compute-0 ceph-mon[75021]: pgmap v3222: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:06.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:07.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:08 compute-0 nova_compute[253661]: 2025-11-22 10:09:08.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:08 compute-0 ceph-mon[75021]: pgmap v3223: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:08.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:09 compute-0 nova_compute[253661]: 2025-11-22 10:09:09.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:09 compute-0 nova_compute[253661]: 2025-11-22 10:09:09.233 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:09:09 compute-0 nova_compute[253661]: 2025-11-22 10:09:09.233 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:09:09 compute-0 nova_compute[253661]: 2025-11-22 10:09:09.253 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:09:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1124 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:09 compute-0 sudo[428909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:09 compute-0 sudo[428909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:09 compute-0 sudo[428909]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:09 compute-0 sudo[428934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:09:09 compute-0 sudo[428934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:09 compute-0 sudo[428934]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:09 compute-0 sudo[428959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:09 compute-0 sudo[428959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:09 compute-0 sudo[428959]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:09.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:09 compute-0 sudo[428984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:09:09 compute-0 sudo[428984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:09 compute-0 nova_compute[253661]: 2025-11-22 10:09:09.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:10 compute-0 ceph-mon[75021]: pgmap v3224: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:10 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1124 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:10 compute-0 nova_compute[253661]: 2025-11-22 10:09:10.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:10 compute-0 sudo[428984]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:10 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 61f14f6d-a484-4e55-80d2-778b2f64ecd3 does not exist
Nov 22 10:09:10 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev ff0933eb-fa76-48aa-94f1-5019fe6d55bb does not exist
Nov 22 10:09:10 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f1523180-802f-45be-954c-b3925935dfb4 does not exist
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:09:10 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:09:10 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:09:10 compute-0 sudo[429040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:10 compute-0 sudo[429040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:10 compute-0 sudo[429040]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:10 compute-0 sudo[429065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:09:10 compute-0 sudo[429065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:10 compute-0 sudo[429065]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:10 compute-0 sudo[429090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:10 compute-0 sudo[429090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:10 compute-0 sudo[429090]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:10.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:10 compute-0 sudo[429115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:09:10 compute-0 sudo[429115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.149213423 +0000 UTC m=+0.059403233 container create 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:09:11 compute-0 systemd[1]: Started libpod-conmon-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope.
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.122828613 +0000 UTC m=+0.033018403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:09:11 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.253180981 +0000 UTC m=+0.163370801 container init 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.268216771 +0000 UTC m=+0.178406551 container start 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.273126452 +0000 UTC m=+0.183316232 container attach 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:09:11 compute-0 great_kepler[429196]: 167 167
Nov 22 10:09:11 compute-0 systemd[1]: libpod-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope: Deactivated successfully.
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.281410976 +0000 UTC m=+0.191600816 container died 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-41979220069ae723e411bdc927a10b4cc53c018b81dc0c4a82599fa3119e4437-merged.mount: Deactivated successfully.
Nov 22 10:09:11 compute-0 podman[429180]: 2025-11-22 10:09:11.337717362 +0000 UTC m=+0.247907132 container remove 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 10:09:11 compute-0 systemd[1]: libpod-conmon-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope: Deactivated successfully.
Nov 22 10:09:11 compute-0 podman[429220]: 2025-11-22 10:09:11.528132697 +0000 UTC m=+0.056931822 container create 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 10:09:11 compute-0 systemd[1]: Started libpod-conmon-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope.
Nov 22 10:09:11 compute-0 podman[429220]: 2025-11-22 10:09:11.503966973 +0000 UTC m=+0.032766188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:11 compute-0 podman[429220]: 2025-11-22 10:09:11.659237333 +0000 UTC m=+0.188036518 container init 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 10:09:11 compute-0 podman[429220]: 2025-11-22 10:09:11.668565163 +0000 UTC m=+0.197364288 container start 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:09:11 compute-0 podman[429220]: 2025-11-22 10:09:11.672413487 +0000 UTC m=+0.201212642 container attach 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:09:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:11.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:11 compute-0 podman[429235]: 2025-11-22 10:09:11.682937036 +0000 UTC m=+0.105344823 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:09:11 compute-0 podman[429236]: 2025-11-22 10:09:11.688589975 +0000 UTC m=+0.109568187 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Nov 22 10:09:12 compute-0 nova_compute[253661]: 2025-11-22 10:09:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:12 compute-0 ceph-mon[75021]: pgmap v3225: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:09:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:09:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:09:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:09:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:12.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:12 compute-0 fervent_bassi[429249]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:09:12 compute-0 fervent_bassi[429249]: --> relative data size: 1.0
Nov 22 10:09:12 compute-0 fervent_bassi[429249]: --> All data devices are unavailable
Nov 22 10:09:12 compute-0 systemd[1]: libpod-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Deactivated successfully.
Nov 22 10:09:12 compute-0 systemd[1]: libpod-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Consumed 1.108s CPU time.
Nov 22 10:09:12 compute-0 podman[429220]: 2025-11-22 10:09:12.836824071 +0000 UTC m=+1.365623206 container died 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047-merged.mount: Deactivated successfully.
Nov 22 10:09:12 compute-0 podman[429220]: 2025-11-22 10:09:12.892742086 +0000 UTC m=+1.421541211 container remove 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 10:09:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:12 compute-0 systemd[1]: libpod-conmon-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Deactivated successfully.
Nov 22 10:09:12 compute-0 sudo[429115]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:13 compute-0 sudo[429312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:13 compute-0 sudo[429312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:13 compute-0 sudo[429312]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:13 compute-0 sudo[429337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:09:13 compute-0 sudo[429337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:13 compute-0 sudo[429337]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:13 compute-0 sudo[429362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:13 compute-0 sudo[429362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:13 compute-0 sudo[429362]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:13 compute-0 sudo[429387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:09:13 compute-0 sudo[429387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:13 compute-0 nova_compute[253661]: 2025-11-22 10:09:13.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:09:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.654800119 +0000 UTC m=+0.064740394 container create 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:09:13 compute-0 systemd[1]: Started libpod-conmon-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope.
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.628774869 +0000 UTC m=+0.038715184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:13.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.76617225 +0000 UTC m=+0.176112555 container init 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.780087083 +0000 UTC m=+0.190027378 container start 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.784556163 +0000 UTC m=+0.194496548 container attach 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 10:09:13 compute-0 eager_hodgkin[429469]: 167 167
Nov 22 10:09:13 compute-0 systemd[1]: libpod-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope: Deactivated successfully.
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.787707961 +0000 UTC m=+0.197648266 container died 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-740dfc7813fb39898479be9c89458311a9e8922889762ccd679705726f2d40b4-merged.mount: Deactivated successfully.
Nov 22 10:09:13 compute-0 podman[429453]: 2025-11-22 10:09:13.845946103 +0000 UTC m=+0.255886398 container remove 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 22 10:09:13 compute-0 systemd[1]: libpod-conmon-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope: Deactivated successfully.
Nov 22 10:09:14 compute-0 podman[429495]: 2025-11-22 10:09:14.068484459 +0000 UTC m=+0.051548499 container create 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:09:14 compute-0 systemd[1]: Started libpod-conmon-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope.
Nov 22 10:09:14 compute-0 podman[429495]: 2025-11-22 10:09:14.046643122 +0000 UTC m=+0.029707152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:14 compute-0 podman[429495]: 2025-11-22 10:09:14.185456728 +0000 UTC m=+0.168520838 container init 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:09:14 compute-0 podman[429495]: 2025-11-22 10:09:14.195344771 +0000 UTC m=+0.178408841 container start 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:09:14 compute-0 podman[429495]: 2025-11-22 10:09:14.199720269 +0000 UTC m=+0.182784329 container attach 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:09:14 compute-0 nova_compute[253661]: 2025-11-22 10:09:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:14 compute-0 nova_compute[253661]: 2025-11-22 10:09:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:09:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1129 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:14 compute-0 ceph-mon[75021]: pgmap v3226: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:14 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1129 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:14.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:14 compute-0 nova_compute[253661]: 2025-11-22 10:09:14.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:15 compute-0 vibrant_keller[429512]: {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     "0": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "devices": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "/dev/loop3"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             ],
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_name": "ceph_lv0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_size": "21470642176",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "name": "ceph_lv0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "tags": {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_name": "ceph",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.crush_device_class": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.encrypted": "0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_id": "0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.vdo": "0"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             },
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "vg_name": "ceph_vg0"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         }
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     ],
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     "1": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "devices": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "/dev/loop4"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             ],
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_name": "ceph_lv1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_size": "21470642176",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "name": "ceph_lv1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "tags": {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_name": "ceph",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.crush_device_class": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.encrypted": "0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_id": "1",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.vdo": "0"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             },
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "vg_name": "ceph_vg1"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         }
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     ],
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     "2": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "devices": [
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "/dev/loop5"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             ],
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_name": "ceph_lv2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_size": "21470642176",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "name": "ceph_lv2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "tags": {
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.cluster_name": "ceph",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.crush_device_class": "",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.encrypted": "0",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osd_id": "2",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:                 "ceph.vdo": "0"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             },
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "type": "block",
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:             "vg_name": "ceph_vg2"
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:         }
Nov 22 10:09:15 compute-0 vibrant_keller[429512]:     ]
Nov 22 10:09:15 compute-0 vibrant_keller[429512]: }
Nov 22 10:09:15 compute-0 systemd[1]: libpod-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope: Deactivated successfully.
Nov 22 10:09:15 compute-0 podman[429495]: 2025-11-22 10:09:15.04423574 +0000 UTC m=+1.027299780 container died 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 10:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994-merged.mount: Deactivated successfully.
Nov 22 10:09:15 compute-0 podman[429495]: 2025-11-22 10:09:15.114461798 +0000 UTC m=+1.097525808 container remove 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:09:15 compute-0 systemd[1]: libpod-conmon-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope: Deactivated successfully.
Nov 22 10:09:15 compute-0 sudo[429387]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:15 compute-0 podman[429522]: 2025-11-22 10:09:15.202030183 +0000 UTC m=+0.123863669 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:15 compute-0 sudo[429552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:15 compute-0 sudo[429552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:15 compute-0 sudo[429552]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:09:15 compute-0 sudo[429585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:09:15 compute-0 sudo[429585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:15 compute-0 sudo[429585]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:15 compute-0 sudo[429611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:15 compute-0 sudo[429611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:15 compute-0 sudo[429611]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:15 compute-0 sudo[429636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:09:15 compute-0 sudo[429636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:09:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728628970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:09:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:15.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.733 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:09:15 compute-0 podman[429722]: 2025-11-22 10:09:15.929235387 +0000 UTC m=+0.071184173 container create 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.978 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:09:15 compute-0 systemd[1]: Started libpod-conmon-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope.
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.980 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.980 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:09:15 compute-0 nova_compute[253661]: 2025-11-22 10:09:15.981 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:09:15 compute-0 podman[429722]: 2025-11-22 10:09:15.900247484 +0000 UTC m=+0.042196360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:16 compute-0 podman[429722]: 2025-11-22 10:09:16.042492104 +0000 UTC m=+0.184440910 container init 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:09:16 compute-0 podman[429722]: 2025-11-22 10:09:16.051994618 +0000 UTC m=+0.193943444 container start 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 10:09:16 compute-0 podman[429722]: 2025-11-22 10:09:16.058481337 +0000 UTC m=+0.200430183 container attach 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:09:16 compute-0 systemd[1]: libpod-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope: Deactivated successfully.
Nov 22 10:09:16 compute-0 focused_galileo[429739]: 167 167
Nov 22 10:09:16 compute-0 conmon[429739]: conmon 84ee42a81af4bf0517fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope/container/memory.events
Nov 22 10:09:16 compute-0 podman[429722]: 2025-11-22 10:09:16.06301727 +0000 UTC m=+0.204966066 container died 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.079 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.081 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fbd1cdd32785c1f705ec3285e3215f100dc6f4088ff566f6ba43def69a82683-merged.mount: Deactivated successfully.
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.105 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:09:16 compute-0 podman[429722]: 2025-11-22 10:09:16.109221547 +0000 UTC m=+0.251170343 container remove 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 22 10:09:16 compute-0 systemd[1]: libpod-conmon-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope: Deactivated successfully.
Nov 22 10:09:16 compute-0 podman[429762]: 2025-11-22 10:09:16.298691859 +0000 UTC m=+0.050268918 container create 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 10:09:16 compute-0 systemd[1]: Started libpod-conmon-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope.
Nov 22 10:09:16 compute-0 podman[429762]: 2025-11-22 10:09:16.275930509 +0000 UTC m=+0.027507558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:09:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:09:16 compute-0 podman[429762]: 2025-11-22 10:09:16.398189637 +0000 UTC m=+0.149766696 container init 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 10:09:16 compute-0 podman[429762]: 2025-11-22 10:09:16.411629288 +0000 UTC m=+0.163206297 container start 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:09:16 compute-0 podman[429762]: 2025-11-22 10:09:16.415348639 +0000 UTC m=+0.166925698 container attach 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 10:09:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:16 compute-0 ceph-mon[75021]: pgmap v3227: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:16 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3728628970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:09:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:09:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278779866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.628 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.636 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.660 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.662 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:09:16 compute-0 nova_compute[253661]: 2025-11-22 10:09:16.662 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:09:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]: {
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_id": 1,
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "type": "bluestore"
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     },
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_id": 0,
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "type": "bluestore"
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     },
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_id": 2,
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:         "type": "bluestore"
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]:     }
Nov 22 10:09:17 compute-0 relaxed_chaum[429797]: }
Nov 22 10:09:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4278779866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:09:17 compute-0 systemd[1]: libpod-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Deactivated successfully.
Nov 22 10:09:17 compute-0 systemd[1]: libpod-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Consumed 1.117s CPU time.
Nov 22 10:09:17 compute-0 podman[429762]: 2025-11-22 10:09:17.523735744 +0000 UTC m=+1.275312773 container died 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70-merged.mount: Deactivated successfully.
Nov 22 10:09:17 compute-0 podman[429762]: 2025-11-22 10:09:17.590287441 +0000 UTC m=+1.341864470 container remove 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 10:09:17 compute-0 systemd[1]: libpod-conmon-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Deactivated successfully.
Nov 22 10:09:17 compute-0 sudo[429636]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:09:17 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:09:17 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:17 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 27cb19db-9765-47ec-bde2-aaa3c1a85c55 does not exist
Nov 22 10:09:17 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a6723859-f3f8-46f4-98fe-b2ee0043da17 does not exist
Nov 22 10:09:17 compute-0 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:17 compute-0 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:17 compute-0 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:17.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:17 compute-0 sudo[429843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:09:17 compute-0 sudo[429843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:17 compute-0 sudo[429843]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:17 compute-0 sudo[429868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:09:17 compute-0 sudo[429868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:09:17 compute-0 sudo[429868]: pam_unix(sudo:session): session closed for user root
Nov 22 10:09:18 compute-0 nova_compute[253661]: 2025-11-22 10:09:18.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:18 compute-0 ceph-mon[75021]: pgmap v3228: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:18 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:18 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:09:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:18.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1134 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:19 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1134 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:19 compute-0 nova_compute[253661]: 2025-11-22 10:09:19.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:20 compute-0 ceph-mon[75021]: pgmap v3229: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:20.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:21.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:22 compute-0 ceph-mon[75021]: pgmap v3230: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:22.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:23 compute-0 nova_compute[253661]: 2025-11-22 10:09:23.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:23.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:24 compute-0 nova_compute[253661]: 2025-11-22 10:09:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1139 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:24 compute-0 ceph-mon[75021]: pgmap v3231: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:24 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1139 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:24.636+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:24 compute-0 nova_compute[253661]: 2025-11-22 10:09:24.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:25.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:26 compute-0 ceph-mon[75021]: pgmap v3232: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:26.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.606966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167607003, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 251, "total_data_size": 2744548, "memory_usage": 2796888, "flush_reason": "Manual Compaction"}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 22 10:09:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:27.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167632438, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2667871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69135, "largest_seqno": 71302, "table_properties": {"data_size": 2659095, "index_size": 4949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22940, "raw_average_key_size": 21, "raw_value_size": 2639311, "raw_average_value_size": 2443, "num_data_blocks": 217, "num_entries": 1080, "num_filter_entries": 1080, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806006, "oldest_key_time": 1763806006, "file_creation_time": 1763806167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 25550 microseconds, and 8784 cpu microseconds.
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.632503) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2667871 bytes OK
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.632536) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.633998) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.634015) EVENT_LOG_v1 {"time_micros": 1763806167634010, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.634035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2735145, prev total WAL file size 2735145, number of live WAL files 2.
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.635545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2605KB)], [164(8475KB)]
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167635613, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11346819, "oldest_snapshot_seqno": -1}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 9795 keys, 9918981 bytes, temperature: kUnknown
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167687274, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9918981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9859973, "index_size": 33423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 259777, "raw_average_key_size": 26, "raw_value_size": 9690962, "raw_average_value_size": 989, "num_data_blocks": 1265, "num_entries": 9795, "num_filter_entries": 9795, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.687582) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9918981 bytes
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.688777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.2 rd, 191.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.3 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 10309, records dropped: 514 output_compression: NoCompression
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.688791) EVENT_LOG_v1 {"time_micros": 1763806167688784, "job": 102, "event": "compaction_finished", "compaction_time_micros": 51776, "compaction_time_cpu_micros": 33425, "output_level": 6, "num_output_files": 1, "total_output_size": 9918981, "num_input_records": 10309, "num_output_records": 9795, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167689424, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167691035, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.635437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:27 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:09:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:09:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:09:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:09:28 compute-0 nova_compute[253661]: 2025-11-22 10:09:28.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:28 compute-0 ceph-mon[75021]: pgmap v3233: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:28.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1144 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:29 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1144 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:29.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:29 compute-0 nova_compute[253661]: 2025-11-22 10:09:29.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:30.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:30 compute-0 ceph-mon[75021]: pgmap v3234: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:31 compute-0 nova_compute[253661]: 2025-11-22 10:09:31.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:31 compute-0 nova_compute[253661]: 2025-11-22 10:09:31.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 10:09:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:31.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:32.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:32 compute-0 ceph-mon[75021]: pgmap v3235: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:33 compute-0 nova_compute[253661]: 2025-11-22 10:09:33.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:33.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1149 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:34 compute-0 ceph-mon[75021]: pgmap v3236: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:34 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1149 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:34.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:34 compute-0 nova_compute[253661]: 2025-11-22 10:09:34.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:35.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:36.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:36 compute-0 ceph-mon[75021]: pgmap v3237: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:37.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:38 compute-0 nova_compute[253661]: 2025-11-22 10:09:38.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:38.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:38 compute-0 ceph-mon[75021]: pgmap v3238: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1154 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:39.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:39 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1154 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:39 compute-0 nova_compute[253661]: 2025-11-22 10:09:39.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:40.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:40 compute-0 ceph-mon[75021]: pgmap v3239: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:41.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:42 compute-0 podman[429893]: 2025-11-22 10:09:42.419313173 +0000 UTC m=+0.100841252 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:09:42 compute-0 podman[429894]: 2025-11-22 10:09:42.429509004 +0000 UTC m=+0.110770937 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 10:09:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:42.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:42 compute-0 ceph-mon[75021]: pgmap v3240: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:43 compute-0 nova_compute[253661]: 2025-11-22 10:09:43.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:43.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1159 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:44.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:44 compute-0 ceph-mon[75021]: pgmap v3241: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:44 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1159 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:44 compute-0 nova_compute[253661]: 2025-11-22 10:09:44.888 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:45 compute-0 podman[429929]: 2025-11-22 10:09:45.397409316 +0000 UTC m=+0.090696643 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 10:09:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:45.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:46.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:46 compute-0 ceph-mon[75021]: pgmap v3242: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:47.481+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:47 compute-0 ceph-mon[75021]: pgmap v3243: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:48 compute-0 nova_compute[253661]: 2025-11-22 10:09:48.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:48.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1165 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:49.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:49 compute-0 ceph-mon[75021]: pgmap v3244: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:49 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1165 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:49 compute-0 nova_compute[253661]: 2025-11-22 10:09:49.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:50.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:51.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:51 compute-0 ceph-mon[75021]: pgmap v3245: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:09:52
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'backups', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root']
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:09:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:52.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:09:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:53 compute-0 nova_compute[253661]: 2025-11-22 10:09:53.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:53.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:53 compute-0 ceph-mon[75021]: pgmap v3246: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1169 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:54.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:54 compute-0 nova_compute[253661]: 2025-11-22 10:09:54.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:54 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1169 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:55.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:55 compute-0 ceph-mon[75021]: pgmap v3247: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:09:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:56.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:57.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:58 compute-0 ceph-mon[75021]: pgmap v3248: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:58 compute-0 nova_compute[253661]: 2025-11-22 10:09:58.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:09:58 compute-0 nova_compute[253661]: 2025-11-22 10:09:58.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 10:09:58 compute-0 nova_compute[253661]: 2025-11-22 10:09:58.262 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 10:09:58 compute-0 nova_compute[253661]: 2025-11-22 10:09:58.348 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:09:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:58.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:09:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1174 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:09:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:09:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:09:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:09:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:09:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:09:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:09:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:59.656+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:09:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:09:59 compute-0 nova_compute[253661]: 2025-11-22 10:09:59.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:00 compute-0 ceph-mon[75021]: pgmap v3249: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:00 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1174 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:00.696+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:01.705+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:02 compute-0 ceph-mon[75021]: pgmap v3250: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:02.748+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:10:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:10:03 compute-0 nova_compute[253661]: 2025-11-22 10:10:03.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:03.698+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:04 compute-0 ceph-mon[75021]: pgmap v3251: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1179 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:04.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:04 compute-0 nova_compute[253661]: 2025-11-22 10:10:04.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:05 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1179 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:05.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:06 compute-0 ceph-mon[75021]: pgmap v3252: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:06 compute-0 nova_compute[253661]: 2025-11-22 10:10:06.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:06.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:07.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:08 compute-0 ceph-mon[75021]: pgmap v3253: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:08 compute-0 nova_compute[253661]: 2025-11-22 10:10:08.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:08.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1184 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:09.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:09 compute-0 nova_compute[253661]: 2025-11-22 10:10:09.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:10 compute-0 ceph-mon[75021]: pgmap v3254: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:10 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1184 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:10.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:11 compute-0 nova_compute[253661]: 2025-11-22 10:10:11.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:11 compute-0 nova_compute[253661]: 2025-11-22 10:10:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:10:11 compute-0 nova_compute[253661]: 2025-11-22 10:10:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:10:11 compute-0 nova_compute[253661]: 2025-11-22 10:10:11.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:10:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:11.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:12 compute-0 ceph-mon[75021]: pgmap v3255: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:12 compute-0 nova_compute[253661]: 2025-11-22 10:10:12.234 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:10:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:10:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:10:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:10:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:12.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:10:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:10:13 compute-0 podman[429956]: 2025-11-22 10:10:13.384664711 +0000 UTC m=+0.072357552 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 10:10:13 compute-0 nova_compute[253661]: 2025-11-22 10:10:13.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:13 compute-0 podman[429955]: 2025-11-22 10:10:13.41429309 +0000 UTC m=+0.099629732 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:10:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:13.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:14 compute-0 ceph-mon[75021]: pgmap v3256: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:14 compute-0 nova_compute[253661]: 2025-11-22 10:10:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:14 compute-0 nova_compute[253661]: 2025-11-22 10:10:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:14 compute-0 nova_compute[253661]: 2025-11-22 10:10:14.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:10:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1189 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:14.744+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:14 compute-0 nova_compute[253661]: 2025-11-22 10:10:14.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:15 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1189 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:15.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:16 compute-0 ceph-mon[75021]: pgmap v3257: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:10:16 compute-0 podman[429995]: 2025-11-22 10:10:16.448223068 +0000 UTC m=+0.135462215 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:10:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:16 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:10:16 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613545980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.726 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.871 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:10:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:16 compute-0 nova_compute[253661]: 2025-11-22 10:10:16.945 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:10:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:17 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2613545980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:10:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:10:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138327405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:10:17 compute-0 nova_compute[253661]: 2025-11-22 10:10:17.411 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:10:17 compute-0 nova_compute[253661]: 2025-11-22 10:10:17.416 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:10:17 compute-0 nova_compute[253661]: 2025-11-22 10:10:17.427 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:10:17 compute-0 nova_compute[253661]: 2025-11-22 10:10:17.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:10:17 compute-0 nova_compute[253661]: 2025-11-22 10:10:17.428 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:10:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:17.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:17 compute-0 sudo[430066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:17 compute-0 sudo[430066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:17 compute-0 sudo[430066]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:17 compute-0 sudo[430091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:10:17 compute-0 sudo[430091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:17 compute-0 sudo[430091]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:18 compute-0 sudo[430116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:18 compute-0 sudo[430116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:18 compute-0 sudo[430116]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:18 compute-0 sudo[430141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 10:10:18 compute-0 sudo[430141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:18 compute-0 ceph-mon[75021]: pgmap v3258: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4138327405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:10:18 compute-0 nova_compute[253661]: 2025-11-22 10:10:18.405 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:18 compute-0 nova_compute[253661]: 2025-11-22 10:10:18.427 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:18 compute-0 nova_compute[253661]: 2025-11-22 10:10:18.428 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:18 compute-0 podman[430239]: 2025-11-22 10:10:18.638596326 +0000 UTC m=+0.071343416 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:10:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:18.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:18 compute-0 podman[430239]: 2025-11-22 10:10:18.74478275 +0000 UTC m=+0.177529870 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 10:10:18 compute-0 sshd-session[430260]: Accepted publickey for zuul from 192.168.122.30 port 41716 ssh2: ECDSA SHA256:tikRPC42/ncVfP2lnh0iO6vjJo8w9amYgweJm9+SStg
Nov 22 10:10:18 compute-0 systemd-logind[822]: New session 52 of user zuul.
Nov 22 10:10:18 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 22 10:10:18 compute-0 sshd-session[430260]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 10:10:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1194 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:19 compute-0 sudo[430141]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:10:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:10:19 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:19 compute-0 sudo[430446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:19 compute-0 sudo[430446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:19 compute-0 sudo[430446]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:19 compute-0 sudo[430471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:10:19 compute-0 sudo[430471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:19 compute-0 sudo[430471]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:19.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:19 compute-0 sudo[430496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:19 compute-0 sudo[430496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:19 compute-0 sudo[430496]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:19 compute-0 sudo[430521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:10:19 compute-0 sudo[430521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:19 compute-0 nova_compute[253661]: 2025-11-22 10:10:19.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:20 compute-0 ceph-mon[75021]: pgmap v3259: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:20 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1194 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:20 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:20 compute-0 sudo[430521]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 868cfc09-154e-4c25-b7c2-6c5a9ded3464 does not exist
Nov 22 10:10:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fea7bea6-8db4-4f67-bff0-0023af980dd1 does not exist
Nov 22 10:10:20 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2a7e325e-9e9a-427d-adb4-fd5ef77b5657 does not exist
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:10:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:10:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:10:20 compute-0 sudo[430577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:20 compute-0 sudo[430577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:20 compute-0 sudo[430577]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:20 compute-0 sudo[430602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:10:20 compute-0 sudo[430602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:20 compute-0 sudo[430602]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:20 compute-0 sudo[430627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:20 compute-0 sudo[430627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:20 compute-0 sudo[430627]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:20 compute-0 sudo[430652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:10:20 compute-0 sudo[430652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:20.733+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.078017734 +0000 UTC m=+0.053224701 container create 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:10:21 compute-0 systemd[1]: Started libpod-conmon-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope.
Nov 22 10:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.051208024 +0000 UTC m=+0.026415051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.160165806 +0000 UTC m=+0.135372763 container init 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.168222494 +0000 UTC m=+0.143429431 container start 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.171436133 +0000 UTC m=+0.146643110 container attach 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:10:21 compute-0 pensive_napier[430734]: 167 167
Nov 22 10:10:21 compute-0 systemd[1]: libpod-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope: Deactivated successfully.
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.176510698 +0000 UTC m=+0.151717645 container died 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:10:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:10:21 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91284a34d2f5545cfd9102731c75de39e6b48697204587062a45be84514372bb-merged.mount: Deactivated successfully.
Nov 22 10:10:21 compute-0 podman[430717]: 2025-11-22 10:10:21.225030701 +0000 UTC m=+0.200237678 container remove 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 10:10:21 compute-0 systemd[1]: libpod-conmon-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope: Deactivated successfully.
Nov 22 10:10:21 compute-0 podman[430757]: 2025-11-22 10:10:21.413427038 +0000 UTC m=+0.049137700 container create 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:10:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:21.431 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 10:10:21 compute-0 nova_compute[253661]: 2025-11-22 10:10:21.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:21 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:21.432 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 10:10:21 compute-0 systemd[1]: Started libpod-conmon-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope.
Nov 22 10:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:21 compute-0 podman[430757]: 2025-11-22 10:10:21.393572039 +0000 UTC m=+0.029282741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:21 compute-0 podman[430757]: 2025-11-22 10:10:21.494109882 +0000 UTC m=+0.129820574 container init 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:10:21 compute-0 podman[430757]: 2025-11-22 10:10:21.500687135 +0000 UTC m=+0.136397797 container start 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 10:10:21 compute-0 podman[430757]: 2025-11-22 10:10:21.504069798 +0000 UTC m=+0.139780470 container attach 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 10:10:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:21.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:22 compute-0 ceph-mon[75021]: pgmap v3260: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:22 compute-0 cranky_franklin[430773]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:10:22 compute-0 cranky_franklin[430773]: --> relative data size: 1.0
Nov 22 10:10:22 compute-0 cranky_franklin[430773]: --> All data devices are unavailable
Nov 22 10:10:22 compute-0 systemd[1]: libpod-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope: Deactivated successfully.
Nov 22 10:10:22 compute-0 podman[430757]: 2025-11-22 10:10:22.525460982 +0000 UTC m=+1.161171634 container died 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3-merged.mount: Deactivated successfully.
Nov 22 10:10:22 compute-0 podman[430757]: 2025-11-22 10:10:22.58956918 +0000 UTC m=+1.225279832 container remove 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:10:22 compute-0 systemd[1]: libpod-conmon-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope: Deactivated successfully.
Nov 22 10:10:22 compute-0 sudo[430652]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:22 compute-0 sudo[430998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:22 compute-0 sudo[430998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:22 compute-0 sudo[430998]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:22.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:22 compute-0 sudo[431023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:10:22 compute-0 sudo[431023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:22 compute-0 sudo[431023]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:22 compute-0 sudo[431048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:22 compute-0 sudo[431048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:22 compute-0 sudo[431048]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:22 compute-0 sudo[431073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:10:22 compute-0 sudo[431073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.129253339 +0000 UTC m=+0.037324848 container create f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:10:23 compute-0 systemd[1]: Started libpod-conmon-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope.
Nov 22 10:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.112167239 +0000 UTC m=+0.020238768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.208342036 +0000 UTC m=+0.116413565 container init f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.216490957 +0000 UTC m=+0.124562486 container start f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.221202142 +0000 UTC m=+0.129273681 container attach f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 10:10:23 compute-0 eager_shannon[431154]: 167 167
Nov 22 10:10:23 compute-0 systemd[1]: libpod-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope: Deactivated successfully.
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.223833287 +0000 UTC m=+0.131904796 container died f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33bfac015c1cd2f5d048bb2beced284730ca087e74123a17d304800a7a69cec-merged.mount: Deactivated successfully.
Nov 22 10:10:23 compute-0 podman[431138]: 2025-11-22 10:10:23.260085289 +0000 UTC m=+0.168156798 container remove f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 10:10:23 compute-0 systemd[1]: libpod-conmon-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope: Deactivated successfully.
Nov 22 10:10:23 compute-0 nova_compute[253661]: 2025-11-22 10:10:23.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:23 compute-0 podman[431176]: 2025-11-22 10:10:23.441788111 +0000 UTC m=+0.049886579 container create 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:10:23 compute-0 systemd[1]: Started libpod-conmon-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope.
Nov 22 10:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:23 compute-0 podman[431176]: 2025-11-22 10:10:23.422952217 +0000 UTC m=+0.031050695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:23 compute-0 podman[431176]: 2025-11-22 10:10:23.525638653 +0000 UTC m=+0.133737141 container init 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:10:23 compute-0 podman[431176]: 2025-11-22 10:10:23.540442538 +0000 UTC m=+0.148540996 container start 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:10:23 compute-0 podman[431176]: 2025-11-22 10:10:23.54379994 +0000 UTC m=+0.151898428 container attach 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:10:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:24 compute-0 stupefied_buck[431193]: {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     "0": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "devices": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "/dev/loop3"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             ],
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_name": "ceph_lv0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_size": "21470642176",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "name": "ceph_lv0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "tags": {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_name": "ceph",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.crush_device_class": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.encrypted": "0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_id": "0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.vdo": "0"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             },
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "vg_name": "ceph_vg0"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         }
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     ],
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     "1": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "devices": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "/dev/loop4"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             ],
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_name": "ceph_lv1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_size": "21470642176",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "name": "ceph_lv1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "tags": {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_name": "ceph",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.crush_device_class": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.encrypted": "0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_id": "1",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.vdo": "0"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             },
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "vg_name": "ceph_vg1"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         }
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     ],
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     "2": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "devices": [
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "/dev/loop5"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             ],
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_name": "ceph_lv2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_size": "21470642176",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "name": "ceph_lv2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "tags": {
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.cluster_name": "ceph",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.crush_device_class": "",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.encrypted": "0",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osd_id": "2",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:                 "ceph.vdo": "0"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             },
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "type": "block",
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:             "vg_name": "ceph_vg2"
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:         }
Nov 22 10:10:24 compute-0 stupefied_buck[431193]:     ]
Nov 22 10:10:24 compute-0 stupefied_buck[431193]: }
Nov 22 10:10:24 compute-0 systemd[1]: libpod-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope: Deactivated successfully.
Nov 22 10:10:24 compute-0 podman[431176]: 2025-11-22 10:10:24.353980137 +0000 UTC m=+0.962078615 container died 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4-merged.mount: Deactivated successfully.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1199 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.414176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224414235, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 899, "num_deletes": 256, "total_data_size": 938323, "memory_usage": 955944, "flush_reason": "Manual Compaction"}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224424138, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 924177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71303, "largest_seqno": 72201, "table_properties": {"data_size": 919967, "index_size": 1733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10861, "raw_average_key_size": 20, "raw_value_size": 910783, "raw_average_value_size": 1677, "num_data_blocks": 77, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806168, "oldest_key_time": 1763806168, "file_creation_time": 1763806224, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 10040 microseconds, and 4205 cpu microseconds.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.424213) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 924177 bytes OK
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.424239) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428500) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428519) EVENT_LOG_v1 {"time_micros": 1763806224428513, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 933790, prev total WAL file size 933790, number of live WAL files 2.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.429240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323634' seq:72057594037927935, type:22 .. '6C6F676D0033353137' seq:0, type:0; will stop at (end)
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(902KB)], [167(9686KB)]
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224429381, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10843158, "oldest_snapshot_seqno": -1}
Nov 22 10:10:24 compute-0 podman[431176]: 2025-11-22 10:10:24.444838483 +0000 UTC m=+1.052936941 container remove 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:10:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:24 compute-0 ceph-mon[75021]: pgmap v3261: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:24 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1199 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:24 compute-0 systemd[1]: libpod-conmon-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope: Deactivated successfully.
Nov 22 10:10:24 compute-0 sudo[431073]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9814 keys, 10708418 bytes, temperature: kUnknown
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224487164, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10708418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10648215, "index_size": 34554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 261504, "raw_average_key_size": 26, "raw_value_size": 10477697, "raw_average_value_size": 1067, "num_data_blocks": 1310, "num_entries": 9814, "num_filter_entries": 9814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806224, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.487464) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10708418 bytes
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.488985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 185.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 10338, records dropped: 524 output_compression: NoCompression
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.489006) EVENT_LOG_v1 {"time_micros": 1763806224488997, "job": 104, "event": "compaction_finished", "compaction_time_micros": 57847, "compaction_time_cpu_micros": 34740, "output_level": 6, "num_output_files": 1, "total_output_size": 10708418, "num_input_records": 10338, "num_output_records": 9814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224489307, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224491630, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.429137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:10:24 compute-0 sudo[431214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:24 compute-0 sudo[431214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:24 compute-0 sudo[431214]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:24 compute-0 sudo[431239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:10:24 compute-0 sudo[431239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:24 compute-0 sudo[431239]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:24 compute-0 sudo[431264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:24 compute-0 sudo[431264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:24 compute-0 sudo[431264]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:24 compute-0 sudo[431312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:10:24 compute-0 sudo[431312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:24 compute-0 sshd-session[430291]: Connection closed by 192.168.122.30 port 41716
Nov 22 10:10:24 compute-0 sshd-session[430260]: pam_unix(sshd:session): session closed for user zuul
Nov 22 10:10:24 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 22 10:10:24 compute-0 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 22 10:10:24 compute-0 systemd-logind[822]: Removed session 52.
Nov 22 10:10:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:24 compute-0 nova_compute[253661]: 2025-11-22 10:10:24.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.396583793 +0000 UTC m=+0.072367702 container create ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 10:10:25 compute-0 systemd[1]: Started libpod-conmon-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope.
Nov 22 10:10:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.367536778 +0000 UTC m=+0.043320787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.491018786 +0000 UTC m=+0.166802735 container init ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.498958422 +0000 UTC m=+0.174742331 container start ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.502901029 +0000 UTC m=+0.178684948 container attach ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:10:25 compute-0 hardcore_jang[431394]: 167 167
Nov 22 10:10:25 compute-0 systemd[1]: libpod-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope: Deactivated successfully.
Nov 22 10:10:25 compute-0 conmon[431394]: conmon ad5169cb9d81966c1873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope/container/memory.events
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.50783505 +0000 UTC m=+0.183618969 container died ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c78179b90af8b2b74f0aa37214bbdb3fbd34ceac2cdfccdc027d153d3c2349-merged.mount: Deactivated successfully.
Nov 22 10:10:25 compute-0 podman[431378]: 2025-11-22 10:10:25.554445257 +0000 UTC m=+0.230229176 container remove ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:10:25 compute-0 systemd[1]: libpod-conmon-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope: Deactivated successfully.
Nov 22 10:10:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:25.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:25 compute-0 podman[431419]: 2025-11-22 10:10:25.794019983 +0000 UTC m=+0.050841323 container create 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:10:25 compute-0 systemd[1]: Started libpod-conmon-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope.
Nov 22 10:10:25 compute-0 podman[431419]: 2025-11-22 10:10:25.773488057 +0000 UTC m=+0.030309417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:10:25 compute-0 podman[431419]: 2025-11-22 10:10:25.897635533 +0000 UTC m=+0.154456863 container init 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 10:10:25 compute-0 podman[431419]: 2025-11-22 10:10:25.91097077 +0000 UTC m=+0.167792090 container start 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:10:25 compute-0 podman[431419]: 2025-11-22 10:10:25.913981104 +0000 UTC m=+0.170802424 container attach 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:10:26 compute-0 nova_compute[253661]: 2025-11-22 10:10:26.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:10:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:26 compute-0 ceph-mon[75021]: pgmap v3262: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:26.731+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:26 compute-0 trusting_wright[431435]: {
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_id": 1,
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "type": "bluestore"
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     },
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_id": 0,
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "type": "bluestore"
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     },
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_id": 2,
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:10:26 compute-0 trusting_wright[431435]:         "type": "bluestore"
Nov 22 10:10:26 compute-0 trusting_wright[431435]:     }
Nov 22 10:10:26 compute-0 trusting_wright[431435]: }
Nov 22 10:10:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:26 compute-0 systemd[1]: libpod-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Deactivated successfully.
Nov 22 10:10:26 compute-0 podman[431419]: 2025-11-22 10:10:26.955651377 +0000 UTC m=+1.212472707 container died 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 10:10:26 compute-0 systemd[1]: libpod-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Consumed 1.051s CPU time.
Nov 22 10:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9-merged.mount: Deactivated successfully.
Nov 22 10:10:27 compute-0 podman[431419]: 2025-11-22 10:10:27.008366795 +0000 UTC m=+1.265188115 container remove 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 10:10:27 compute-0 systemd[1]: libpod-conmon-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Deactivated successfully.
Nov 22 10:10:27 compute-0 sudo[431312]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:10:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:10:27 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c220d0e1-6c32-4599-aa80-bac0f6800d99 does not exist
Nov 22 10:10:27 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 70aa17a3-6524-4d42-b861-9272992938a3 does not exist
Nov 22 10:10:27 compute-0 sudo[431481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:10:27 compute-0 sudo[431481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:27 compute-0 sudo[431481]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:27 compute-0 sudo[431506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:10:27 compute-0 sudo[431506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:10:27 compute-0 sudo[431506]: pam_unix(sudo:session): session closed for user root
Nov 22 10:10:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:27 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:10:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:27.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:10:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:10:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:10:28 compute-0 nova_compute[253661]: 2025-11-22 10:10:28.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:28 compute-0 ceph-mon[75021]: pgmap v3263: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:28.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1204 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:29 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:10:29.434 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 10:10:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:29 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1204 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:29.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:30 compute-0 nova_compute[253661]: 2025-11-22 10:10:30.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:30 compute-0 ceph-mon[75021]: pgmap v3264: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:30.843+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:31.803+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:32 compute-0 ceph-mon[75021]: pgmap v3265: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:32.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:33 compute-0 nova_compute[253661]: 2025-11-22 10:10:33.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:33.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1209 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:34 compute-0 ceph-mon[75021]: pgmap v3266: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:34 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1209 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:34.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:35 compute-0 nova_compute[253661]: 2025-11-22 10:10:35.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:35.880+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:36 compute-0 ceph-mon[75021]: pgmap v3267: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:36.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:36 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:37.791+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:38 compute-0 nova_compute[253661]: 2025-11-22 10:10:38.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:38 compute-0 ceph-mon[75021]: pgmap v3268: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:38.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:38 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1214 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:39 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1214 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:39.870+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:40 compute-0 nova_compute[253661]: 2025-11-22 10:10:40.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:40 compute-0 ceph-mon[75021]: pgmap v3269: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:40.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:40 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:41.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:42 compute-0 ceph-mon[75021]: pgmap v3270: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:42.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:42 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:43 compute-0 nova_compute[253661]: 2025-11-22 10:10:43.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:43.816+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:44 compute-0 podman[431531]: 2025-11-22 10:10:44.399086495 +0000 UTC m=+0.072827913 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:10:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1219 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:44 compute-0 podman[431532]: 2025-11-22 10:10:44.423968357 +0000 UTC m=+0.100797592 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:10:44 compute-0 ceph-mon[75021]: pgmap v3271: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:44 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1219 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:44.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:44 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:45 compute-0 nova_compute[253661]: 2025-11-22 10:10:45.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:45.877+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:46 compute-0 ceph-mon[75021]: pgmap v3272: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:46.832+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:46 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:47 compute-0 podman[431569]: 2025-11-22 10:10:47.463709386 +0000 UTC m=+0.145782598 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 10:10:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:47.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:47 compute-0 ceph-mon[75021]: pgmap v3273: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:48 compute-0 nova_compute[253661]: 2025-11-22 10:10:48.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:48.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:48 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1224 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:49 compute-0 ceph-mon[75021]: pgmap v3274: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:49 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1224 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:49.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:50 compute-0 nova_compute[253661]: 2025-11-22 10:10:50.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:50.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:50 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:51 compute-0 ceph-mon[75021]: pgmap v3275: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:51.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:10:52
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.rgw.root', 'vms']
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:10:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:52.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:52 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:53 compute-0 nova_compute[253661]: 2025-11-22 10:10:53.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:53 compute-0 ceph-mon[75021]: pgmap v3276: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:53.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1229 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:54 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1229 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:54.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:54 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:55 compute-0 nova_compute[253661]: 2025-11-22 10:10:55.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:55 compute-0 ceph-mon[75021]: pgmap v3277: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:55.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:10:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:56.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:56 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:10:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 72K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1830 writes, 8975 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
                                           Interval WAL: 1830 writes, 1830 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     42.9      1.90              0.30        52    0.037       0      0       0.0       0.0
                                             L6      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.3    103.2     88.5      4.86              1.34        51    0.095    357K    27K       0.0       0.0
                                            Sum      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.3     74.1     75.7      6.76              1.64       103    0.066    357K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    149.2    153.1      0.56              0.29        16    0.035     78K   4107       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    103.2     88.5      4.86              1.34        51    0.095    357K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     42.9      1.90              0.30        51    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.080, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.50 GB write, 0.09 MB/s write, 0.49 GB read, 0.08 MB/s read, 6.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 55.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000876 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3609,53.30 MB,17.5342%) FilterBlock(104,1001.55 KB,0.321735%) IndexBlock(104,1.55 MB,0.509779%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 10:10:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:57 compute-0 ceph-mon[75021]: pgmap v3278: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:57.931+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:58 compute-0 nova_compute[253661]: 2025-11-22 10:10:58.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:10:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:58.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:58 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1234 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:10:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:10:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:10:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:10:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:10:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:10:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:10:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:59.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:10:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:10:59 compute-0 ceph-mon[75021]: pgmap v3279: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:10:59 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1234 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:00 compute-0 nova_compute[253661]: 2025-11-22 10:11:00.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:00.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:00 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:01.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:01 compute-0 ceph-mon[75021]: pgmap v3280: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:02.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:02 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:11:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:11:03 compute-0 nova_compute[253661]: 2025-11-22 10:11:03.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:03.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:03 compute-0 ceph-mon[75021]: pgmap v3281: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1239 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:04.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:04 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1239 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:04 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:05 compute-0 nova_compute[253661]: 2025-11-22 10:11:05.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:05.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:05 compute-0 ceph-mon[75021]: pgmap v3282: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:06.871+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:06 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:07 compute-0 nova_compute[253661]: 2025-11-22 10:11:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:07.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:08 compute-0 ceph-mon[75021]: pgmap v3283: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:08 compute-0 nova_compute[253661]: 2025-11-22 10:11:08.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:08.855+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:08 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1244 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:09.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:10 compute-0 ceph-mon[75021]: pgmap v3284: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:10 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1244 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:10 compute-0 nova_compute[253661]: 2025-11-22 10:11:10.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:10.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:10 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:11 compute-0 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:11 compute-0 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:11:11 compute-0 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:11:11 compute-0 nova_compute[253661]: 2025-11-22 10:11:11.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:11:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:11.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:12 compute-0 ceph-mon[75021]: pgmap v3285: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:11:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:11:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:11:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:11:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:12.836+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:12 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:11:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:11:13 compute-0 nova_compute[253661]: 2025-11-22 10:11:13.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:13.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:14 compute-0 ceph-mon[75021]: pgmap v3286: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:14 compute-0 nova_compute[253661]: 2025-11-22 10:11:14.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1249 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:14.811+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:14 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:15 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1249 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:15 compute-0 nova_compute[253661]: 2025-11-22 10:11:15.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:15 compute-0 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:15 compute-0 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:15 compute-0 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:11:15 compute-0 podman[431597]: 2025-11-22 10:11:15.367442507 +0000 UTC m=+0.054943824 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:11:15 compute-0 podman[431596]: 2025-11-22 10:11:15.36878949 +0000 UTC m=+0.057644130 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 10:11:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:15.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:16 compute-0 ceph-mon[75021]: pgmap v3287: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:16.807+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:16 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:11:17 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:11:17 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936066357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.835 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.836 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.836 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.837 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:11:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:17.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.904 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.905 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:11:17 compute-0 nova_compute[253661]: 2025-11-22 10:11:17.927 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:11:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:18 compute-0 ceph-mon[75021]: pgmap v3288: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1936066357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:11:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:11:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441160920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.361 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.368 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.386 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.388 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:11:18 compute-0 podman[431676]: 2025-11-22 10:11:18.407076572 +0000 UTC m=+0.098298710 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 10:11:18 compute-0 nova_compute[253661]: 2025-11-22 10:11:18.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:18.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:18 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2441160920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:11:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1254 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:19.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:20 compute-0 ceph-mon[75021]: pgmap v3289: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:20 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1254 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:20 compute-0 nova_compute[253661]: 2025-11-22 10:11:20.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:20.872+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:20 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:21.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:22 compute-0 ceph-mon[75021]: pgmap v3290: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:22.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:22 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:23 compute-0 nova_compute[253661]: 2025-11-22 10:11:23.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:23.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:24 compute-0 ceph-mon[75021]: pgmap v3291: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1259 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:24.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:24 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:25 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1259 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:25 compute-0 nova_compute[253661]: 2025-11-22 10:11:25.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:26 compute-0 ceph-mon[75021]: pgmap v3292: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:26 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:27.016+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:27 compute-0 sudo[431705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:27 compute-0 sudo[431705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:27 compute-0 sudo[431705]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:27 compute-0 sudo[431730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:11:27 compute-0 sudo[431730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:27 compute-0 sudo[431730]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:27 compute-0 sudo[431755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:27 compute-0 sudo[431755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:27 compute-0 sudo[431755]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:27 compute-0 sudo[431780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:11:27 compute-0 sudo[431780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:27 compute-0 sudo[431780]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:11:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:11:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:11:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:28.046+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:28 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c715f888-1a0e-4336-bffb-0c95fcd6d203 does not exist
Nov 22 10:11:28 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4cc644c6-b48a-4daf-8c3c-a3908b9af563 does not exist
Nov 22 10:11:28 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev af673936-09b6-4e98-8845-e850d654fde0 does not exist
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:11:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:11:28 compute-0 sudo[431834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:28 compute-0 sudo[431834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:28 compute-0 sudo[431834]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:28 compute-0 ceph-mon[75021]: pgmap v3293: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:11:28 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:11:28 compute-0 sudo[431859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:11:28 compute-0 sudo[431859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:28 compute-0 sudo[431859]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:28 compute-0 sudo[431884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:28 compute-0 sudo[431884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:28 compute-0 sudo[431884]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:28 compute-0 sudo[431909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:11:28 compute-0 sudo[431909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:28 compute-0 nova_compute[253661]: 2025-11-22 10:11:28.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.724005794 +0000 UTC m=+0.045239424 container create c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:11:28 compute-0 systemd[1]: Started libpod-conmon-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope.
Nov 22 10:11:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.70309955 +0000 UTC m=+0.024333210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.813740512 +0000 UTC m=+0.134974222 container init c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.820536869 +0000 UTC m=+0.141770509 container start c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.82417998 +0000 UTC m=+0.145413700 container attach c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:11:28 compute-0 systemd[1]: libpod-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope: Deactivated successfully.
Nov 22 10:11:28 compute-0 zealous_robinson[431990]: 167 167
Nov 22 10:11:28 compute-0 conmon[431990]: conmon c3ef36f53a761aa9bda9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope/container/memory.events
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.828888105 +0000 UTC m=+0.150121765 container died c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:11:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-525b428c70fec544b6997dfd1afae871e4ec24820d2d65d04ceaf4150740c750-merged.mount: Deactivated successfully.
Nov 22 10:11:28 compute-0 podman[431974]: 2025-11-22 10:11:28.880491875 +0000 UTC m=+0.201725535 container remove c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:11:28 compute-0 systemd[1]: libpod-conmon-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope: Deactivated successfully.
Nov 22 10:11:28 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:29.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:29 compute-0 podman[432014]: 2025-11-22 10:11:29.097127815 +0000 UTC m=+0.057254849 container create d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 10:11:29 compute-0 systemd[1]: Started libpod-conmon-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope.
Nov 22 10:11:29 compute-0 podman[432014]: 2025-11-22 10:11:29.064103543 +0000 UTC m=+0.024230667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:29 compute-0 podman[432014]: 2025-11-22 10:11:29.201784941 +0000 UTC m=+0.161912015 container init d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 10:11:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:29 compute-0 podman[432014]: 2025-11-22 10:11:29.214223707 +0000 UTC m=+0.174350731 container start d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:11:29 compute-0 podman[432014]: 2025-11-22 10:11:29.217154459 +0000 UTC m=+0.177281483 container attach d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 22 10:11:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1264 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:30.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:30 compute-0 nova_compute[253661]: 2025-11-22 10:11:30.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:30 compute-0 ceph-mon[75021]: pgmap v3294: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:30 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1264 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:30 compute-0 strange_turing[432031]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:11:30 compute-0 strange_turing[432031]: --> relative data size: 1.0
Nov 22 10:11:30 compute-0 strange_turing[432031]: --> All data devices are unavailable
Nov 22 10:11:30 compute-0 systemd[1]: libpod-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Deactivated successfully.
Nov 22 10:11:30 compute-0 systemd[1]: libpod-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Consumed 1.172s CPU time.
Nov 22 10:11:30 compute-0 podman[432014]: 2025-11-22 10:11:30.428595319 +0000 UTC m=+1.388722363 container died d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 10:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae-merged.mount: Deactivated successfully.
Nov 22 10:11:30 compute-0 podman[432014]: 2025-11-22 10:11:30.486753811 +0000 UTC m=+1.446880835 container remove d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:11:30 compute-0 systemd[1]: libpod-conmon-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Deactivated successfully.
Nov 22 10:11:30 compute-0 sudo[431909]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:30 compute-0 sudo[432074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:30 compute-0 sudo[432074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:30 compute-0 sudo[432074]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:30 compute-0 sudo[432099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:11:30 compute-0 sudo[432099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:30 compute-0 sudo[432099]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:30 compute-0 sudo[432124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:30 compute-0 sudo[432124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:30 compute-0 sudo[432124]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:30 compute-0 sudo[432149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:11:30 compute-0 sudo[432149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:30 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:31.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.289295269 +0000 UTC m=+0.033923356 container create 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 10:11:31 compute-0 systemd[1]: Started libpod-conmon-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope.
Nov 22 10:11:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.274287419 +0000 UTC m=+0.018915526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.37794035 +0000 UTC m=+0.122568507 container init 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.386281755 +0000 UTC m=+0.130909872 container start 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 10:11:31 compute-0 stupefied_aryabhata[432230]: 167 167
Nov 22 10:11:31 compute-0 systemd[1]: libpod-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope: Deactivated successfully.
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.391032453 +0000 UTC m=+0.135660570 container attach 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.395191745 +0000 UTC m=+0.139819902 container died 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 10:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f387fbaa6976ca9ac65bc6fd3250da5768bbab6de8b20baeba10d37656ed667e-merged.mount: Deactivated successfully.
Nov 22 10:11:31 compute-0 podman[432214]: 2025-11-22 10:11:31.456865163 +0000 UTC m=+0.201493250 container remove 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:11:31 compute-0 systemd[1]: libpod-conmon-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope: Deactivated successfully.
Nov 22 10:11:31 compute-0 podman[432254]: 2025-11-22 10:11:31.653954373 +0000 UTC m=+0.069776658 container create ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 10:11:31 compute-0 systemd[1]: Started libpod-conmon-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope.
Nov 22 10:11:31 compute-0 podman[432254]: 2025-11-22 10:11:31.62052488 +0000 UTC m=+0.036347215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:31 compute-0 podman[432254]: 2025-11-22 10:11:31.757503141 +0000 UTC m=+0.173325466 container init ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 10:11:31 compute-0 podman[432254]: 2025-11-22 10:11:31.770383808 +0000 UTC m=+0.186206093 container start ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:11:31 compute-0 podman[432254]: 2025-11-22 10:11:31.775537954 +0000 UTC m=+0.191360229 container attach ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:11:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:32.189+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:32 compute-0 ceph-mon[75021]: pgmap v3295: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:32 compute-0 infallible_dirac[432271]: {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     "0": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "devices": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "/dev/loop3"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             ],
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_name": "ceph_lv0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_size": "21470642176",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "name": "ceph_lv0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "tags": {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_name": "ceph",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.crush_device_class": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.encrypted": "0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_id": "0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.vdo": "0"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             },
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "vg_name": "ceph_vg0"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         }
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     ],
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     "1": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "devices": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "/dev/loop4"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             ],
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_name": "ceph_lv1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_size": "21470642176",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "name": "ceph_lv1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "tags": {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_name": "ceph",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.crush_device_class": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.encrypted": "0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_id": "1",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.vdo": "0"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             },
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "vg_name": "ceph_vg1"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         }
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     ],
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     "2": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "devices": [
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "/dev/loop5"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             ],
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_name": "ceph_lv2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_size": "21470642176",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "name": "ceph_lv2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "tags": {
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.cluster_name": "ceph",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.crush_device_class": "",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.encrypted": "0",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osd_id": "2",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:                 "ceph.vdo": "0"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             },
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "type": "block",
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:             "vg_name": "ceph_vg2"
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:         }
Nov 22 10:11:32 compute-0 infallible_dirac[432271]:     ]
Nov 22 10:11:32 compute-0 infallible_dirac[432271]: }
Nov 22 10:11:32 compute-0 systemd[1]: libpod-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope: Deactivated successfully.
Nov 22 10:11:32 compute-0 podman[432254]: 2025-11-22 10:11:32.534386137 +0000 UTC m=+0.950208392 container died ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0-merged.mount: Deactivated successfully.
Nov 22 10:11:32 compute-0 podman[432254]: 2025-11-22 10:11:32.590417136 +0000 UTC m=+1.006239381 container remove ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:11:32 compute-0 systemd[1]: libpod-conmon-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope: Deactivated successfully.
Nov 22 10:11:32 compute-0 sudo[432149]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:32 compute-0 sudo[432293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:32 compute-0 sudo[432293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:32 compute-0 sudo[432293]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:32 compute-0 sudo[432318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:11:32 compute-0 sudo[432318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:32 compute-0 sudo[432318]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:32 compute-0 sudo[432343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:32 compute-0 sudo[432343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:32 compute-0 sudo[432343]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:32 compute-0 sudo[432368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:11:32 compute-0 sudo[432368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:32 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:33.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.245502546 +0000 UTC m=+0.057866125 container create 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:11:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:33 compute-0 systemd[1]: Started libpod-conmon-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope.
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.222352427 +0000 UTC m=+0.034716026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.372135133 +0000 UTC m=+0.184498752 container init 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.384100436 +0000 UTC m=+0.196464015 container start 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.389189112 +0000 UTC m=+0.201552721 container attach 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:11:33 compute-0 distracted_fermi[432450]: 167 167
Nov 22 10:11:33 compute-0 systemd[1]: libpod-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope: Deactivated successfully.
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.393016076 +0000 UTC m=+0.205379695 container died 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bc1b14e2c0fcc27ac34f342671d407a6d5959c51c328a6850c741a9f10e4529-merged.mount: Deactivated successfully.
Nov 22 10:11:33 compute-0 podman[432434]: 2025-11-22 10:11:33.443750405 +0000 UTC m=+0.256113994 container remove 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 10:11:33 compute-0 systemd[1]: libpod-conmon-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope: Deactivated successfully.
Nov 22 10:11:33 compute-0 nova_compute[253661]: 2025-11-22 10:11:33.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:33 compute-0 podman[432475]: 2025-11-22 10:11:33.732484319 +0000 UTC m=+0.085478434 container create a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:11:33 compute-0 podman[432475]: 2025-11-22 10:11:33.695850368 +0000 UTC m=+0.048844533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:11:33 compute-0 systemd[1]: Started libpod-conmon-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope.
Nov 22 10:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:11:33 compute-0 podman[432475]: 2025-11-22 10:11:33.841493662 +0000 UTC m=+0.194487827 container init a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 10:11:33 compute-0 podman[432475]: 2025-11-22 10:11:33.85401131 +0000 UTC m=+0.207005395 container start a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:11:33 compute-0 podman[432475]: 2025-11-22 10:11:33.858869729 +0000 UTC m=+0.211863894 container attach a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:11:34 compute-0 sshd-session[432468]: Invalid user node from 92.118.39.92 port 38832
Nov 22 10:11:34 compute-0 sshd-session[432468]: Connection closed by invalid user node 92.118.39.92 port 38832 [preauth]
Nov 22 10:11:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:34.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:34 compute-0 ceph-mon[75021]: pgmap v3296: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1270 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:34 compute-0 practical_easley[432491]: {
Nov 22 10:11:34 compute-0 practical_easley[432491]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_id": 1,
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "type": "bluestore"
Nov 22 10:11:34 compute-0 practical_easley[432491]:     },
Nov 22 10:11:34 compute-0 practical_easley[432491]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_id": 0,
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "type": "bluestore"
Nov 22 10:11:34 compute-0 practical_easley[432491]:     },
Nov 22 10:11:34 compute-0 practical_easley[432491]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_id": 2,
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:11:34 compute-0 practical_easley[432491]:         "type": "bluestore"
Nov 22 10:11:34 compute-0 practical_easley[432491]:     }
Nov 22 10:11:34 compute-0 practical_easley[432491]: }
Nov 22 10:11:34 compute-0 systemd[1]: libpod-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Deactivated successfully.
Nov 22 10:11:34 compute-0 systemd[1]: libpod-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Consumed 1.029s CPU time.
Nov 22 10:11:34 compute-0 podman[432475]: 2025-11-22 10:11:34.878673305 +0000 UTC m=+1.231667390 container died a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 10:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834-merged.mount: Deactivated successfully.
Nov 22 10:11:34 compute-0 podman[432475]: 2025-11-22 10:11:34.943007488 +0000 UTC m=+1.296001583 container remove a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 10:11:34 compute-0 systemd[1]: libpod-conmon-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Deactivated successfully.
Nov 22 10:11:34 compute-0 sudo[432368]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:11:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:34 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:11:35 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:35 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 7820ef98-6049-46ae-a76f-e78d760ce2c7 does not exist
Nov 22 10:11:35 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c5160e30-e2fc-4d85-a0bc-ac92a0bf6e1b does not exist
Nov 22 10:11:35 compute-0 sudo[432539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:11:35 compute-0 sudo[432539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:35 compute-0 sudo[432539]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:35 compute-0 sudo[432564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:11:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:35.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:35 compute-0 sudo[432564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:11:35 compute-0 sudo[432564]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:35 compute-0 nova_compute[253661]: 2025-11-22 10:11:35.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:35 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1270 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:35 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:11:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:36.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.181 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 10:11:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.182 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 10:11:36 compute-0 nova_compute[253661]: 2025-11-22 10:11:36.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:36 compute-0 ceph-mon[75021]: pgmap v3297: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.451 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 10:11:36 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.452 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 10:11:36 compute-0 nova_compute[253661]: 2025-11-22 10:11:36.452 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:37.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:38.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:38 compute-0 ceph-mon[75021]: pgmap v3298: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:38 compute-0 nova_compute[253661]: 2025-11-22 10:11:38.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:39.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1275 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:40.105+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:40 compute-0 nova_compute[253661]: 2025-11-22 10:11:40.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:40 compute-0 ceph-mon[75021]: pgmap v3299: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:40 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1275 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:41.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:42.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:42 compute-0 ceph-mon[75021]: pgmap v3300: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:42 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:42.454 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 10:11:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:43.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:43 compute-0 sshd-session[432589]: Accepted publickey for zuul from 192.168.122.30 port 38096 ssh2: ECDSA SHA256:tikRPC42/ncVfP2lnh0iO6vjJo8w9amYgweJm9+SStg
Nov 22 10:11:43 compute-0 systemd-logind[822]: New session 53 of user zuul.
Nov 22 10:11:43 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 22 10:11:43 compute-0 sshd-session[432589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 10:11:43 compute-0 nova_compute[253661]: 2025-11-22 10:11:43.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:44.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:44 compute-0 sudo[432685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Nov 22 10:11:44 compute-0 sudo[432685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432685]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 ceph-mon[75021]: pgmap v3301: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:44 compute-0 sudo[432711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Nov 22 10:11:44 compute-0 sudo[432711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 groupadd[432713]: group added to /etc/group: name=podman, GID=42479
Nov 22 10:11:44 compute-0 groupadd[432713]: group added to /etc/gshadow: name=podman
Nov 22 10:11:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1279 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:44 compute-0 groupadd[432713]: new group: name=podman, GID=42479
Nov 22 10:11:44 compute-0 sudo[432711]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Nov 22 10:11:44 compute-0 sudo[432719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 usermod[432721]: add 'zuul' to group 'podman'
Nov 22 10:11:44 compute-0 usermod[432721]: add 'zuul' to shadow group 'podman'
Nov 22 10:11:44 compute-0 sudo[432719]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Nov 22 10:11:44 compute-0 sudo[432728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432728]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Nov 22 10:11:44 compute-0 sudo[432731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432731]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Nov 22 10:11:44 compute-0 sudo[432734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432734]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Nov 22 10:11:44 compute-0 sudo[432737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432737]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Nov 22 10:11:44 compute-0 sudo[432740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 sudo[432740]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:44 compute-0 sudo[432743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Nov 22 10:11:44 compute-0 sudo[432743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:44 compute-0 systemd[1]: Reloading.
Nov 22 10:11:44 compute-0 systemd-rc-local-generator[432767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 10:11:44 compute-0 systemd-sysv-generator[432771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 10:11:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:45.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:45 compute-0 sudo[432743]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Nov 22 10:11:45 compute-0 nova_compute[253661]: 2025-11-22 10:11:45.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:45 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:11:45.184 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 10:11:45 compute-0 sudo[432781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:45 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1279 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:45 compute-0 sudo[432781]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Nov 22 10:11:45 compute-0 sudo[432810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 podman[432785]: 2025-11-22 10:11:45.479108324 +0000 UTC m=+0.066759454 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:11:45 compute-0 systemd[1]: Reloading.
Nov 22 10:11:45 compute-0 podman[432784]: 2025-11-22 10:11:45.496565373 +0000 UTC m=+0.089870112 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 10:11:45 compute-0 systemd-sysv-generator[432851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 10:11:45 compute-0 systemd-rc-local-generator[432844]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 10:11:45 compute-0 systemd[1]: Starting Podman API Socket...
Nov 22 10:11:45 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 22 10:11:45 compute-0 sudo[432810]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Nov 22 10:11:45 compute-0 sudo[432859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 sudo[432859]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Nov 22 10:11:45 compute-0 sudo[432862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 sudo[432862]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Nov 22 10:11:45 compute-0 sudo[432865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 sudo[432865]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Nov 22 10:11:45 compute-0 sudo[432868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:45 compute-0 sudo[432868]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:45 compute-0 sudo[432871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Nov 22 10:11:45 compute-0 sudo[432871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:46 compute-0 sudo[432871]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:46 compute-0 sudo[432874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Nov 22 10:11:46 compute-0 dbus-broker-launch[813]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 22 10:11:46 compute-0 sudo[432874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:46 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Nov 22 10:11:46 compute-0 systemd[1]: Closed Podman API Socket.
Nov 22 10:11:46 compute-0 systemd[1]: Stopping Podman API Socket...
Nov 22 10:11:46 compute-0 systemd[1]: Starting Podman API Socket...
Nov 22 10:11:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:46.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:46 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 22 10:11:46 compute-0 sudo[432874]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:46 compute-0 sudo[432688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Nov 22 10:11:46 compute-0 sudo[432688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:11:46 compute-0 sudo[432688]: pam_unix(sudo:session): session closed for user root
Nov 22 10:11:46 compute-0 sshd-session[432880]: Accepted publickey for zuul from 192.168.122.30 port 49136 ssh2: ECDSA SHA256:tikRPC42/ncVfP2lnh0iO6vjJo8w9amYgweJm9+SStg
Nov 22 10:11:46 compute-0 systemd-logind[822]: New session 54 of user zuul.
Nov 22 10:11:46 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 22 10:11:46 compute-0 sshd-session[432880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 10:11:46 compute-0 systemd[1]: Starting Podman API Service...
Nov 22 10:11:46 compute-0 systemd[1]: Started Podman API Service.
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Setting parallel job count to 25"
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Using sqlite as database backend"
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 22 10:11:46 compute-0 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 22 10:11:46 compute-0 podman[432884]: @ - - [22/Nov/2025:10:11:46 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 22 10:11:46 compute-0 podman[432884]: @ - - [22/Nov/2025:10:11:46 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24897 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 22 10:11:46 compute-0 ceph-mon[75021]: pgmap v3302: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:47.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:47.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:48 compute-0 ceph-mon[75021]: pgmap v3303: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:48 compute-0 nova_compute[253661]: 2025-11-22 10:11:48.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:48.954+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1284 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:49 compute-0 podman[432897]: 2025-11-22 10:11:49.467705981 +0000 UTC m=+0.149328755 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:11:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:49.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:50 compute-0 nova_compute[253661]: 2025-11-22 10:11:50.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:50 compute-0 ceph-mon[75021]: pgmap v3304: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:50 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1284 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:50.900+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:51.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:11:52
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:11:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:52 compute-0 ceph-mon[75021]: pgmap v3305: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:11:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:11:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:52.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:53 compute-0 nova_compute[253661]: 2025-11-22 10:11:53.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:53.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:54 compute-0 ceph-mon[75021]: pgmap v3306: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1289 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:54.886+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:55 compute-0 nova_compute[253661]: 2025-11-22 10:11:55.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:55 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1289 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:55.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:11:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:11:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:11:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:11:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:56 compute-0 ceph-mon[75021]: pgmap v3307: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:11:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:56.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:57 compute-0 ceph-mon[75021]: pgmap v3308: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:57.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:58 compute-0 nova_compute[253661]: 2025-11-22 10:11:58.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:11:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:58.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:11:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1294 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:11:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:11:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:11:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:11:59 compute-0 ceph-mon[75021]: pgmap v3309: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:11:59 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1294 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:00.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:00 compute-0 nova_compute[253661]: 2025-11-22 10:12:00.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:00.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:01 compute-0 podman[432884]: time="2025-11-22T10:12:01Z" level=info msg="Received shutdown.Stop(), terminating!" PID=432884
Nov 22 10:12:01 compute-0 systemd[1]: podman.service: Deactivated successfully.
Nov 22 10:12:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:01 compute-0 ceph-mon[75021]: pgmap v3310: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:01.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:02.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:12:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:12:03 compute-0 nova_compute[253661]: 2025-11-22 10:12:03.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:03 compute-0 ceph-mon[75021]: pgmap v3311: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:03.985+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1299 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:04 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1299 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:04.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:05 compute-0 nova_compute[253661]: 2025-11-22 10:12:05.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:05 compute-0 ceph-mon[75021]: pgmap v3312: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:05.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:06.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:07 compute-0 ceph-mon[75021]: pgmap v3313: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:07.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:08 compute-0 nova_compute[253661]: 2025-11-22 10:12:08.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:08 compute-0 sudo[432924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Nov 22 10:12:08 compute-0 sudo[432924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:12:08 compute-0 sudo[432924]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:08 compute-0 sudo[432949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Nov 22 10:12:08 compute-0 sudo[432949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:12:08 compute-0 sudo[432949]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:08 compute-0 sshd-session[432592]: Connection closed by 192.168.122.30 port 38096
Nov 22 10:12:08 compute-0 sshd-session[432589]: pam_unix(sshd:session): session closed for user zuul
Nov 22 10:12:08 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 22 10:12:08 compute-0 systemd[1]: session-53.scope: Consumed 1.393s CPU time.
Nov 22 10:12:08 compute-0 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 22 10:12:08 compute-0 systemd-logind[822]: Removed session 53.
Nov 22 10:12:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:09.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1304 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:09 compute-0 sshd-session[432883]: Connection closed by 192.168.122.30 port 49136
Nov 22 10:12:09 compute-0 sshd-session[432880]: pam_unix(sshd:session): session closed for user zuul
Nov 22 10:12:09 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 22 10:12:09 compute-0 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 22 10:12:09 compute-0 systemd-logind[822]: Removed session 54.
Nov 22 10:12:09 compute-0 ceph-mon[75021]: pgmap v3314: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:09 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1304 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:10.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:10 compute-0 nova_compute[253661]: 2025-11-22 10:12:10.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:10 compute-0 nova_compute[253661]: 2025-11-22 10:12:10.388 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:10.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:11 compute-0 nova_compute[253661]: 2025-11-22 10:12:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:11 compute-0 nova_compute[253661]: 2025-11-22 10:12:11.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:12:11 compute-0 nova_compute[253661]: 2025-11-22 10:12:11.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:12:11 compute-0 nova_compute[253661]: 2025-11-22 10:12:11.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:12:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:11 compute-0 ceph-mon[75021]: pgmap v3315: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.903580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331903619, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 1794751, "memory_usage": 1829744, "flush_reason": "Manual Compaction"}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331916146, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1754842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72202, "largest_seqno": 73716, "table_properties": {"data_size": 1748493, "index_size": 3230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16700, "raw_average_key_size": 20, "raw_value_size": 1734293, "raw_average_value_size": 2173, "num_data_blocks": 142, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806224, "oldest_key_time": 1763806224, "file_creation_time": 1763806331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 12604 microseconds, and 5428 cpu microseconds.
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.916182) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1754842 bytes OK
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.916199) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917697) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917711) EVENT_LOG_v1 {"time_micros": 1763806331917706, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1787903, prev total WAL file size 1787903, number of live WAL files 2.
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.918338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1713KB)], [170(10MB)]
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331918361, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12463260, "oldest_snapshot_seqno": -1}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10098 keys, 11081056 bytes, temperature: kUnknown
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331979416, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11081056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11018922, "index_size": 35781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 268777, "raw_average_key_size": 26, "raw_value_size": 10843245, "raw_average_value_size": 1073, "num_data_blocks": 1356, "num_entries": 10098, "num_filter_entries": 10098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.979701) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11081056 bytes
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.980926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.8 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(13.4) write-amplify(6.3) OK, records in: 10612, records dropped: 514 output_compression: NoCompression
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.980948) EVENT_LOG_v1 {"time_micros": 1763806331980937, "job": 106, "event": "compaction_finished", "compaction_time_micros": 61160, "compaction_time_cpu_micros": 34508, "output_level": 6, "num_output_files": 1, "total_output_size": 11081056, "num_input_records": 10612, "num_output_records": 10098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:12:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:11.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331981608, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331984081, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.918260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:11 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:12:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:12:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:12:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:12:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:12:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:13.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:13 compute-0 nova_compute[253661]: 2025-11-22 10:12:13.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:13 compute-0 ceph-mon[75021]: pgmap v3316: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:13.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1309 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:14 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1309 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:14.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:15 compute-0 nova_compute[253661]: 2025-11-22 10:12:15.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:15 compute-0 nova_compute[253661]: 2025-11-22 10:12:15.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:15 compute-0 podman[432974]: 2025-11-22 10:12:15.61317367 +0000 UTC m=+0.058225794 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:12:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:15 compute-0 ceph-mon[75021]: pgmap v3317: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:15.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:16 compute-0 nova_compute[253661]: 2025-11-22 10:12:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:16 compute-0 podman[432993]: 2025-11-22 10:12:16.415138764 +0000 UTC m=+0.094088397 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 22 10:12:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:16.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:17 compute-0 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:17 compute-0 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:17 compute-0 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:17 compute-0 nova_compute[253661]: 2025-11-22 10:12:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:17 compute-0 nova_compute[253661]: 2025-11-22 10:12:17.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:12:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:17.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:17 compute-0 ceph-mon[75021]: pgmap v3318: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:12:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450177183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.751 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:12:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:18 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1450177183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:12:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:18.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.985 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.987 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3564MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.987 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:12:18 compute-0 nova_compute[253661]: 2025-11-22 10:12:18.988 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:12:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.108 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.109 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.195 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.258 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.259 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.279 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.308 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.328 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:12:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1314 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:12:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263260387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.851 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.860 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.882 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.885 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:12:19 compute-0 nova_compute[253661]: 2025-11-22 10:12:19.885 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:12:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:19 compute-0 ceph-mon[75021]: pgmap v3319: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:19 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1314 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1263260387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:12:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:19.992+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:20 compute-0 nova_compute[253661]: 2025-11-22 10:12:20.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:20 compute-0 podman[433059]: 2025-11-22 10:12:20.434088969 +0000 UTC m=+0.120862205 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:12:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:20.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:21.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:21 compute-0 ceph-mon[75021]: pgmap v3320: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:22.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:23 compute-0 nova_compute[253661]: 2025-11-22 10:12:23.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:23.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:23 compute-0 ceph-mon[75021]: pgmap v3321: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1319 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:24.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:25 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1319 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:25 compute-0 nova_compute[253661]: 2025-11-22 10:12:25.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:25.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:26 compute-0 ceph-mon[75021]: pgmap v3322: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:26.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:27.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:12:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.020 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:12:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.020 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:12:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:28 compute-0 ceph-mon[75021]: pgmap v3323: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:28 compute-0 nova_compute[253661]: 2025-11-22 10:12:28.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:28.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:28 compute-0 nova_compute[253661]: 2025-11-22 10:12:28.878 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:12:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1324 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:29.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:30 compute-0 ceph-mon[75021]: pgmap v3324: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:30 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1324 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:30 compute-0 nova_compute[253661]: 2025-11-22 10:12:30.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:30.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:31.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:32 compute-0 ceph-mon[75021]: pgmap v3325: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:32.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:33 compute-0 nova_compute[253661]: 2025-11-22 10:12:33.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:33.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:34 compute-0 ceph-mon[75021]: pgmap v3326: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1329 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:34.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:35 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1329 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:35 compute-0 nova_compute[253661]: 2025-11-22 10:12:35.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:35 compute-0 sudo[433085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:35 compute-0 sudo[433085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:35 compute-0 sudo[433085]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:35 compute-0 sudo[433110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:12:35 compute-0 sudo[433110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:35 compute-0 sudo[433110]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:35 compute-0 sudo[433135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:35 compute-0 sudo[433135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:35 compute-0 sudo[433135]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:35 compute-0 sudo[433160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:12:35 compute-0 sudo[433160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:35.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:36 compute-0 sudo[433160]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:36 compute-0 ceph-mon[75021]: pgmap v3327: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9068e73e-bae2-43d4-a0db-f653d3d8fc97 does not exist
Nov 22 10:12:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f9788f05-4758-4913-b57e-67306c01c5d2 does not exist
Nov 22 10:12:36 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5479cd4c-f0f8-4dc6-a881-683591428cdc does not exist
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:12:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:12:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:12:36 compute-0 sudo[433215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:36 compute-0 sudo[433215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:36 compute-0 sudo[433215]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:36 compute-0 sudo[433240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:12:36 compute-0 sudo[433240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:36 compute-0 sudo[433240]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:36 compute-0 sudo[433265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:36 compute-0 sudo[433265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:36 compute-0 sudo[433265]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:36 compute-0 sudo[433290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:12:36 compute-0 sudo[433290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:36 compute-0 podman[433356]: 2025-11-22 10:12:36.876419363 +0000 UTC m=+0.048912014 container create de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:12:36 compute-0 systemd[1]: Started libpod-conmon-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope.
Nov 22 10:12:36 compute-0 podman[433356]: 2025-11-22 10:12:36.856214517 +0000 UTC m=+0.028707128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:36.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:36 compute-0 podman[433356]: 2025-11-22 10:12:36.988536052 +0000 UTC m=+0.161028683 container init de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:12:37 compute-0 podman[433356]: 2025-11-22 10:12:37.003981842 +0000 UTC m=+0.176474453 container start de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:12:37 compute-0 podman[433356]: 2025-11-22 10:12:37.008673858 +0000 UTC m=+0.181166469 container attach de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 10:12:37 compute-0 reverent_gould[433372]: 167 167
Nov 22 10:12:37 compute-0 systemd[1]: libpod-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope: Deactivated successfully.
Nov 22 10:12:37 compute-0 podman[433356]: 2025-11-22 10:12:37.012598315 +0000 UTC m=+0.185090936 container died de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 22 10:12:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-06cb4860342cf7d6c4d765734a0f131eabc6693c0f3a9c2659a51e89e5e379da-merged.mount: Deactivated successfully.
Nov 22 10:12:37 compute-0 podman[433356]: 2025-11-22 10:12:37.063982249 +0000 UTC m=+0.236474890 container remove de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:12:37 compute-0 systemd[1]: libpod-conmon-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope: Deactivated successfully.
Nov 22 10:12:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:12:37 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:12:37 compute-0 podman[433398]: 2025-11-22 10:12:37.320802759 +0000 UTC m=+0.072811013 container create 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:12:37 compute-0 systemd[1]: Started libpod-conmon-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope.
Nov 22 10:12:37 compute-0 podman[433398]: 2025-11-22 10:12:37.292872422 +0000 UTC m=+0.044880716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:37 compute-0 podman[433398]: 2025-11-22 10:12:37.448614624 +0000 UTC m=+0.200622948 container init 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:12:37 compute-0 podman[433398]: 2025-11-22 10:12:37.461124342 +0000 UTC m=+0.213132596 container start 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 10:12:37 compute-0 podman[433398]: 2025-11-22 10:12:37.470255347 +0000 UTC m=+0.222263601 container attach 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 10:12:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:37.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:38 compute-0 ceph-mon[75021]: pgmap v3328: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:38 compute-0 boring_euclid[433415]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:12:38 compute-0 boring_euclid[433415]: --> relative data size: 1.0
Nov 22 10:12:38 compute-0 boring_euclid[433415]: --> All data devices are unavailable
Nov 22 10:12:38 compute-0 systemd[1]: libpod-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Deactivated successfully.
Nov 22 10:12:38 compute-0 podman[433398]: 2025-11-22 10:12:38.614894833 +0000 UTC m=+1.366903057 container died 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 10:12:38 compute-0 systemd[1]: libpod-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Consumed 1.099s CPU time.
Nov 22 10:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2-merged.mount: Deactivated successfully.
Nov 22 10:12:38 compute-0 nova_compute[253661]: 2025-11-22 10:12:38.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:38 compute-0 podman[433398]: 2025-11-22 10:12:38.675401602 +0000 UTC m=+1.427409816 container remove 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:12:38 compute-0 systemd[1]: libpod-conmon-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Deactivated successfully.
Nov 22 10:12:38 compute-0 sudo[433290]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:38 compute-0 sudo[433459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:38 compute-0 sudo[433459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:38 compute-0 sudo[433459]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:38 compute-0 sudo[433484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:12:38 compute-0 sudo[433484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:38 compute-0 sudo[433484]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:38.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:38 compute-0 sudo[433509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:38 compute-0 sudo[433509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:38 compute-0 sudo[433509]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:39 compute-0 sudo[433534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:12:39 compute-0 sudo[433534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1334 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.485151588 +0000 UTC m=+0.059151597 container create 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 10:12:39 compute-0 systemd[1]: Started libpod-conmon-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope.
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.456171575 +0000 UTC m=+0.030171634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.575161563 +0000 UTC m=+0.149161572 container init 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.587916847 +0000 UTC m=+0.161916846 container start 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.59253411 +0000 UTC m=+0.166534159 container attach 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 10:12:39 compute-0 vigorous_blackwell[433615]: 167 167
Nov 22 10:12:39 compute-0 systemd[1]: libpod-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope: Deactivated successfully.
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.59454228 +0000 UTC m=+0.168542269 container died 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-965f5c9349925a87a657e9498e97c279d372640bfc7ec290bdff3d2d9717181f-merged.mount: Deactivated successfully.
Nov 22 10:12:39 compute-0 podman[433599]: 2025-11-22 10:12:39.633939839 +0000 UTC m=+0.207939828 container remove 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:12:39 compute-0 systemd[1]: libpod-conmon-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope: Deactivated successfully.
Nov 22 10:12:39 compute-0 podman[433638]: 2025-11-22 10:12:39.831147582 +0000 UTC m=+0.039601436 container create 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 10:12:39 compute-0 systemd[1]: Started libpod-conmon-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope.
Nov 22 10:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:39.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:39 compute-0 podman[433638]: 2025-11-22 10:12:39.814633526 +0000 UTC m=+0.023087410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:39 compute-0 podman[433638]: 2025-11-22 10:12:39.919703331 +0000 UTC m=+0.128157285 container init 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:12:39 compute-0 podman[433638]: 2025-11-22 10:12:39.926074937 +0000 UTC m=+0.134528831 container start 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:12:39 compute-0 podman[433638]: 2025-11-22 10:12:39.930167178 +0000 UTC m=+0.138621062 container attach 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 10:12:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:40 compute-0 ceph-mon[75021]: pgmap v3329: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:40 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1334 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:40 compute-0 nova_compute[253661]: 2025-11-22 10:12:40.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:40 compute-0 busy_banzai[433654]: {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     "0": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "devices": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "/dev/loop3"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             ],
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_name": "ceph_lv0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_size": "21470642176",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "name": "ceph_lv0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "tags": {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_name": "ceph",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.crush_device_class": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.encrypted": "0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_id": "0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.vdo": "0"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             },
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "vg_name": "ceph_vg0"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         }
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     ],
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     "1": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "devices": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "/dev/loop4"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             ],
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_name": "ceph_lv1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_size": "21470642176",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "name": "ceph_lv1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "tags": {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_name": "ceph",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.crush_device_class": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.encrypted": "0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_id": "1",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.vdo": "0"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             },
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "vg_name": "ceph_vg1"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         }
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     ],
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     "2": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "devices": [
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "/dev/loop5"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             ],
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_name": "ceph_lv2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_size": "21470642176",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "name": "ceph_lv2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "tags": {
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.cluster_name": "ceph",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.crush_device_class": "",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.encrypted": "0",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osd_id": "2",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:                 "ceph.vdo": "0"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             },
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "type": "block",
Nov 22 10:12:40 compute-0 busy_banzai[433654]:             "vg_name": "ceph_vg2"
Nov 22 10:12:40 compute-0 busy_banzai[433654]:         }
Nov 22 10:12:40 compute-0 busy_banzai[433654]:     ]
Nov 22 10:12:40 compute-0 busy_banzai[433654]: }
Nov 22 10:12:40 compute-0 systemd[1]: libpod-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope: Deactivated successfully.
Nov 22 10:12:40 compute-0 conmon[433654]: conmon 42ed0c6d079d923c8f56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope/container/memory.events
Nov 22 10:12:40 compute-0 podman[433638]: 2025-11-22 10:12:40.720668791 +0000 UTC m=+0.929122675 container died 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6-merged.mount: Deactivated successfully.
Nov 22 10:12:40 compute-0 podman[433638]: 2025-11-22 10:12:40.800216328 +0000 UTC m=+1.008670202 container remove 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:12:40 compute-0 systemd[1]: libpod-conmon-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope: Deactivated successfully.
Nov 22 10:12:40 compute-0 sudo[433534]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:40.861+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:40 compute-0 sudo[433677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:40 compute-0 sudo[433677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:40 compute-0 sudo[433677]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:41 compute-0 sudo[433702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:12:41 compute-0 sudo[433702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:41 compute-0 sudo[433702]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:41 compute-0 sudo[433727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:41 compute-0 sudo[433727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:41 compute-0 sudo[433727]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:41 compute-0 sudo[433752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:12:41 compute-0 sudo[433752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.567277414 +0000 UTC m=+0.061614027 container create 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:12:41 compute-0 systemd[1]: Started libpod-conmon-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope.
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.548383858 +0000 UTC m=+0.042720451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.661029401 +0000 UTC m=+0.155366024 container init 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.668015003 +0000 UTC m=+0.162351616 container start 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 10:12:41 compute-0 confident_panini[433833]: 167 167
Nov 22 10:12:41 compute-0 systemd[1]: libpod-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope: Deactivated successfully.
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.672850311 +0000 UTC m=+0.167186934 container attach 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:12:41 compute-0 conmon[433833]: conmon 9c9d05e00beb573267f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope/container/memory.events
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.673830936 +0000 UTC m=+0.168167519 container died 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b1d829077b97182da298683884f9fda833ffec312e2cff38efb085361476f2-merged.mount: Deactivated successfully.
Nov 22 10:12:41 compute-0 podman[433816]: 2025-11-22 10:12:41.721728424 +0000 UTC m=+0.216064997 container remove 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:12:41 compute-0 systemd[1]: libpod-conmon-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope: Deactivated successfully.
Nov 22 10:12:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:41.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:41 compute-0 podman[433856]: 2025-11-22 10:12:41.937084544 +0000 UTC m=+0.064941230 container create 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:12:41 compute-0 systemd[1]: Started libpod-conmon-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope.
Nov 22 10:12:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:12:42 compute-0 podman[433856]: 2025-11-22 10:12:41.912118609 +0000 UTC m=+0.039975355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:12:42 compute-0 podman[433856]: 2025-11-22 10:12:42.02270046 +0000 UTC m=+0.150557146 container init 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 10:12:42 compute-0 podman[433856]: 2025-11-22 10:12:42.029792405 +0000 UTC m=+0.157649091 container start 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 10:12:42 compute-0 podman[433856]: 2025-11-22 10:12:42.032980113 +0000 UTC m=+0.160836869 container attach 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:12:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:42 compute-0 ceph-mon[75021]: pgmap v3330: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:42.867+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:43 compute-0 zen_curran[433872]: {
Nov 22 10:12:43 compute-0 zen_curran[433872]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_id": 1,
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "type": "bluestore"
Nov 22 10:12:43 compute-0 zen_curran[433872]:     },
Nov 22 10:12:43 compute-0 zen_curran[433872]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_id": 0,
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "type": "bluestore"
Nov 22 10:12:43 compute-0 zen_curran[433872]:     },
Nov 22 10:12:43 compute-0 zen_curran[433872]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_id": 2,
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:12:43 compute-0 zen_curran[433872]:         "type": "bluestore"
Nov 22 10:12:43 compute-0 zen_curran[433872]:     }
Nov 22 10:12:43 compute-0 zen_curran[433872]: }
Nov 22 10:12:43 compute-0 systemd[1]: libpod-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Deactivated successfully.
Nov 22 10:12:43 compute-0 podman[433856]: 2025-11-22 10:12:43.098697598 +0000 UTC m=+1.226554314 container died 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:12:43 compute-0 systemd[1]: libpod-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Consumed 1.072s CPU time.
Nov 22 10:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb-merged.mount: Deactivated successfully.
Nov 22 10:12:43 compute-0 podman[433856]: 2025-11-22 10:12:43.163168644 +0000 UTC m=+1.291025360 container remove 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 10:12:43 compute-0 systemd[1]: libpod-conmon-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Deactivated successfully.
Nov 22 10:12:43 compute-0 sudo[433752]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:12:43 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:43 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:12:43 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:43 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 31152732-d2d1-490f-bb7d-5dec00ddb5e2 does not exist
Nov 22 10:12:43 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 9482fd16-014a-4de4-8772-20b46c324a5e does not exist
Nov 22 10:12:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:43 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:43 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:12:43 compute-0 sudo[433918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:12:43 compute-0 sudo[433918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:43 compute-0 sudo[433918]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:43 compute-0 sudo[433943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:12:43 compute-0 sudo[433943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:12:43 compute-0 sudo[433943]: pam_unix(sudo:session): session closed for user root
Nov 22 10:12:43 compute-0 nova_compute[253661]: 2025-11-22 10:12:43.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:43.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:44 compute-0 ceph-mon[75021]: pgmap v3331: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1339 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:44.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:45 compute-0 nova_compute[253661]: 2025-11-22 10:12:45.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:45 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1339 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:45.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:46 compute-0 ceph-mon[75021]: pgmap v3332: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:46 compute-0 podman[433968]: 2025-11-22 10:12:46.402653211 +0000 UTC m=+0.081180279 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 10:12:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:46.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:47 compute-0 podman[433988]: 2025-11-22 10:12:47.386377518 +0000 UTC m=+0.076006732 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 10:12:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:47.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:48 compute-0 ceph-mon[75021]: pgmap v3333: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:48 compute-0 nova_compute[253661]: 2025-11-22 10:12:48.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:48.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1344 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:49.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:50 compute-0 nova_compute[253661]: 2025-11-22 10:12:50.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:50 compute-0 ceph-mon[75021]: pgmap v3334: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:50 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1344 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:50.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:51 compute-0 podman[434009]: 2025-11-22 10:12:51.449336336 +0000 UTC m=+0.135456835 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:12:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:51.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:52 compute-0 ceph-mon[75021]: pgmap v3335: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:12:52
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:12:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:12:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:52.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:53 compute-0 nova_compute[253661]: 2025-11-22 10:12:53.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:53.822+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:54 compute-0 ceph-mon[75021]: pgmap v3336: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1349 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:54.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:55 compute-0 nova_compute[253661]: 2025-11-22 10:12:55.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:55 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1349 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:55.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:12:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:12:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:12:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:12:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:12:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:56.875+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:56 compute-0 ceph-mon[75021]: pgmap v3337: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:57.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:57 compute-0 ceph-mon[75021]: pgmap v3338: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:58 compute-0 nova_compute[253661]: 2025-11-22 10:12:58.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:12:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:58.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1354 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:12:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:12:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:12:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:59.829+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:12:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:12:59 compute-0 ceph-mon[75021]: pgmap v3339: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:12:59 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1354 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:00 compute-0 nova_compute[253661]: 2025-11-22 10:13:00.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:00.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:01.833+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:01 compute-0 ceph-mon[75021]: pgmap v3340: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:02.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:13:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:13:03 compute-0 nova_compute[253661]: 2025-11-22 10:13:03.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:03.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:04 compute-0 ceph-mon[75021]: pgmap v3341: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1359 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:04.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:05 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1359 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:05 compute-0 nova_compute[253661]: 2025-11-22 10:13:05.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:05.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:06 compute-0 ceph-mon[75021]: pgmap v3342: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:06.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:07.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:08 compute-0 ceph-mon[75021]: pgmap v3343: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:08 compute-0 nova_compute[253661]: 2025-11-22 10:13:08.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:08.754+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1364 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:09.766+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:10 compute-0 ceph-mon[75021]: pgmap v3344: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:10 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1364 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:10 compute-0 nova_compute[253661]: 2025-11-22 10:13:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:10 compute-0 nova_compute[253661]: 2025-11-22 10:13:10.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:10.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:11.798+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:12 compute-0 ceph-mon[75021]: pgmap v3345: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:12 compute-0 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:12 compute-0 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:13:12 compute-0 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:13:12 compute-0 nova_compute[253661]: 2025-11-22 10:13:12.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:13:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:13:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:13:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:13:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:13:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:12.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:13 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:13:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:13:13 compute-0 nova_compute[253661]: 2025-11-22 10:13:13.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:13.771+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:14 compute-0 ceph-mon[75021]: pgmap v3346: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1369 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:14.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:15 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1369 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:15 compute-0 nova_compute[253661]: 2025-11-22 10:13:15.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:15.767+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:16 compute-0 ceph-mon[75021]: pgmap v3347: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:16.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:17 compute-0 nova_compute[253661]: 2025-11-22 10:13:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:17 compute-0 nova_compute[253661]: 2025-11-22 10:13:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:17 compute-0 podman[434036]: 2025-11-22 10:13:17.379203813 +0000 UTC m=+0.073517289 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 10:13:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:17.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:18 compute-0 ceph-mon[75021]: pgmap v3348: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:13:18 compute-0 podman[434056]: 2025-11-22 10:13:18.364473338 +0000 UTC m=+0.057902536 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 10:13:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:18.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:13:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424032644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:13:18 compute-0 nova_compute[253661]: 2025-11-22 10:13:18.737 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.010 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.012 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.012 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.013 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:13:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.082 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.082 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.100 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:13:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1424032644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:13:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1374 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:13:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043942726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.566 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.573 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.590 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.593 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:13:19 compute-0 nova_compute[253661]: 2025-11-22 10:13:19.593 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:13:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:20 compute-0 ceph-mon[75021]: pgmap v3349: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:20 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1374 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3043942726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:13:20 compute-0 nova_compute[253661]: 2025-11-22 10:13:20.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:20.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:21.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:22 compute-0 ceph-mon[75021]: pgmap v3350: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:22 compute-0 podman[434117]: 2025-11-22 10:13:22.422859834 +0000 UTC m=+0.117113583 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:13:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:22.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:23.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:23 compute-0 nova_compute[253661]: 2025-11-22 10:13:23.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:24 compute-0 ceph-mon[75021]: pgmap v3351: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1379 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:24.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:25 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1379 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:25 compute-0 nova_compute[253661]: 2025-11-22 10:13:25.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:25.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:26 compute-0 ceph-mon[75021]: pgmap v3352: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:26.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:27.697+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:13:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:13:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:28 compute-0 ceph-mon[75021]: pgmap v3353: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:28.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:28 compute-0 nova_compute[253661]: 2025-11-22 10:13:28.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1384 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:29.657+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:30 compute-0 ceph-mon[75021]: pgmap v3354: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:30 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1384 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:30 compute-0 nova_compute[253661]: 2025-11-22 10:13:30.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:30.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:31.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:32 compute-0 ceph-mon[75021]: pgmap v3355: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:32.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:33.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:33 compute-0 nova_compute[253661]: 2025-11-22 10:13:33.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:34 compute-0 ceph-mon[75021]: pgmap v3356: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1389 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:34.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:35 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1389 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:35 compute-0 nova_compute[253661]: 2025-11-22 10:13:35.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:35.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:36 compute-0 ceph-mon[75021]: pgmap v3357: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:36.686+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:37.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:38 compute-0 ceph-mon[75021]: pgmap v3358: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:38.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:38 compute-0 nova_compute[253661]: 2025-11-22 10:13:38.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1394 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:39.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:40 compute-0 ceph-mon[75021]: pgmap v3359: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:40 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1394 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:40 compute-0 nova_compute[253661]: 2025-11-22 10:13:40.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:40.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:41.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:42 compute-0 ceph-mon[75021]: pgmap v3360: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:42.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:43 compute-0 sudo[434145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:43 compute-0 sudo[434145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:43 compute-0 sudo[434145]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:43 compute-0 sudo[434170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:13:43 compute-0 sudo[434170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:43 compute-0 sudo[434170]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:43 compute-0 sudo[434195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:43 compute-0 sudo[434195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:43 compute-0 sudo[434195]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:43.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:43 compute-0 sudo[434220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:13:43 compute-0 sudo[434220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:43 compute-0 nova_compute[253661]: 2025-11-22 10:13:43.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:44 compute-0 sudo[434220]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:44 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e78f120f-5191-4bb2-8090-bb299e4bed44 does not exist
Nov 22 10:13:44 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 460acd00-9db6-4215-b796-0bb402bc0af2 does not exist
Nov 22 10:13:44 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev f4997b02-4213-4855-9246-1870bdba77f7 does not exist
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:13:44 compute-0 sudo[434275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:44 compute-0 sudo[434275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:44 compute-0 sudo[434275]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:44 compute-0 ceph-mon[75021]: pgmap v3361: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:13:44 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:13:44 compute-0 sudo[434300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:13:44 compute-0 sudo[434300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:44 compute-0 sudo[434300]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1399 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.451952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424451989, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1322, "num_deletes": 250, "total_data_size": 1511685, "memory_usage": 1546104, "flush_reason": "Manual Compaction"}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424459557, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 954256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73717, "largest_seqno": 75038, "table_properties": {"data_size": 949539, "index_size": 1920, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14793, "raw_average_key_size": 21, "raw_value_size": 938242, "raw_average_value_size": 1375, "num_data_blocks": 85, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806332, "oldest_key_time": 1763806332, "file_creation_time": 1763806424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 7652 microseconds, and 2915 cpu microseconds.
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.459597) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 954256 bytes OK
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.459618) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461699) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461716) EVENT_LOG_v1 {"time_micros": 1763806424461711, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1505541, prev total WAL file size 1505541, number of live WAL files 2.
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.462402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(931KB)], [173(10MB)]
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424462439, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12035312, "oldest_snapshot_seqno": -1}
Nov 22 10:13:44 compute-0 sudo[434325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:44 compute-0 sudo[434325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:44 compute-0 sudo[434325]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10308 keys, 9222932 bytes, temperature: kUnknown
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424524629, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9222932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9163401, "index_size": 32666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 273977, "raw_average_key_size": 26, "raw_value_size": 8987763, "raw_average_value_size": 871, "num_data_blocks": 1228, "num_entries": 10308, "num_filter_entries": 10308, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.525102) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9222932 bytes
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.527140) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.6 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.6 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(22.3) write-amplify(9.7) OK, records in: 10780, records dropped: 472 output_compression: NoCompression
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.527166) EVENT_LOG_v1 {"time_micros": 1763806424527154, "job": 108, "event": "compaction_finished", "compaction_time_micros": 62484, "compaction_time_cpu_micros": 39327, "output_level": 6, "num_output_files": 1, "total_output_size": 9222932, "num_input_records": 10780, "num_output_records": 10308, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424528052, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424530939, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.462347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:13:44 compute-0 sudo[434350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:13:44 compute-0 sudo[434350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:44.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.871386923 +0000 UTC m=+0.038090308 container create 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:13:44 compute-0 systemd[1]: Started libpod-conmon-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope.
Nov 22 10:13:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.854505738 +0000 UTC m=+0.021209153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.96021727 +0000 UTC m=+0.126920665 container init 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.968499474 +0000 UTC m=+0.135202849 container start 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.971517278 +0000 UTC m=+0.138220653 container attach 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:13:44 compute-0 adoring_payne[434432]: 167 167
Nov 22 10:13:44 compute-0 systemd[1]: libpod-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope: Deactivated successfully.
Nov 22 10:13:44 compute-0 conmon[434432]: conmon 9834d74f2dbfcf8b4a23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope/container/memory.events
Nov 22 10:13:44 compute-0 podman[434415]: 2025-11-22 10:13:44.975127396 +0000 UTC m=+0.141830771 container died 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24961de1a563531af831543ac0de4c83edf0fb931256993cc44621c810222ae-merged.mount: Deactivated successfully.
Nov 22 10:13:45 compute-0 podman[434415]: 2025-11-22 10:13:45.019945839 +0000 UTC m=+0.186649214 container remove 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:45 compute-0 systemd[1]: libpod-conmon-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope: Deactivated successfully.
Nov 22 10:13:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:45 compute-0 podman[434456]: 2025-11-22 10:13:45.26299467 +0000 UTC m=+0.077769705 container create 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 10:13:45 compute-0 systemd[1]: Started libpod-conmon-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope.
Nov 22 10:13:45 compute-0 podman[434456]: 2025-11-22 10:13:45.231757062 +0000 UTC m=+0.046532147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:45 compute-0 nova_compute[253661]: 2025-11-22 10:13:45.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:45 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1399 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:45 compute-0 podman[434456]: 2025-11-22 10:13:45.381128757 +0000 UTC m=+0.195903832 container init 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 22 10:13:45 compute-0 podman[434456]: 2025-11-22 10:13:45.396534366 +0000 UTC m=+0.211309401 container start 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:13:45 compute-0 podman[434456]: 2025-11-22 10:13:45.40236234 +0000 UTC m=+0.217137415 container attach 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:45.645+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:46 compute-0 ceph-mon[75021]: pgmap v3362: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:46 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:46 compute-0 agitated_chandrasekhar[434473]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:13:46 compute-0 agitated_chandrasekhar[434473]: --> relative data size: 1.0
Nov 22 10:13:46 compute-0 agitated_chandrasekhar[434473]: --> All data devices are unavailable
Nov 22 10:13:46 compute-0 systemd[1]: libpod-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Deactivated successfully.
Nov 22 10:13:46 compute-0 systemd[1]: libpod-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Consumed 1.119s CPU time.
Nov 22 10:13:46 compute-0 podman[434456]: 2025-11-22 10:13:46.564395414 +0000 UTC m=+1.379170439 container died 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff-merged.mount: Deactivated successfully.
Nov 22 10:13:46 compute-0 podman[434456]: 2025-11-22 10:13:46.613463352 +0000 UTC m=+1.428238347 container remove 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 22 10:13:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:46.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:46 compute-0 systemd[1]: libpod-conmon-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Deactivated successfully.
Nov 22 10:13:46 compute-0 sudo[434350]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:46 compute-0 sudo[434515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:46 compute-0 sudo[434515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:46 compute-0 sudo[434515]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:46 compute-0 sudo[434540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:13:46 compute-0 sudo[434540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:46 compute-0 sudo[434540]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:46 compute-0 sudo[434565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:46 compute-0 sudo[434565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:46 compute-0 sudo[434565]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:46 compute-0 sudo[434590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:13:46 compute-0 sudo[434590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.233869049 +0000 UTC m=+0.059284001 container create 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 10:13:47 compute-0 systemd[1]: Started libpod-conmon-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope.
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.200845206 +0000 UTC m=+0.026260238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.327046761 +0000 UTC m=+0.152461723 container init 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.334786602 +0000 UTC m=+0.160201564 container start 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.338488832 +0000 UTC m=+0.163903804 container attach 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:13:47 compute-0 nervous_brattain[434671]: 167 167
Nov 22 10:13:47 compute-0 systemd[1]: libpod-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope: Deactivated successfully.
Nov 22 10:13:47 compute-0 conmon[434671]: conmon 538c8e797dcb5ad086e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope/container/memory.events
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.343167128 +0000 UTC m=+0.168582060 container died 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56258037361383fb64f4bfd66dcc2dab993eba206f47d325c9fc4388d3616c9-merged.mount: Deactivated successfully.
Nov 22 10:13:47 compute-0 podman[434655]: 2025-11-22 10:13:47.397795372 +0000 UTC m=+0.223210344 container remove 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 10:13:47 compute-0 systemd[1]: libpod-conmon-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope: Deactivated successfully.
Nov 22 10:13:47 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:47 compute-0 podman[434689]: 2025-11-22 10:13:47.513833568 +0000 UTC m=+0.069574734 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 10:13:47 compute-0 podman[434714]: 2025-11-22 10:13:47.561847399 +0000 UTC m=+0.039492313 container create 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:13:47 compute-0 systemd[1]: Started libpod-conmon-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope.
Nov 22 10:13:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:47.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:47 compute-0 podman[434714]: 2025-11-22 10:13:47.545617379 +0000 UTC m=+0.023262313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:47 compute-0 podman[434714]: 2025-11-22 10:13:47.658103418 +0000 UTC m=+0.135748332 container init 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:47 compute-0 podman[434714]: 2025-11-22 10:13:47.667184131 +0000 UTC m=+0.144829045 container start 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 10:13:47 compute-0 podman[434714]: 2025-11-22 10:13:47.670197915 +0000 UTC m=+0.147842829 container attach 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 10:13:48 compute-0 blissful_tharp[434731]: {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     "0": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "devices": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "/dev/loop3"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             ],
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_name": "ceph_lv0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_size": "21470642176",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "name": "ceph_lv0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "tags": {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_name": "ceph",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.crush_device_class": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.encrypted": "0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_id": "0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.vdo": "0"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             },
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "vg_name": "ceph_vg0"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         }
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     ],
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     "1": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "devices": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "/dev/loop4"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             ],
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_name": "ceph_lv1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_size": "21470642176",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "name": "ceph_lv1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "tags": {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_name": "ceph",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.crush_device_class": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.encrypted": "0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_id": "1",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.vdo": "0"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             },
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "vg_name": "ceph_vg1"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         }
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     ],
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     "2": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "devices": [
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "/dev/loop5"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             ],
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_name": "ceph_lv2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_size": "21470642176",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "name": "ceph_lv2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "tags": {
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.cluster_name": "ceph",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.crush_device_class": "",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.encrypted": "0",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osd_id": "2",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:                 "ceph.vdo": "0"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             },
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "type": "block",
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:             "vg_name": "ceph_vg2"
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:         }
Nov 22 10:13:48 compute-0 blissful_tharp[434731]:     ]
Nov 22 10:13:48 compute-0 blissful_tharp[434731]: }
Nov 22 10:13:48 compute-0 ceph-mon[75021]: pgmap v3363: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:48 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:48 compute-0 systemd[1]: libpod-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope: Deactivated successfully.
Nov 22 10:13:48 compute-0 podman[434714]: 2025-11-22 10:13:48.496656093 +0000 UTC m=+0.974301057 container died 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b-merged.mount: Deactivated successfully.
Nov 22 10:13:48 compute-0 podman[434714]: 2025-11-22 10:13:48.566306336 +0000 UTC m=+1.043951250 container remove 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:13:48 compute-0 systemd[1]: libpod-conmon-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope: Deactivated successfully.
Nov 22 10:13:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:48.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:48 compute-0 sudo[434590]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:48 compute-0 podman[434741]: 2025-11-22 10:13:48.635088208 +0000 UTC m=+0.100488533 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:13:48 compute-0 sudo[434772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:48 compute-0 sudo[434772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:13:48 compute-0 sudo[434772]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:48 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.91 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:13:48 compute-0 nova_compute[253661]: 2025-11-22 10:13:48.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:48 compute-0 sudo[434798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:13:48 compute-0 sudo[434798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:48 compute-0 sudo[434798]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:48 compute-0 sudo[434823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:48 compute-0 sudo[434823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:48 compute-0 sudo[434823]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:48 compute-0 sudo[434848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:13:48 compute-0 sudo[434848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.340908417 +0000 UTC m=+0.050146445 container create 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:49 compute-0 systemd[1]: Started libpod-conmon-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope.
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.321214792 +0000 UTC m=+0.030452850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.436300325 +0000 UTC m=+0.145538433 container init 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.44747174 +0000 UTC m=+0.156709748 container start 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 10:13:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1404 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.451107368 +0000 UTC m=+0.160345456 container attach 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:13:49 compute-0 bold_dubinsky[434926]: 167 167
Nov 22 10:13:49 compute-0 systemd[1]: libpod-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope: Deactivated successfully.
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.455282921 +0000 UTC m=+0.164520989 container died 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f85c30c4036b57f7569122a474ee51af4c1014c4c42f38104e447dcf3bb300b-merged.mount: Deactivated successfully.
Nov 22 10:13:49 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:49 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1404 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:49 compute-0 podman[434910]: 2025-11-22 10:13:49.505484207 +0000 UTC m=+0.214722245 container remove 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:49 compute-0 systemd[1]: libpod-conmon-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope: Deactivated successfully.
Nov 22 10:13:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:49.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:49 compute-0 podman[434950]: 2025-11-22 10:13:49.700509226 +0000 UTC m=+0.060827168 container create 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:13:49 compute-0 systemd[1]: Started libpod-conmon-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope.
Nov 22 10:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:49 compute-0 podman[434950]: 2025-11-22 10:13:49.681630971 +0000 UTC m=+0.041948933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:13:49 compute-0 podman[434950]: 2025-11-22 10:13:49.785970298 +0000 UTC m=+0.146288280 container init 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:13:49 compute-0 podman[434950]: 2025-11-22 10:13:49.79493559 +0000 UTC m=+0.155253532 container start 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:13:49 compute-0 podman[434950]: 2025-11-22 10:13:49.798745864 +0000 UTC m=+0.159063806 container attach 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 10:13:50 compute-0 nova_compute[253661]: 2025-11-22 10:13:50.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:50 compute-0 ceph-mon[75021]: pgmap v3364: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:50 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:50.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:50 compute-0 stoic_herschel[434967]: {
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_id": 1,
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "type": "bluestore"
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     },
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_id": 0,
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "type": "bluestore"
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     },
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_id": 2,
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:         "type": "bluestore"
Nov 22 10:13:50 compute-0 stoic_herschel[434967]:     }
Nov 22 10:13:50 compute-0 stoic_herschel[434967]: }
Nov 22 10:13:50 compute-0 systemd[1]: libpod-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Deactivated successfully.
Nov 22 10:13:50 compute-0 podman[434950]: 2025-11-22 10:13:50.904951144 +0000 UTC m=+1.265269136 container died 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:13:50 compute-0 systemd[1]: libpod-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Consumed 1.117s CPU time.
Nov 22 10:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211-merged.mount: Deactivated successfully.
Nov 22 10:13:50 compute-0 podman[434950]: 2025-11-22 10:13:50.983132097 +0000 UTC m=+1.343450029 container remove 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:13:50 compute-0 systemd[1]: libpod-conmon-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Deactivated successfully.
Nov 22 10:13:51 compute-0 sudo[434848]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:51 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:13:51 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev e475411b-83a4-4609-9fb4-1b89c446225d does not exist
Nov 22 10:13:51 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5ee3d942-9cfc-40b1-8a29-2a8ed3c8e8c8 does not exist
Nov 22 10:13:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:51 compute-0 sudo[435012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:13:51 compute-0 sudo[435012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:51 compute-0 sudo[435012]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:51 compute-0 sudo[435037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:13:51 compute-0 sudo[435037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:13:51 compute-0 sudo[435037]: pam_unix(sudo:session): session closed for user root
Nov 22 10:13:51 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:51 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:13:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:51.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:13:52
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:13:52 compute-0 ceph-mon[75021]: pgmap v3365: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:52 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:52.575+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:13:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:13:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:53 compute-0 podman[435062]: 2025-11-22 10:13:53.458991002 +0000 UTC m=+0.136820418 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:13:53 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:53.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:53 compute-0 nova_compute[253661]: 2025-11-22 10:13:53.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1409 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:54 compute-0 ceph-mon[75021]: pgmap v3366: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:54 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:54 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1409 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:54.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:13:55 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 179K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.83 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 274 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:13:55 compute-0 nova_compute[253661]: 2025-11-22 10:13:55.350 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:55 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:55.630+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:56 compute-0 ceph-mon[75021]: pgmap v3367: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:56 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:56.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:13:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:13:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:13:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:13:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:13:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:57 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:57.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:58 compute-0 ceph-mon[75021]: pgmap v3368: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:58 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:58.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:58 compute-0 nova_compute[253661]: 2025-11-22 10:13:58.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:13:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1415 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:13:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:13:59 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:13:59 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1415 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:13:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:59.651+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:13:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:00 compute-0 nova_compute[253661]: 2025-11-22 10:14:00.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:14:00 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:14:00 compute-0 ceph-mon[75021]: pgmap v3369: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:00 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:00.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:01 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:01.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:02 compute-0 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 10:14:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:02.579+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:02 compute-0 ceph-mon[75021]: pgmap v3370: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:02 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:14:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:14:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:03.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:03 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:03 compute-0 nova_compute[253661]: 2025-11-22 10:14:03.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1420 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:04.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:04 compute-0 ceph-mon[75021]: pgmap v3371: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:04 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:04 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1420 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:05 compute-0 nova_compute[253661]: 2025-11-22 10:14:05.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:05 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:05.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.233 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.248 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.255 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Removable base files: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 22 10:14:06 compute-0 nova_compute[253661]: 2025-11-22 10:14:06.257 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 22 10:14:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:06.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:06 compute-0 ceph-mon[75021]: pgmap v3372: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:06 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:07.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:07 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:08.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:08 compute-0 ceph-mon[75021]: pgmap v3373: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:08 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:08 compute-0 nova_compute[253661]: 2025-11-22 10:14:08.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1425 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:09.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:09 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:09 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1425 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:10 compute-0 nova_compute[253661]: 2025-11-22 10:14:10.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:10.640+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:10 compute-0 ceph-mon[75021]: pgmap v3374: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:10 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:11 compute-0 nova_compute[253661]: 2025-11-22 10:14:11.256 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:11.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:11 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:14:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:14:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:14:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:14:12 compute-0 ceph-mon[75021]: pgmap v3375: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:12 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:14:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:14:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:12.683+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:13 compute-0 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:13 compute-0 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:14:13 compute-0 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:14:13 compute-0 nova_compute[253661]: 2025-11-22 10:14:13.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:14:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:13.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:13 compute-0 nova_compute[253661]: 2025-11-22 10:14:13.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1429 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:14 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:14 compute-0 ceph-mon[75021]: pgmap v3376: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:14 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1429 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:14.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:15 compute-0 nova_compute[253661]: 2025-11-22 10:14:15.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:15 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:15.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:16.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:16 compute-0 ceph-mon[75021]: pgmap v3377: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:16 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:17.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:14:18 compute-0 podman[435090]: 2025-11-22 10:14:18.427444441 +0000 UTC m=+0.097554392 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:18.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:18 compute-0 ceph-mon[75021]: pgmap v3378: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:18 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:14:18 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309196864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.765 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:14:18 compute-0 nova_compute[253661]: 2025-11-22 10:14:18.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.005 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.007 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3542MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.008 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.008 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.075 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:14:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.099 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:14:19 compute-0 podman[435150]: 2025-11-22 10:14:19.388396807 +0000 UTC m=+0.076149495 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 10:14:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1434 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:14:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398911495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.586 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.593 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.604 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.606 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:14:19 compute-0 nova_compute[253661]: 2025-11-22 10:14:19.606 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:14:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:19.673+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1309196864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:14:19 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1434 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:19 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1398911495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:14:20 compute-0 nova_compute[253661]: 2025-11-22 10:14:20.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:20 compute-0 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:20 compute-0 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:20 compute-0 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:20 compute-0 nova_compute[253661]: 2025-11-22 10:14:20.606 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:14:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:20.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:20 compute-0 ceph-mon[75021]: pgmap v3379: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:21.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:22.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:22 compute-0 ceph-mon[75021]: pgmap v3380: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:23.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:23 compute-0 nova_compute[253661]: 2025-11-22 10:14:23.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:24 compute-0 podman[435174]: 2025-11-22 10:14:24.438132427 +0000 UTC m=+0.133325111 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 22 10:14:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1439 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:24.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:24 compute-0 ceph-mon[75021]: pgmap v3381: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:24 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1439 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:25 compute-0 nova_compute[253661]: 2025-11-22 10:14:25.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:25.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:26.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:26 compute-0 ceph-mon[75021]: pgmap v3382: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:27 compute-0 nova_compute[253661]: 2025-11-22 10:14:27.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:27.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.022 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:14:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.023 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:14:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.023 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:14:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:28.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:28 compute-0 ceph-mon[75021]: pgmap v3383: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:28 compute-0 nova_compute[253661]: 2025-11-22 10:14:28.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1444 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:29.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:29 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1444 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:30 compute-0 nova_compute[253661]: 2025-11-22 10:14:30.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:30.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:30 compute-0 ceph-mon[75021]: pgmap v3384: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:31.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:32 compute-0 nova_compute[253661]: 2025-11-22 10:14:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:32 compute-0 nova_compute[253661]: 2025-11-22 10:14:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 10:14:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:32.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:32 compute-0 ceph-mon[75021]: pgmap v3385: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:33.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:33 compute-0 ceph-mon[75021]: pgmap v3386: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:33 compute-0 nova_compute[253661]: 2025-11-22 10:14:33.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1449 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:34.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:34 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1449 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:35 compute-0 nova_compute[253661]: 2025-11-22 10:14:35.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:35.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:35 compute-0 ceph-mon[75021]: pgmap v3387: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:36.670+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:37.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:37 compute-0 ceph-mon[75021]: pgmap v3388: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:38 compute-0 nova_compute[253661]: 2025-11-22 10:14:38.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:14:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:38.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:38 compute-0 nova_compute[253661]: 2025-11-22 10:14:38.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:38 compute-0 sshd-session[435201]: Invalid user validator from 92.118.39.92 port 58194
Nov 22 10:14:39 compute-0 sshd-session[435201]: Connection closed by invalid user validator 92.118.39.92 port 58194 [preauth]
Nov 22 10:14:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1454 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:39.569+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:39 compute-0 ceph-mon[75021]: pgmap v3389: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:39 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1454 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:14:40 compute-0 nova_compute[253661]: 2025-11-22 10:14:40.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:40.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:40 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:41.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:41 compute-0 ceph-mon[75021]: pgmap v3390: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:41 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:42.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:42 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:14:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:43.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:43 compute-0 nova_compute[253661]: 2025-11-22 10:14:43.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:43 compute-0 ceph-mon[75021]: pgmap v3391: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 10:14:43 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1460 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.466004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484466034, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 932, "num_deletes": 257, "total_data_size": 993086, "memory_usage": 1010328, "flush_reason": "Manual Compaction"}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484473786, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 978110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75039, "largest_seqno": 75970, "table_properties": {"data_size": 973776, "index_size": 1857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11057, "raw_average_key_size": 19, "raw_value_size": 964334, "raw_average_value_size": 1740, "num_data_blocks": 82, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806425, "oldest_key_time": 1763806425, "file_creation_time": 1763806484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 7819 microseconds, and 3708 cpu microseconds.
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.473820) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 978110 bytes OK
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.473838) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475522) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475538) EVENT_LOG_v1 {"time_micros": 1763806484475534, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 988441, prev total WAL file size 988441, number of live WAL files 2.
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.476092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373639' seq:0, type:0; will stop at (end)
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(955KB)], [176(9006KB)]
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484476160, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10201042, "oldest_snapshot_seqno": -1}
Nov 22 10:14:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:44.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10337 keys, 10060879 bytes, temperature: kUnknown
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484526585, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10060879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10000044, "index_size": 33906, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 275907, "raw_average_key_size": 26, "raw_value_size": 9822773, "raw_average_value_size": 950, "num_data_blocks": 1278, "num_entries": 10337, "num_filter_entries": 10337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.526851) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10060879 bytes
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.528038) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 199.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.8 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(20.7) write-amplify(10.3) OK, records in: 10862, records dropped: 525 output_compression: NoCompression
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.528053) EVENT_LOG_v1 {"time_micros": 1763806484528046, "job": 110, "event": "compaction_finished", "compaction_time_micros": 50500, "compaction_time_cpu_micros": 27274, "output_level": 6, "num_output_files": 1, "total_output_size": 10060879, "num_input_records": 10862, "num_output_records": 10337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484528446, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484529940, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:44 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1460 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:44 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:45 compute-0 nova_compute[253661]: 2025-11-22 10:14:45.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:45.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:45 compute-0 ceph-mon[75021]: pgmap v3392: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:45 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:46.504+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:46 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:47.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:47 compute-0 ceph-mon[75021]: pgmap v3393: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:47 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:48.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:48 compute-0 nova_compute[253661]: 2025-11-22 10:14:48.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:48 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:49 compute-0 podman[435203]: 2025-11-22 10:14:49.355863776 +0000 UTC m=+0.055155628 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:14:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1465 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:49.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:49 compute-0 ceph-mon[75021]: pgmap v3394: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:49 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1465 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:49 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:50 compute-0 nova_compute[253661]: 2025-11-22 10:14:50.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:50 compute-0 podman[435223]: 2025-11-22 10:14:50.374190404 +0000 UTC m=+0.067577303 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:14:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:50.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:50 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:51 compute-0 sudo[435243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:51 compute-0 sudo[435243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:51 compute-0 sudo[435243]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:51 compute-0 sudo[435268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:14:51 compute-0 sudo[435268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:51 compute-0 sudo[435268]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:51 compute-0 sudo[435293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:51 compute-0 sudo[435293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:51 compute-0 sudo[435293]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:51 compute-0 sudo[435318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:14:51 compute-0 sudo[435318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:51.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:51 compute-0 ceph-mon[75021]: pgmap v3395: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:51 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:52 compute-0 sudo[435318]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 784831a1-432e-4828-8479-a492076fc052 does not exist
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 219825e0-58bf-4a96-a0c8-8308240a8647 does not exist
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8b10c3d3-bddc-4693-932f-ad1bd6ced888 does not exist
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:14:52 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:14:52 compute-0 sudo[435374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:52 compute-0 sudo[435374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:52 compute-0 sudo[435374]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:52 compute-0 sudo[435399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:14:52 compute-0 sudo[435399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:52 compute-0 sudo[435399]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:52 compute-0 sudo[435424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:52 compute-0 sudo[435424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:52 compute-0 sudo[435424]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:52 compute-0 sudo[435449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:14:52 compute-0 sudo[435449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:14:52
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', '.rgw.root', 'images']
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:14:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:52.511+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.650877237 +0000 UTC m=+0.036015006 container create 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:14:52 compute-0 systemd[1]: Started libpod-conmon-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope.
Nov 22 10:14:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.634585847 +0000 UTC m=+0.019723626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.739100378 +0000 UTC m=+0.124238167 container init 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.745267131 +0000 UTC m=+0.130404890 container start 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.748824258 +0000 UTC m=+0.133962037 container attach 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:14:52 compute-0 exciting_jackson[435531]: 167 167
Nov 22 10:14:52 compute-0 systemd[1]: libpod-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope: Deactivated successfully.
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.751548215 +0000 UTC m=+0.136685974 container died 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-76e5fee803d649883e9bb8f0d2222aa05492f7aad9ce02d9726f1dbc9ce366ea-merged.mount: Deactivated successfully.
Nov 22 10:14:52 compute-0 podman[435514]: 2025-11-22 10:14:52.793911458 +0000 UTC m=+0.179049227 container remove 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:14:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:14:52 compute-0 systemd[1]: libpod-conmon-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope: Deactivated successfully.
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:14:52 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:52 compute-0 podman[435553]: 2025-11-22 10:14:52.964748422 +0000 UTC m=+0.040172140 container create 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 10:14:52 compute-0 systemd[1]: Started libpod-conmon-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope.
Nov 22 10:14:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:53 compute-0 podman[435553]: 2025-11-22 10:14:53.034380314 +0000 UTC m=+0.109804052 container init 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:14:53 compute-0 podman[435553]: 2025-11-22 10:14:52.946073101 +0000 UTC m=+0.021496849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:53 compute-0 podman[435553]: 2025-11-22 10:14:53.044803281 +0000 UTC m=+0.120226999 container start 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 10:14:53 compute-0 podman[435553]: 2025-11-22 10:14:53.049786843 +0000 UTC m=+0.125210581 container attach 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 10:14:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:53.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:53 compute-0 nova_compute[253661]: 2025-11-22 10:14:53.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:53 compute-0 ceph-mon[75021]: pgmap v3396: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:14:53 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:54 compute-0 agitated_ishizaka[435569]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:14:54 compute-0 agitated_ishizaka[435569]: --> relative data size: 1.0
Nov 22 10:14:54 compute-0 agitated_ishizaka[435569]: --> All data devices are unavailable
Nov 22 10:14:54 compute-0 systemd[1]: libpod-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope: Deactivated successfully.
Nov 22 10:14:54 compute-0 podman[435553]: 2025-11-22 10:14:54.113268314 +0000 UTC m=+1.188692042 container died 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c-merged.mount: Deactivated successfully.
Nov 22 10:14:54 compute-0 podman[435553]: 2025-11-22 10:14:54.169963029 +0000 UTC m=+1.245386767 container remove 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:54 compute-0 systemd[1]: libpod-conmon-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope: Deactivated successfully.
Nov 22 10:14:54 compute-0 sudo[435449]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:54 compute-0 sudo[435610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:54 compute-0 sudo[435610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:54 compute-0 sudo[435610]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:54 compute-0 sudo[435635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:14:54 compute-0 sudo[435635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:54 compute-0 sudo[435635]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:54 compute-0 sudo[435660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:54 compute-0 sudo[435660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:54 compute-0 sudo[435660]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1470 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:54 compute-0 sudo[435685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:14:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:54.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:54 compute-0 sudo[435685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:54 compute-0 podman[435709]: 2025-11-22 10:14:54.633523595 +0000 UTC m=+0.131934317 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.855709803 +0000 UTC m=+0.039434091 container create 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:14:54 compute-0 systemd[1]: Started libpod-conmon-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope.
Nov 22 10:14:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.837757001 +0000 UTC m=+0.021481329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.93804582 +0000 UTC m=+0.121770148 container init 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.946096918 +0000 UTC m=+0.129821206 container start 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.949372598 +0000 UTC m=+0.133096936 container attach 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 10:14:54 compute-0 vigorous_sinoussi[435794]: 167 167
Nov 22 10:14:54 compute-0 systemd[1]: libpod-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope: Deactivated successfully.
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.951338336 +0000 UTC m=+0.135062644 container died 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:14:54 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1470 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:54 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76fc2089509c928ced8fff1506ef593ee42fb8ebfe8c5b45f2889103c28a544-merged.mount: Deactivated successfully.
Nov 22 10:14:54 compute-0 podman[435779]: 2025-11-22 10:14:54.992203163 +0000 UTC m=+0.175927491 container remove 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:14:55 compute-0 systemd[1]: libpod-conmon-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope: Deactivated successfully.
Nov 22 10:14:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.149966894 +0000 UTC m=+0.044116086 container create da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 10:14:55 compute-0 systemd[1]: Started libpod-conmon-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope.
Nov 22 10:14:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.131280645 +0000 UTC m=+0.025429837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.229209914 +0000 UTC m=+0.123359076 container init da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.236602626 +0000 UTC m=+0.130751788 container start da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.239659271 +0000 UTC m=+0.133808433 container attach da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:55 compute-0 nova_compute[253661]: 2025-11-22 10:14:55.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:55.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]: {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     "0": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "devices": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "/dev/loop3"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             ],
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_name": "ceph_lv0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_size": "21470642176",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "name": "ceph_lv0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "tags": {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_name": "ceph",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.crush_device_class": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.encrypted": "0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_id": "0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.vdo": "0"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             },
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "vg_name": "ceph_vg0"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         }
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     ],
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     "1": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "devices": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "/dev/loop4"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             ],
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_name": "ceph_lv1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_size": "21470642176",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "name": "ceph_lv1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "tags": {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_name": "ceph",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.crush_device_class": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.encrypted": "0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_id": "1",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.vdo": "0"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             },
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "vg_name": "ceph_vg1"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         }
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     ],
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     "2": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "devices": [
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "/dev/loop5"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             ],
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_name": "ceph_lv2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_size": "21470642176",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "name": "ceph_lv2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "tags": {
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.cluster_name": "ceph",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.crush_device_class": "",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.encrypted": "0",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osd_id": "2",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:                 "ceph.vdo": "0"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             },
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "type": "block",
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:             "vg_name": "ceph_vg2"
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:         }
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]:     ]
Nov 22 10:14:55 compute-0 nice_heyrovsky[435834]: }
Nov 22 10:14:55 compute-0 systemd[1]: libpod-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope: Deactivated successfully.
Nov 22 10:14:55 compute-0 podman[435818]: 2025-11-22 10:14:55.977388795 +0000 UTC m=+0.871537977 container died da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:14:55 compute-0 ceph-mon[75021]: pgmap v3397: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 10:14:55 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58-merged.mount: Deactivated successfully.
Nov 22 10:14:56 compute-0 podman[435818]: 2025-11-22 10:14:56.036463688 +0000 UTC m=+0.930612850 container remove da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 10:14:56 compute-0 systemd[1]: libpod-conmon-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope: Deactivated successfully.
Nov 22 10:14:56 compute-0 sudo[435685]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:56 compute-0 sudo[435857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:56 compute-0 sudo[435857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:56 compute-0 sudo[435857]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:56 compute-0 sudo[435882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:14:56 compute-0 sudo[435882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:56 compute-0 sudo[435882]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:56 compute-0 sudo[435907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:56 compute-0 sudo[435907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:56 compute-0 sudo[435907]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:56 compute-0 sudo[435932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:14:56 compute-0 sudo[435932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:56.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.688087344 +0000 UTC m=+0.051577641 container create d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:14:56 compute-0 systemd[1]: Started libpod-conmon-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope.
Nov 22 10:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.764973276 +0000 UTC m=+0.128463573 container init d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.670599133 +0000 UTC m=+0.034089420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.773225449 +0000 UTC m=+0.136715756 container start d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.777147675 +0000 UTC m=+0.140637992 container attach d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 10:14:56 compute-0 gallant_ellis[436013]: 167 167
Nov 22 10:14:56 compute-0 systemd[1]: libpod-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope: Deactivated successfully.
Nov 22 10:14:56 compute-0 conmon[436013]: conmon d403b08139431563078b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope/container/memory.events
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.780984809 +0000 UTC m=+0.144475096 container died d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:14:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:14:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:14:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:14:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:14:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45814c6baed6f671b5d73afd854a243c743a1392e5dc84d94c0c46c83ff67bf-merged.mount: Deactivated successfully.
Nov 22 10:14:56 compute-0 podman[435997]: 2025-11-22 10:14:56.819957108 +0000 UTC m=+0.183447375 container remove d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:56 compute-0 systemd[1]: libpod-conmon-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope: Deactivated successfully.
Nov 22 10:14:56 compute-0 podman[436036]: 2025-11-22 10:14:56.969480908 +0000 UTC m=+0.040022066 container create 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:14:56 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:57 compute-0 systemd[1]: Started libpod-conmon-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope.
Nov 22 10:14:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:14:57 compute-0 podman[436036]: 2025-11-22 10:14:57.043490069 +0000 UTC m=+0.114031277 container init 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:14:57 compute-0 podman[436036]: 2025-11-22 10:14:56.953698659 +0000 UTC m=+0.024239827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:14:57 compute-0 podman[436036]: 2025-11-22 10:14:57.049174759 +0000 UTC m=+0.119715917 container start 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:14:57 compute-0 podman[436036]: 2025-11-22 10:14:57.052185423 +0000 UTC m=+0.122726641 container attach 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:14:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:57.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:58 compute-0 ceph-mon[75021]: pgmap v3398: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:58 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]: {
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_id": 1,
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "type": "bluestore"
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     },
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_id": 0,
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "type": "bluestore"
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     },
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_id": 2,
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:         "type": "bluestore"
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]:     }
Nov 22 10:14:58 compute-0 focused_ardinghelli[436052]: }
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.017913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498017939, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 251, "total_data_size": 237660, "memory_usage": 245320, "flush_reason": "Manual Compaction"}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498021083, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 234151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75971, "largest_seqno": 76392, "table_properties": {"data_size": 231743, "index_size": 443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6423, "raw_average_key_size": 19, "raw_value_size": 226777, "raw_average_value_size": 678, "num_data_blocks": 20, "num_entries": 334, "num_filter_entries": 334, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806484, "oldest_key_time": 1763806484, "file_creation_time": 1763806498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 3209 microseconds, and 1052 cpu microseconds.
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.021118) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 234151 bytes OK
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.021138) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022345) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022362) EVENT_LOG_v1 {"time_micros": 1763806498022356, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 234983, prev total WAL file size 234983, number of live WAL files 2.
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022742) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(228KB)], [179(9825KB)]
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498022795, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10295030, "oldest_snapshot_seqno": -1}
Nov 22 10:14:58 compute-0 systemd[1]: libpod-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope: Deactivated successfully.
Nov 22 10:14:58 compute-0 podman[436036]: 2025-11-22 10:14:58.040225286 +0000 UTC m=+1.110766444 container died 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10160 keys, 8915210 bytes, temperature: kUnknown
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498076244, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 8915210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8856403, "index_size": 32326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 273035, "raw_average_key_size": 26, "raw_value_size": 8682946, "raw_average_value_size": 854, "num_data_blocks": 1205, "num_entries": 10160, "num_filter_entries": 10160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2-merged.mount: Deactivated successfully.
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.076605) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 8915210 bytes
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.081081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 166.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(82.0) write-amplify(38.1) OK, records in: 10671, records dropped: 511 output_compression: NoCompression
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.081110) EVENT_LOG_v1 {"time_micros": 1763806498081098, "job": 112, "event": "compaction_finished", "compaction_time_micros": 53596, "compaction_time_cpu_micros": 28771, "output_level": 6, "num_output_files": 1, "total_output_size": 8915210, "num_input_records": 10671, "num_output_records": 10160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498081431, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498083298, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:14:58 compute-0 podman[436036]: 2025-11-22 10:14:58.105441471 +0000 UTC m=+1.175982629 container remove 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:14:58 compute-0 systemd[1]: libpod-conmon-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope: Deactivated successfully.
Nov 22 10:14:58 compute-0 sudo[435932]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:14:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:58 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:14:58 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 704c40b2-59d9-4ab1-b4d9-0c4f843f0ddb does not exist
Nov 22 10:14:58 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1f63fa22-3fa0-46f5-91a6-d5ffcdc02885 does not exist
Nov 22 10:14:58 compute-0 sudo[436098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:14:58 compute-0 sudo[436098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:58 compute-0 sudo[436098]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:58 compute-0 sudo[436123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:14:58 compute-0 sudo[436123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:14:58 compute-0 sudo[436123]: pam_unix(sudo:session): session closed for user root
Nov 22 10:14:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:58.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:58 compute-0 nova_compute[253661]: 2025-11-22 10:14:58.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:14:59 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:59 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:14:59 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:14:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1475 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:14:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:14:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:14:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:59.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:14:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:00 compute-0 ceph-mon[75021]: pgmap v3399: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:00 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1475 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:00 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:00 compute-0 nova_compute[253661]: 2025-11-22 10:15:00.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:00.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:01 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:01.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:02 compute-0 ceph-mon[75021]: pgmap v3400: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:02 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:02.573+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:03 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:15:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:15:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:03.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:03 compute-0 nova_compute[253661]: 2025-11-22 10:15:03.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:04 compute-0 ceph-mon[75021]: pgmap v3401: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:04 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1479 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:04.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:05 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1479 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:05 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:05 compute-0 nova_compute[253661]: 2025-11-22 10:15:05.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:05.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:06 compute-0 ceph-mon[75021]: pgmap v3402: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:06 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:06.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:07 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:07.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:08 compute-0 ceph-mon[75021]: pgmap v3403: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:08 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:08.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:08 compute-0 nova_compute[253661]: 2025-11-22 10:15:08.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:09 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:09 compute-0 nova_compute[253661]: 2025-11-22 10:15:09.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:09 compute-0 nova_compute[253661]: 2025-11-22 10:15:09.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 10:15:09 compute-0 nova_compute[253661]: 2025-11-22 10:15:09.263 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 10:15:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1484 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:09.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:10 compute-0 ceph-mon[75021]: pgmap v3404: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:10 compute-0 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1484 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:10 compute-0 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:15:10 compute-0 nova_compute[253661]: 2025-11-22 10:15:10.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:10.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:11 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:11.578+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:12 compute-0 ceph-mon[75021]: pgmap v3405: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:12 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:15:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:15:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:15:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:15:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:12.572+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:15:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:15:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:15:13 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.298 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.298 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:13.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:13 compute-0 nova_compute[253661]: 2025-11-22 10:15:13.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:14 compute-0 ceph-mon[75021]: pgmap v3406: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:14 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1490 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:14.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:15 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1490 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:15 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:15 compute-0 nova_compute[253661]: 2025-11-22 10:15:15.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:15.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:16 compute-0 ceph-mon[75021]: pgmap v3407: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:16 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:16.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:17 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:17.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:18 compute-0 nova_compute[253661]: 2025-11-22 10:15:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:18 compute-0 ceph-mon[75021]: pgmap v3408: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:18 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:18.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:18 compute-0 nova_compute[253661]: 2025-11-22 10:15:18.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:15:19 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1495 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:19.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:15:19 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735788013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.838 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.896 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.897 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:15:19 compute-0 nova_compute[253661]: 2025-11-22 10:15:19.915 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:15:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:15:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337356490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:15:20 compute-0 ceph-mon[75021]: pgmap v3409: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:20 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1495 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:20 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/735788013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:15:20 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1337356490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.339 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.345 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.358 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.360 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:15:20 compute-0 podman[436190]: 2025-11-22 10:15:20.36907643 +0000 UTC m=+0.059785933 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 10:15:20 compute-0 nova_compute[253661]: 2025-11-22 10:15:20.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:20.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:21 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:21 compute-0 nova_compute[253661]: 2025-11-22 10:15:21.352 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:21 compute-0 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:21 compute-0 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:21 compute-0 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:21 compute-0 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:15:21 compute-0 podman[436211]: 2025-11-22 10:15:21.370114873 +0000 UTC m=+0.067043740 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 10:15:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:21.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:22 compute-0 ceph-mon[75021]: pgmap v3410: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:22 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:22.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 10:15:23 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:23.564+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:23 compute-0 nova_compute[253661]: 2025-11-22 10:15:23.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:24 compute-0 ceph-mon[75021]: pgmap v3411: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 10:15:24 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1500 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:24.533+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 22 10:15:25 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1500 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:25 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:25 compute-0 nova_compute[253661]: 2025-11-22 10:15:25.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:25 compute-0 podman[436231]: 2025-11-22 10:15:25.468293549 +0000 UTC m=+0.153281992 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 10:15:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:25.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:26 compute-0 ceph-mon[75021]: pgmap v3412: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 22 10:15:26 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:26.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 10:15:27 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:27.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:15:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:15:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:15:28 compute-0 ceph-mon[75021]: pgmap v3413: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 10:15:28 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:28.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:28 compute-0 nova_compute[253661]: 2025-11-22 10:15:28.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:29 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1505 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:29.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:30 compute-0 nova_compute[253661]: 2025-11-22 10:15:30.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:30 compute-0 ceph-mon[75021]: pgmap v3414: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:30 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1505 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:30 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:30.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:31 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:31.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:32 compute-0 ceph-mon[75021]: pgmap v3415: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:32 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:32.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:33 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:33.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:33 compute-0 nova_compute[253661]: 2025-11-22 10:15:33.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1510 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:34 compute-0 ceph-mon[75021]: pgmap v3416: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 10:15:34 compute-0 nova_compute[253661]: 2025-11-22 10:15:34.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:15:34 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:34.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 10:15:35 compute-0 nova_compute[253661]: 2025-11-22 10:15:35.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:35 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1510 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:35 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:36 compute-0 ceph-mon[75021]: pgmap v3417: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 10:15:36 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:36.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 10:15:37 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:37.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:38 compute-0 ceph-mon[75021]: pgmap v3418: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 10:15:38 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:38.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:38 compute-0 nova_compute[253661]: 2025-11-22 10:15:38.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 10:15:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1515 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:39 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:39 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1515 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:39.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:40 compute-0 nova_compute[253661]: 2025-11-22 10:15:40.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:40 compute-0 ceph-mon[75021]: pgmap v3419: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 10:15:40 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:40.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:41 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:41.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:42 compute-0 ceph-mon[75021]: pgmap v3420: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:42 compute-0 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:15:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:42.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:43 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:43.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:43 compute-0 nova_compute[253661]: 2025-11-22 10:15:43.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1520 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:44.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:44 compute-0 ceph-mon[75021]: pgmap v3421: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:44 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:44 compute-0 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1520 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:45 compute-0 nova_compute[253661]: 2025-11-22 10:15:45.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:45 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:45.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:46 compute-0 ceph-mon[75021]: pgmap v3422: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:46 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:46.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:47 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:47.626+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:48 compute-0 ceph-mon[75021]: pgmap v3423: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:48 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:48.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:48 compute-0 nova_compute[253661]: 2025-11-22 10:15:48.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1525 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:49.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:49 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:49 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1525 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:50 compute-0 nova_compute[253661]: 2025-11-22 10:15:50.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:50.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:50 compute-0 ceph-mon[75021]: pgmap v3424: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:50 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:51 compute-0 podman[436258]: 2025-11-22 10:15:51.412247344 +0000 UTC m=+0.093628255 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:15:51 compute-0 podman[436278]: 2025-11-22 10:15:51.487331981 +0000 UTC m=+0.070930686 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 10:15:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:51.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:51 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:15:52
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:15:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:52.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:52 compute-0 ceph-mon[75021]: pgmap v3425: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:52 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:15:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:15:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:53.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:53 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:53 compute-0 nova_compute[253661]: 2025-11-22 10:15:53.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1530 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:54.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:54 compute-0 ceph-mon[75021]: pgmap v3426: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:54 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:54 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1530 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:55 compute-0 nova_compute[253661]: 2025-11-22 10:15:55.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:55.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:55 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:56 compute-0 podman[436298]: 2025-11-22 10:15:56.480358524 +0000 UTC m=+0.171731416 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 10:15:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:56.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:15:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:15:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:15:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:15:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:15:56 compute-0 ceph-mon[75021]: pgmap v3427: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:15:56 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:56 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:57.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:57 compute-0 ceph-mon[75021]: pgmap v3428: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:57 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:58 compute-0 sudo[436326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:15:58 compute-0 sudo[436326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:58 compute-0 sudo[436326]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:58 compute-0 sudo[436351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:15:58 compute-0 sudo[436351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:58 compute-0 sudo[436351]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:58 compute-0 sudo[436376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:15:58 compute-0 sudo[436376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:58 compute-0 sudo[436376]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:58 compute-0 sudo[436401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:15:58 compute-0 sudo[436401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:58.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:15:58 compute-0 nova_compute[253661]: 2025-11-22 10:15:58.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:15:59 compute-0 sudo[436401]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d3d10e2d-d582-420f-a74a-3043b23a8a66 does not exist
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev dbe6789a-f403-4fb1-bb9d-02efa18211d0 does not exist
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8bed83a2-01a5-43bc-8467-eacf572bd937 does not exist
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:15:59 compute-0 sudo[436460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:15:59 compute-0 sudo[436460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:59 compute-0 sudo[436460]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:59 compute-0 sudo[436485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:15:59 compute-0 sudo[436485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:59 compute-0 sudo[436485]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:59 compute-0 sudo[436510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:15:59 compute-0 sudo[436510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:59 compute-0 sudo[436510]: pam_unix(sudo:session): session closed for user root
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:15:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:15:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1534 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:15:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:15:59 compute-0 sudo[436535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:15:59 compute-0 sudo[436535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:15:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:59.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:15:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.030162017 +0000 UTC m=+0.059202758 container create f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:16:00 compute-0 systemd[1]: Started libpod-conmon-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope.
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:15:59.999050031 +0000 UTC m=+0.028090812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.151625705 +0000 UTC m=+0.180666446 container init f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.165539638 +0000 UTC m=+0.194580339 container start f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.169904855 +0000 UTC m=+0.198945696 container attach f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:16:00 compute-0 systemd[1]: libpod-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope: Deactivated successfully.
Nov 22 10:16:00 compute-0 boring_kapitsa[436617]: 167 167
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.17740517 +0000 UTC m=+0.206445921 container died f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 10:16:00 compute-0 conmon[436617]: conmon f09fff5d57da1600c040 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope/container/memory.events
Nov 22 10:16:00 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:00 compute-0 ceph-mon[75021]: pgmap v3429: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:16:00 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:16:00 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1534 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf9e65bd2428ba5846091f943d4f1e7cd24bd56f429de2b54280ac862adc8f38-merged.mount: Deactivated successfully.
Nov 22 10:16:00 compute-0 podman[436601]: 2025-11-22 10:16:00.226911988 +0000 UTC m=+0.255952689 container remove f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:16:00 compute-0 systemd[1]: libpod-conmon-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope: Deactivated successfully.
Nov 22 10:16:00 compute-0 nova_compute[253661]: 2025-11-22 10:16:00.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:00 compute-0 podman[436641]: 2025-11-22 10:16:00.418866612 +0000 UTC m=+0.064497308 container create f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 10:16:00 compute-0 systemd[1]: Started libpod-conmon-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope.
Nov 22 10:16:00 compute-0 podman[436641]: 2025-11-22 10:16:00.386460914 +0000 UTC m=+0.032091710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:00 compute-0 podman[436641]: 2025-11-22 10:16:00.530663203 +0000 UTC m=+0.176293989 container init f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:16:00 compute-0 podman[436641]: 2025-11-22 10:16:00.54071058 +0000 UTC m=+0.186341276 container start f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:16:00 compute-0 podman[436641]: 2025-11-22 10:16:00.544806251 +0000 UTC m=+0.190436977 container attach f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 10:16:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:00.764+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:01 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:01 compute-0 suspicious_meninsky[436658]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:16:01 compute-0 suspicious_meninsky[436658]: --> relative data size: 1.0
Nov 22 10:16:01 compute-0 suspicious_meninsky[436658]: --> All data devices are unavailable
Nov 22 10:16:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:01.784+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:01 compute-0 systemd[1]: libpod-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Deactivated successfully.
Nov 22 10:16:01 compute-0 systemd[1]: libpod-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Consumed 1.205s CPU time.
Nov 22 10:16:01 compute-0 podman[436687]: 2025-11-22 10:16:01.856868127 +0000 UTC m=+0.032346637 container died f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb-merged.mount: Deactivated successfully.
Nov 22 10:16:01 compute-0 podman[436687]: 2025-11-22 10:16:01.943690833 +0000 UTC m=+0.119169333 container remove f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 22 10:16:01 compute-0 systemd[1]: libpod-conmon-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Deactivated successfully.
Nov 22 10:16:02 compute-0 sudo[436535]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:02 compute-0 sudo[436701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:16:02 compute-0 sudo[436701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:02 compute-0 sudo[436701]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:02 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:02 compute-0 ceph-mon[75021]: pgmap v3430: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:02 compute-0 sudo[436726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:16:02 compute-0 sudo[436726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:02 compute-0 sudo[436726]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:02 compute-0 sudo[436751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:16:02 compute-0 sudo[436751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:02 compute-0 sudo[436751]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:02 compute-0 sudo[436776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:16:02 compute-0 sudo[436776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.730857173 +0000 UTC m=+0.047747406 container create 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 10:16:02 compute-0 systemd[1]: Started libpod-conmon-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope.
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.708117784 +0000 UTC m=+0.025008077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.824072617 +0000 UTC m=+0.140962900 container init 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:16:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:02.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.837721394 +0000 UTC m=+0.154611627 container start 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.841760143 +0000 UTC m=+0.158650416 container attach 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 10:16:02 compute-0 strange_yalow[436858]: 167 167
Nov 22 10:16:02 compute-0 systemd[1]: libpod-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope: Deactivated successfully.
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.847037932 +0000 UTC m=+0.163928205 container died 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d576643373b2f41a77f98ea8c13e2f11aa5f53db7b8791aef791a9b773362fdd-merged.mount: Deactivated successfully.
Nov 22 10:16:02 compute-0 podman[436842]: 2025-11-22 10:16:02.897849443 +0000 UTC m=+0.214739706 container remove 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:16:02 compute-0 systemd[1]: libpod-conmon-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope: Deactivated successfully.
Nov 22 10:16:03 compute-0 podman[436881]: 2025-11-22 10:16:03.111695055 +0000 UTC m=+0.051688692 container create 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:03 compute-0 systemd[1]: Started libpod-conmon-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope.
Nov 22 10:16:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:03 compute-0 podman[436881]: 2025-11-22 10:16:03.09035495 +0000 UTC m=+0.030348637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:03 compute-0 podman[436881]: 2025-11-22 10:16:03.206302393 +0000 UTC m=+0.146296070 container init 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:16:03 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:03 compute-0 podman[436881]: 2025-11-22 10:16:03.22608857 +0000 UTC m=+0.166082217 container start 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:16:03 compute-0 podman[436881]: 2025-11-22 10:16:03.230501358 +0000 UTC m=+0.170495045 container attach 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:16:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:16:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:03.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:03 compute-0 practical_hamilton[436897]: {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     "0": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "devices": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "/dev/loop3"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             ],
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_name": "ceph_lv0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_size": "21470642176",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "name": "ceph_lv0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "tags": {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_name": "ceph",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.crush_device_class": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.encrypted": "0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_id": "0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.vdo": "0"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             },
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "vg_name": "ceph_vg0"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         }
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     ],
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     "1": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "devices": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "/dev/loop4"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             ],
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_name": "ceph_lv1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_size": "21470642176",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "name": "ceph_lv1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "tags": {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_name": "ceph",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.crush_device_class": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.encrypted": "0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_id": "1",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.vdo": "0"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             },
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "vg_name": "ceph_vg1"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         }
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     ],
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     "2": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "devices": [
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "/dev/loop5"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             ],
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_name": "ceph_lv2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_size": "21470642176",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "name": "ceph_lv2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "tags": {
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.cluster_name": "ceph",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.crush_device_class": "",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.encrypted": "0",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osd_id": "2",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:                 "ceph.vdo": "0"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             },
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "type": "block",
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:             "vg_name": "ceph_vg2"
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:         }
Nov 22 10:16:03 compute-0 practical_hamilton[436897]:     ]
Nov 22 10:16:03 compute-0 practical_hamilton[436897]: }
Nov 22 10:16:03 compute-0 nova_compute[253661]: 2025-11-22 10:16:03.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:04 compute-0 systemd[1]: libpod-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope: Deactivated successfully.
Nov 22 10:16:04 compute-0 podman[436906]: 2025-11-22 10:16:04.082488524 +0000 UTC m=+0.046128236 container died 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea-merged.mount: Deactivated successfully.
Nov 22 10:16:04 compute-0 podman[436906]: 2025-11-22 10:16:04.170543521 +0000 UTC m=+0.134183173 container remove 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:16:04 compute-0 systemd[1]: libpod-conmon-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope: Deactivated successfully.
Nov 22 10:16:04 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:04 compute-0 ceph-mon[75021]: pgmap v3431: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:04 compute-0 sudo[436776]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:04 compute-0 sudo[436921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:16:04 compute-0 sudo[436921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:04 compute-0 sudo[436921]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:04 compute-0 sudo[436946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:16:04 compute-0 sudo[436946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:04 compute-0 sudo[436946]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1539 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:04 compute-0 sudo[436971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:16:04 compute-0 sudo[436971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:04 compute-0 sudo[436971]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:04 compute-0 sudo[436996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:16:04 compute-0 sudo[436996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:04.881+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.063878173 +0000 UTC m=+0.063506004 container create e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 10:16:05 compute-0 systemd[1]: Started libpod-conmon-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope.
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.03896172 +0000 UTC m=+0.038589561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.169908332 +0000 UTC m=+0.169536213 container init e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.17792786 +0000 UTC m=+0.177555691 container start e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.18281689 +0000 UTC m=+0.182444721 container attach e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:16:05 compute-0 keen_northcutt[437076]: 167 167
Nov 22 10:16:05 compute-0 systemd[1]: libpod-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope: Deactivated successfully.
Nov 22 10:16:05 compute-0 conmon[437076]: conmon e572701ba8623c739b96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope/container/memory.events
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.187810233 +0000 UTC m=+0.187438094 container died e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6a35b430456d49a0869519f473ae0cb7889f0d7cf874b67a6c9eaee4c19612-merged.mount: Deactivated successfully.
Nov 22 10:16:05 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:05 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1539 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:05 compute-0 podman[437060]: 2025-11-22 10:16:05.246367994 +0000 UTC m=+0.245995825 container remove e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 10:16:05 compute-0 systemd[1]: libpod-conmon-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope: Deactivated successfully.
Nov 22 10:16:05 compute-0 nova_compute[253661]: 2025-11-22 10:16:05.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:05 compute-0 podman[437102]: 2025-11-22 10:16:05.491052345 +0000 UTC m=+0.065925753 container create f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 10:16:05 compute-0 systemd[1]: Started libpod-conmon-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope.
Nov 22 10:16:05 compute-0 podman[437102]: 2025-11-22 10:16:05.459432537 +0000 UTC m=+0.034306005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:16:05 compute-0 podman[437102]: 2025-11-22 10:16:05.611261473 +0000 UTC m=+0.186134941 container init f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:16:05 compute-0 podman[437102]: 2025-11-22 10:16:05.625676908 +0000 UTC m=+0.200550316 container start f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:16:05 compute-0 podman[437102]: 2025-11-22 10:16:05.631094821 +0000 UTC m=+0.205968329 container attach f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 10:16:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:05.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:06 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:06 compute-0 ceph-mon[75021]: pgmap v3432: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:06 compute-0 strange_haslett[437118]: {
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_id": 1,
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "type": "bluestore"
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     },
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_id": 0,
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "type": "bluestore"
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     },
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_id": 2,
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:16:06 compute-0 strange_haslett[437118]:         "type": "bluestore"
Nov 22 10:16:06 compute-0 strange_haslett[437118]:     }
Nov 22 10:16:06 compute-0 strange_haslett[437118]: }
Nov 22 10:16:06 compute-0 systemd[1]: libpod-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Deactivated successfully.
Nov 22 10:16:06 compute-0 systemd[1]: libpod-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Consumed 1.106s CPU time.
Nov 22 10:16:06 compute-0 podman[437102]: 2025-11-22 10:16:06.722946119 +0000 UTC m=+1.297819587 container died f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 10:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d-merged.mount: Deactivated successfully.
Nov 22 10:16:06 compute-0 podman[437102]: 2025-11-22 10:16:06.810514733 +0000 UTC m=+1.385388121 container remove f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:16:06 compute-0 systemd[1]: libpod-conmon-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Deactivated successfully.
Nov 22 10:16:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:06.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:06 compute-0 sudo[436996]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:16:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:16:06 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:16:06 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:16:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 40e64a35-5f29-4658-b683-e9c266dd9e08 does not exist
Nov 22 10:16:06 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5c18801e-501b-42c2-9ff4-e4fc82d2b874 does not exist
Nov 22 10:16:06 compute-0 sudo[437163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:16:06 compute-0 sudo[437163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:06 compute-0 sudo[437163]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:07 compute-0 sudo[437188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:16:07 compute-0 sudo[437188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:16:07 compute-0 sudo[437188]: pam_unix(sudo:session): session closed for user root
Nov 22 10:16:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:07 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:16:07 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:16:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:07.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:08 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:16:08 compute-0 ceph-mon[75021]: pgmap v3433: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:08.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:08 compute-0 nova_compute[253661]: 2025-11-22 10:16:08.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:09 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1544 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:09.897+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:10 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:10 compute-0 ceph-mon[75021]: pgmap v3434: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:10 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1544 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:10 compute-0 nova_compute[253661]: 2025-11-22 10:16:10.407 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:10.879+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:11 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:11.888+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:12 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:12 compute-0 ceph-mon[75021]: pgmap v3435: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:16:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:16:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:16:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:16:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:12.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:13 compute-0 nova_compute[253661]: 2025-11-22 10:16:13.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:13 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:16:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:16:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:13.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:13 compute-0 nova_compute[253661]: 2025-11-22 10:16:13.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:14 compute-0 nova_compute[253661]: 2025-11-22 10:16:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:14 compute-0 nova_compute[253661]: 2025-11-22 10:16:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:16:14 compute-0 nova_compute[253661]: 2025-11-22 10:16:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:16:14 compute-0 nova_compute[253661]: 2025-11-22 10:16:14.277 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:16:14 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:14 compute-0 ceph-mon[75021]: pgmap v3436: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1549 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:14.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:15 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:15 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1549 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:15 compute-0 nova_compute[253661]: 2025-11-22 10:16:15.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:15.738+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:16 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:16 compute-0 ceph-mon[75021]: pgmap v3437: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:16.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:17 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:17.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:18 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:18 compute-0 ceph-mon[75021]: pgmap v3438: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:18.665+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:18 compute-0 nova_compute[253661]: 2025-11-22 10:16:18.975 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:19 compute-0 nova_compute[253661]: 2025-11-22 10:16:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:19 compute-0 nova_compute[253661]: 2025-11-22 10:16:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:19 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:19 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1554 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:19.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:16:20 compute-0 ceph-mon[75021]: pgmap v3439: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:20 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1554 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:20 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:20.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:20 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:16:20 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486489532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.948 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.949 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.950 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:16:20 compute-0 nova_compute[253661]: 2025-11-22 10:16:20.950 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.029 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:16:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:21 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3486489532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:16:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:16:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469726142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.504 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.513 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.534 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.537 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:16:21 compute-0 nova_compute[253661]: 2025-11-22 10:16:21.537 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:16:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:21.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:22 compute-0 podman[437257]: 2025-11-22 10:16:22.357603825 +0000 UTC m=+0.053000866 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:16:22 compute-0 podman[437258]: 2025-11-22 10:16:22.366210836 +0000 UTC m=+0.059212548 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:16:22 compute-0 ceph-mon[75021]: pgmap v3440: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1469726142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:16:22 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:22 compute-0 nova_compute[253661]: 2025-11-22 10:16:22.539 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:22 compute-0 nova_compute[253661]: 2025-11-22 10:16:22.539 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:22 compute-0 nova_compute[253661]: 2025-11-22 10:16:22.540 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:22 compute-0 nova_compute[253661]: 2025-11-22 10:16:22.540 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:16:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:22.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:23 compute-0 nova_compute[253661]: 2025-11-22 10:16:23.980 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:24 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:24 compute-0 ceph-mon[75021]: pgmap v3441: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:24 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1559 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:24.622+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:25 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1559 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:25 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:25 compute-0 nova_compute[253661]: 2025-11-22 10:16:25.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:25.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:26 compute-0 ceph-mon[75021]: pgmap v3442: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:26 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:26.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:27 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:27 compute-0 podman[437294]: 2025-11-22 10:16:27.419427122 +0000 UTC m=+0.110344376 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 10:16:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:27.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.025 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:16:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.026 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:16:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.026 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:16:28 compute-0 ceph-mon[75021]: pgmap v3443: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:28 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:28.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:29 compute-0 nova_compute[253661]: 2025-11-22 10:16:29.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:29 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1565 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:29.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:30 compute-0 nova_compute[253661]: 2025-11-22 10:16:30.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:30 compute-0 ceph-mon[75021]: pgmap v3444: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:30 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1565 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:30 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:30.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:31 compute-0 nova_compute[253661]: 2025-11-22 10:16:31.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:16:31 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:31.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:32 compute-0 ceph-mon[75021]: pgmap v3445: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:32 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:32.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:33 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:33.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:34 compute-0 nova_compute[253661]: 2025-11-22 10:16:34.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:34 compute-0 ceph-mon[75021]: pgmap v3446: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:34 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1570 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:34.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:35 compute-0 nova_compute[253661]: 2025-11-22 10:16:35.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:35 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1570 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:35 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:36 compute-0 ceph-mon[75021]: pgmap v3447: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:36 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:36.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:37.506+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:37 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:38.488+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:38 compute-0 ceph-mon[75021]: pgmap v3448: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:38 compute-0 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:16:38 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:39 compute-0 nova_compute[253661]: 2025-11-22 10:16:39.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1575 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:39.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:39 compute-0 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1575 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:39 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:40 compute-0 nova_compute[253661]: 2025-11-22 10:16:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:40.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:40 compute-0 ceph-mon[75021]: pgmap v3449: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:40 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:41.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:41 compute-0 ceph-mon[75021]: pgmap v3450: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:41 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:42.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:42 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:43.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:43 compute-0 ceph-mon[75021]: pgmap v3451: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:43 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:44 compute-0 nova_compute[253661]: 2025-11-22 10:16:44.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:44.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1580 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:44 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:44 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1580 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:45.412+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:45 compute-0 nova_compute[253661]: 2025-11-22 10:16:45.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:45 compute-0 ceph-mon[75021]: pgmap v3452: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:45 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:46.366+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:46 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:47.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:47 compute-0 ceph-mon[75021]: pgmap v3453: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:47 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:48.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:48 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:49 compute-0 nova_compute[253661]: 2025-11-22 10:16:49.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:49.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1585 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:49 compute-0 ceph-mon[75021]: pgmap v3454: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:49 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:49 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1585 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:50.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:50 compute-0 nova_compute[253661]: 2025-11-22 10:16:50.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:50 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:51.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:51 compute-0 ceph-mon[75021]: pgmap v3455: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:51 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:52.375+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:16:52
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'images']
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:16:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:16:52 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:53 compute-0 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 10:16:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:53.346+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:53 compute-0 podman[437321]: 2025-11-22 10:16:53.359116127 +0000 UTC m=+0.054865241 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 10:16:53 compute-0 podman[437320]: 2025-11-22 10:16:53.365416062 +0000 UTC m=+0.052990146 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:16:53 compute-0 ceph-mon[75021]: pgmap v3456: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:53 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:54 compute-0 nova_compute[253661]: 2025-11-22 10:16:54.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:54.341+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1590 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:16:54 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:54 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1590 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:55.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:55 compute-0 nova_compute[253661]: 2025-11-22 10:16:55.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:55 compute-0 ceph-mon[75021]: pgmap v3457: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 10:16:55 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:56.406+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:16:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:16:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:16:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:16:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:16:56 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:57.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:58 compute-0 ceph-mon[75021]: pgmap v3458: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:58 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:58 compute-0 podman[437358]: 2025-11-22 10:16:58.383468503 +0000 UTC m=+0.078368810 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:16:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:58.446+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:59 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:59 compute-0 nova_compute[253661]: 2025-11-22 10:16:59.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:16:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:59.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:16:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:16:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:16:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1595 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:16:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:00 compute-0 ceph-mon[75021]: pgmap v3459: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:00 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:00 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1595 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:00 compute-0 nova_compute[253661]: 2025-11-22 10:17:00.424 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:00.459+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:01 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:01.496+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:02 compute-0 ceph-mon[75021]: pgmap v3460: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:02 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:02.486+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:03 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:17:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:17:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:03.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:04 compute-0 nova_compute[253661]: 2025-11-22 10:17:04.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:04 compute-0 ceph-mon[75021]: pgmap v3461: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:04 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1600 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:04.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:05 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1600 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:05 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:05 compute-0 nova_compute[253661]: 2025-11-22 10:17:05.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:05.526+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:06 compute-0 ceph-mon[75021]: pgmap v3462: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:06 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:06.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:07 compute-0 sudo[437385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:07 compute-0 sudo[437385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:07 compute-0 sudo[437385]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:07 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:07 compute-0 sudo[437410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:17:07 compute-0 sudo[437410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:07 compute-0 sudo[437410]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:07 compute-0 sudo[437435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:07 compute-0 sudo[437435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:07 compute-0 sudo[437435]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:07 compute-0 sudo[437460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:17:07 compute-0 sudo[437460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:07.477+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:07 compute-0 sudo[437460]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c51991e1-ca15-40e1-a341-2ba96b221f0c does not exist
Nov 22 10:17:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 8f13ef03-50c0-497e-bbaa-8a58ee937adc does not exist
Nov 22 10:17:07 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 6fa81e18-7278-449b-bb45-a8280242b9fe does not exist
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:17:07 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:17:07 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:17:08 compute-0 sudo[437516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:08 compute-0 sudo[437516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:08 compute-0 sudo[437516]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:08 compute-0 sudo[437541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:17:08 compute-0 sudo[437541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:08 compute-0 sudo[437541]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:08 compute-0 sudo[437566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:08 compute-0 sudo[437566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:08 compute-0 sudo[437566]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:08 compute-0 sudo[437591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:17:08 compute-0 sudo[437591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:08 compute-0 ceph-mon[75021]: pgmap v3463: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:08 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:17:08 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:17:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:08.474+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:08 compute-0 podman[437656]: 2025-11-22 10:17:08.625225148 +0000 UTC m=+0.068234911 container create 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:17:08 compute-0 systemd[1]: Started libpod-conmon-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope.
Nov 22 10:17:08 compute-0 podman[437656]: 2025-11-22 10:17:08.597098646 +0000 UTC m=+0.040108459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:08 compute-0 podman[437656]: 2025-11-22 10:17:08.740438983 +0000 UTC m=+0.183448806 container init 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 10:17:08 compute-0 podman[437656]: 2025-11-22 10:17:08.751587437 +0000 UTC m=+0.194597170 container start 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:17:08 compute-0 podman[437656]: 2025-11-22 10:17:08.755465553 +0000 UTC m=+0.198475386 container attach 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:17:08 compute-0 angry_kepler[437672]: 167 167
Nov 22 10:17:08 compute-0 systemd[1]: libpod-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope: Deactivated successfully.
Nov 22 10:17:08 compute-0 podman[437677]: 2025-11-22 10:17:08.827341601 +0000 UTC m=+0.043394899 container died 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-254dd9917cb7835b8f223e35f5149ef865489501c3889b6dad6f49a4bf27c3ce-merged.mount: Deactivated successfully.
Nov 22 10:17:08 compute-0 podman[437677]: 2025-11-22 10:17:08.874528582 +0000 UTC m=+0.090581880 container remove 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:17:08 compute-0 systemd[1]: libpod-conmon-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope: Deactivated successfully.
Nov 22 10:17:09 compute-0 nova_compute[253661]: 2025-11-22 10:17:09.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:09 compute-0 podman[437699]: 2025-11-22 10:17:09.136888679 +0000 UTC m=+0.063392082 container create be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:17:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:09 compute-0 systemd[1]: Started libpod-conmon-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope.
Nov 22 10:17:09 compute-0 podman[437699]: 2025-11-22 10:17:09.116391774 +0000 UTC m=+0.042895207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:09 compute-0 podman[437699]: 2025-11-22 10:17:09.22999039 +0000 UTC m=+0.156493823 container init be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 10:17:09 compute-0 podman[437699]: 2025-11-22 10:17:09.237652889 +0000 UTC m=+0.164156292 container start be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:17:09 compute-0 podman[437699]: 2025-11-22 10:17:09.241070982 +0000 UTC m=+0.167574385 container attach be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:17:09 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:09.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1605 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:10 compute-0 ceph-mon[75021]: pgmap v3464: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:10 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:10 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1605 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:10 compute-0 blissful_mayer[437715]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:17:10 compute-0 blissful_mayer[437715]: --> relative data size: 1.0
Nov 22 10:17:10 compute-0 blissful_mayer[437715]: --> All data devices are unavailable
Nov 22 10:17:10 compute-0 systemd[1]: libpod-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Deactivated successfully.
Nov 22 10:17:10 compute-0 podman[437699]: 2025-11-22 10:17:10.323338074 +0000 UTC m=+1.249841477 container died be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:17:10 compute-0 systemd[1]: libpod-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Consumed 1.021s CPU time.
Nov 22 10:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb-merged.mount: Deactivated successfully.
Nov 22 10:17:10 compute-0 podman[437699]: 2025-11-22 10:17:10.378102732 +0000 UTC m=+1.304606125 container remove be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 10:17:10 compute-0 systemd[1]: libpod-conmon-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Deactivated successfully.
Nov 22 10:17:10 compute-0 sudo[437591]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:10 compute-0 nova_compute[253661]: 2025-11-22 10:17:10.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:10.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:10 compute-0 sudo[437756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:10 compute-0 sudo[437756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:10 compute-0 sudo[437756]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:10 compute-0 sudo[437781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:17:10 compute-0 sudo[437781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:10 compute-0 sudo[437781]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:10 compute-0 sudo[437806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:10 compute-0 sudo[437806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:10 compute-0 sudo[437806]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:10 compute-0 sudo[437831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:17:10 compute-0 sudo[437831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:10 compute-0 podman[437893]: 2025-11-22 10:17:10.996861058 +0000 UTC m=+0.046755982 container create 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:17:11 compute-0 systemd[1]: Started libpod-conmon-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope.
Nov 22 10:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:10.979618584 +0000 UTC m=+0.029513528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:11.077885722 +0000 UTC m=+0.127780646 container init 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:11.083754606 +0000 UTC m=+0.133649530 container start 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:11.086851102 +0000 UTC m=+0.136746046 container attach 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 10:17:11 compute-0 vigorous_taussig[437909]: 167 167
Nov 22 10:17:11 compute-0 systemd[1]: libpod-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope: Deactivated successfully.
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:11.088749349 +0000 UTC m=+0.138644273 container died 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 10:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a3562cb90adf344a378d5184eaee8dc5ccf71c02d4d7d7ab62db0b5da05ae2-merged.mount: Deactivated successfully.
Nov 22 10:17:11 compute-0 podman[437893]: 2025-11-22 10:17:11.123961045 +0000 UTC m=+0.173855969 container remove 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 10:17:11 compute-0 systemd[1]: libpod-conmon-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope: Deactivated successfully.
Nov 22 10:17:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:11 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:11 compute-0 podman[437933]: 2025-11-22 10:17:11.315799436 +0000 UTC m=+0.057420124 container create bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 10:17:11 compute-0 systemd[1]: Started libpod-conmon-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope.
Nov 22 10:17:11 compute-0 podman[437933]: 2025-11-22 10:17:11.290225066 +0000 UTC m=+0.031845764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:11 compute-0 podman[437933]: 2025-11-22 10:17:11.432953988 +0000 UTC m=+0.174574686 container init bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:17:11 compute-0 podman[437933]: 2025-11-22 10:17:11.443947659 +0000 UTC m=+0.185568347 container start bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:17:11 compute-0 podman[437933]: 2025-11-22 10:17:11.452693895 +0000 UTC m=+0.194314573 container attach bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:17:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:11.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]: {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     "0": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "devices": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "/dev/loop3"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             ],
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_name": "ceph_lv0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_size": "21470642176",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "name": "ceph_lv0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "tags": {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_name": "ceph",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.crush_device_class": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.encrypted": "0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_id": "0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.vdo": "0"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             },
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "vg_name": "ceph_vg0"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         }
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     ],
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     "1": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "devices": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "/dev/loop4"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             ],
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_name": "ceph_lv1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_size": "21470642176",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "name": "ceph_lv1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "tags": {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_name": "ceph",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.crush_device_class": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.encrypted": "0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_id": "1",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.vdo": "0"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             },
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "vg_name": "ceph_vg1"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         }
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     ],
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     "2": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "devices": [
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "/dev/loop5"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             ],
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_name": "ceph_lv2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_size": "21470642176",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "name": "ceph_lv2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "tags": {
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.cluster_name": "ceph",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.crush_device_class": "",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.encrypted": "0",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osd_id": "2",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:                 "ceph.vdo": "0"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             },
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "type": "block",
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:             "vg_name": "ceph_vg2"
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:         }
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]:     ]
Nov 22 10:17:12 compute-0 unruffled_gauss[437949]: }
Nov 22 10:17:12 compute-0 systemd[1]: libpod-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope: Deactivated successfully.
Nov 22 10:17:12 compute-0 podman[437933]: 2025-11-22 10:17:12.196538319 +0000 UTC m=+0.938159007 container died bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 10:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa-merged.mount: Deactivated successfully.
Nov 22 10:17:12 compute-0 podman[437933]: 2025-11-22 10:17:12.247866271 +0000 UTC m=+0.989486949 container remove bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 10:17:12 compute-0 systemd[1]: libpod-conmon-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope: Deactivated successfully.
Nov 22 10:17:12 compute-0 sudo[437831]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:12 compute-0 ceph-mon[75021]: pgmap v3465: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:12 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:12 compute-0 sudo[437974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:12 compute-0 sudo[437974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:12 compute-0 sudo[437974]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:12 compute-0 sudo[437999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:17:12 compute-0 sudo[437999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:12 compute-0 sudo[437999]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:12 compute-0 sudo[438024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:17:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:17:12 compute-0 sudo[438024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:17:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:17:12 compute-0 sudo[438024]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:12.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:12 compute-0 sudo[438049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:17:12 compute-0 sudo[438049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.860934937 +0000 UTC m=+0.047442308 container create 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 10:17:12 compute-0 systemd[1]: Started libpod-conmon-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope.
Nov 22 10:17:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.840663979 +0000 UTC m=+0.027171350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.948947103 +0000 UTC m=+0.135454484 container init 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.954545221 +0000 UTC m=+0.141052572 container start 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.957578705 +0000 UTC m=+0.144086086 container attach 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:17:12 compute-0 vigilant_hamilton[438131]: 167 167
Nov 22 10:17:12 compute-0 systemd[1]: libpod-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope: Deactivated successfully.
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.960942159 +0000 UTC m=+0.147449490 container died 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 10:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a07a0f673e0f15cf6b38eeedf72ffcfc5bd3e3ff1a0b7ebc8c6f120f4558029-merged.mount: Deactivated successfully.
Nov 22 10:17:12 compute-0 podman[438115]: 2025-11-22 10:17:12.998506233 +0000 UTC m=+0.185013554 container remove 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:17:13 compute-0 systemd[1]: libpod-conmon-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope: Deactivated successfully.
Nov 22 10:17:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:13 compute-0 podman[438154]: 2025-11-22 10:17:13.175108719 +0000 UTC m=+0.047450349 container create 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:17:13 compute-0 systemd[1]: Started libpod-conmon-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope.
Nov 22 10:17:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:17:13 compute-0 podman[438154]: 2025-11-22 10:17:13.156228674 +0000 UTC m=+0.028570334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:17:13 compute-0 podman[438154]: 2025-11-22 10:17:13.2629701 +0000 UTC m=+0.135311750 container init 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:17:13 compute-0 podman[438154]: 2025-11-22 10:17:13.271019378 +0000 UTC m=+0.143361048 container start 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 10:17:13 compute-0 podman[438154]: 2025-11-22 10:17:13.275078198 +0000 UTC m=+0.147419848 container attach 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 10:17:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:17:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:17:13 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:13.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:14 compute-0 nova_compute[253661]: 2025-11-22 10:17:14.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]: {
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_id": 1,
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "type": "bluestore"
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     },
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_id": 0,
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "type": "bluestore"
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     },
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_id": 2,
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:         "type": "bluestore"
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]:     }
Nov 22 10:17:14 compute-0 determined_mccarthy[438171]: }
Nov 22 10:17:14 compute-0 systemd[1]: libpod-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Deactivated successfully.
Nov 22 10:17:14 compute-0 podman[438154]: 2025-11-22 10:17:14.313544903 +0000 UTC m=+1.185886553 container died 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:17:14 compute-0 systemd[1]: libpod-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Consumed 1.047s CPU time.
Nov 22 10:17:14 compute-0 ceph-mon[75021]: pgmap v3466: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:14 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2-merged.mount: Deactivated successfully.
Nov 22 10:17:14 compute-0 podman[438154]: 2025-11-22 10:17:14.371096839 +0000 UTC m=+1.243438469 container remove 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 10:17:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:14.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:17:14 compute-0 systemd[1]: libpod-conmon-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Deactivated successfully.
Nov 22 10:17:14 compute-0 sudo[438049]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:17:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:17:14 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:14 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 737255a0-a29a-4006-963f-c24642377326 does not exist
Nov 22 10:17:14 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 39ac3acb-4437-4228-a762-05bb21854a72 does not exist
Nov 22 10:17:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1610 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:14 compute-0 sudo[438214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:17:14 compute-0 sudo[438214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:14 compute-0 sudo[438214]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:14 compute-0 sudo[438239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:17:14 compute-0 sudo[438239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:17:14 compute-0 sudo[438239]: pam_unix(sudo:session): session closed for user root
Nov 22 10:17:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:15 compute-0 nova_compute[253661]: 2025-11-22 10:17:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:15.388+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:17:15 compute-0 ceph-mon[75021]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:17:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:17:15 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1610 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:15 compute-0 nova_compute[253661]: 2025-11-22 10:17:15.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:16 compute-0 nova_compute[253661]: 2025-11-22 10:17:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:16 compute-0 nova_compute[253661]: 2025-11-22 10:17:16.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:17:16 compute-0 nova_compute[253661]: 2025-11-22 10:17:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:17:16 compute-0 nova_compute[253661]: 2025-11-22 10:17:16.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:17:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:16.405+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:16 compute-0 ceph-mon[75021]: pgmap v3467: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:16 compute-0 ceph-mon[75021]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:17:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:17.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:17 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:18.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:18 compute-0 ceph-mon[75021]: pgmap v3468: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:18 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:19 compute-0 nova_compute[253661]: 2025-11-22 10:17:19.120 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:19.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:19 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1615 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:20.414+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:20 compute-0 nova_compute[253661]: 2025-11-22 10:17:20.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:20 compute-0 ceph-mon[75021]: pgmap v3469: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:20 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:20 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1615 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.308 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.310 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.310 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:17:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:21.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:21 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:17:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636004843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:17:21 compute-0 nova_compute[253661]: 2025-11-22 10:17:21.817 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.002 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.267 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.268 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.367 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 10:17:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:22.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:22 compute-0 ceph-mon[75021]: pgmap v3470: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:22 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/636004843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.524 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.525 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.543 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.576 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 10:17:22 compute-0 nova_compute[253661]: 2025-11-22 10:17:22.591 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:17:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091111089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:17:23 compute-0 nova_compute[253661]: 2025-11-22 10:17:23.040 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:17:23 compute-0 nova_compute[253661]: 2025-11-22 10:17:23.049 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:17:23 compute-0 nova_compute[253661]: 2025-11-22 10:17:23.065 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:17:23 compute-0 nova_compute[253661]: 2025-11-22 10:17:23.068 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:17:23 compute-0 nova_compute[253661]: 2025-11-22 10:17:23.068 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:17:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:23.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:23 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1091111089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:17:24 compute-0 nova_compute[253661]: 2025-11-22 10:17:24.062 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:24 compute-0 nova_compute[253661]: 2025-11-22 10:17:24.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:24 compute-0 nova_compute[253661]: 2025-11-22 10:17:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:24 compute-0 nova_compute[253661]: 2025-11-22 10:17:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:17:24 compute-0 podman[438308]: 2025-11-22 10:17:24.421169934 +0000 UTC m=+0.095745897 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:17:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:24.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:24 compute-0 podman[438309]: 2025-11-22 10:17:24.457193201 +0000 UTC m=+0.131701293 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 22 10:17:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1620 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:24 compute-0 ceph-mon[75021]: pgmap v3471: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:24 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:24 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1620 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:25.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:25 compute-0 nova_compute[253661]: 2025-11-22 10:17:25.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:25 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:26.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:26 compute-0 ceph-mon[75021]: pgmap v3472: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:26 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:27.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:27 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.027 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:17:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:17:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:28.417+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:28 compute-0 ceph-mon[75021]: pgmap v3473: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:28 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:29 compute-0 nova_compute[253661]: 2025-11-22 10:17:29.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:29.458+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:29 compute-0 podman[438344]: 2025-11-22 10:17:29.466769373 +0000 UTC m=+0.153709313 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 10:17:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1625 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:29 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:29 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1625 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:30 compute-0 nova_compute[253661]: 2025-11-22 10:17:30.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:30.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:30 compute-0 ceph-mon[75021]: pgmap v3474: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:30 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:31.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:31 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:32.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:32 compute-0 ceph-mon[75021]: pgmap v3475: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:32 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:33.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:33 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:34 compute-0 nova_compute[253661]: 2025-11-22 10:17:34.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:34.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1630 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:34 compute-0 ceph-mon[75021]: pgmap v3476: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:34 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:34 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1630 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:35 compute-0 nova_compute[253661]: 2025-11-22 10:17:35.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:35.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:35 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:36.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:36 compute-0 ceph-mon[75021]: pgmap v3477: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:36 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:37.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:37 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:38.386+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:38 compute-0 ceph-mon[75021]: pgmap v3478: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:38 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:39 compute-0 nova_compute[253661]: 2025-11-22 10:17:39.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:39.395+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1635 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:39 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:39 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1635 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:40.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:40 compute-0 nova_compute[253661]: 2025-11-22 10:17:40.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:40 compute-0 ceph-mon[75021]: pgmap v3479: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:40 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:41.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:41 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:42.455+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:42 compute-0 ceph-mon[75021]: pgmap v3480: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:42 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.786666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662786740, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 2164, "num_deletes": 251, "total_data_size": 2729383, "memory_usage": 2788176, "flush_reason": "Manual Compaction"}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662807473, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2663485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76393, "largest_seqno": 78556, "table_properties": {"data_size": 2654735, "index_size": 4923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22917, "raw_average_key_size": 21, "raw_value_size": 2635048, "raw_average_value_size": 2444, "num_data_blocks": 216, "num_entries": 1078, "num_filter_entries": 1078, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806498, "oldest_key_time": 1763806498, "file_creation_time": 1763806662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 20852 microseconds, and 12504 cpu microseconds.
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.807521) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2663485 bytes OK
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.807542) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809670) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809686) EVENT_LOG_v1 {"time_micros": 1763806662809681, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809704) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2720008, prev total WAL file size 2720008, number of live WAL files 2.
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.810735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2601KB)], [182(8706KB)]
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662810798, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 11578695, "oldest_snapshot_seqno": -1}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10724 keys, 10192213 bytes, temperature: kUnknown
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662899827, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10192213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10128956, "index_size": 35368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26821, "raw_key_size": 286525, "raw_average_key_size": 26, "raw_value_size": 9944732, "raw_average_value_size": 927, "num_data_blocks": 1327, "num_entries": 10724, "num_filter_entries": 10724, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.900437) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10192213 bytes
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.902051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 11238, records dropped: 514 output_compression: NoCompression
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.902084) EVENT_LOG_v1 {"time_micros": 1763806662902067, "job": 114, "event": "compaction_finished", "compaction_time_micros": 89203, "compaction_time_cpu_micros": 51448, "output_level": 6, "num_output_files": 1, "total_output_size": 10192213, "num_input_records": 11238, "num_output_records": 10724, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662903158, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662906822, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.810557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:42 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:17:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:43.435+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:43 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:44 compute-0 nova_compute[253661]: 2025-11-22 10:17:44.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:44.466+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1640 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:44 compute-0 ceph-mon[75021]: pgmap v3481: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:44 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:44 compute-0 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1640 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:45 compute-0 nova_compute[253661]: 2025-11-22 10:17:45.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:45.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:45 compute-0 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:17:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:46.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 10:17:46 compute-0 ceph-mon[75021]: pgmap v3482: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:46 compute-0 ceph-mon[75021]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 10:17:46 compute-0 sshd-session[438371]: Invalid user polkadot from 92.118.39.92 port 49352
Nov 22 10:17:47 compute-0 sshd-session[438371]: Connection closed by invalid user polkadot 92.118.39.92 port 49352 [preauth]
Nov 22 10:17:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:47.465+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:47 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:48.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:48 compute-0 ceph-mon[75021]: pgmap v3483: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:48 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:49 compute-0 nova_compute[253661]: 2025-11-22 10:17:49.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:49.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1645 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:49 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:49 compute-0 ceph-mon[75021]: Health check update: 12 slow ops, oldest one blocked for 1645 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:50.421+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:50 compute-0 nova_compute[253661]: 2025-11-22 10:17:50.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:50 compute-0 ceph-mon[75021]: pgmap v3484: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:50 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:51.374+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:51 compute-0 ceph-mon[75021]: pgmap v3485: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:51 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:52.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:17:52
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:17:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:17:52 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:53.368+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:53 compute-0 ceph-mon[75021]: pgmap v3486: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:53 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:54 compute-0 nova_compute[253661]: 2025-11-22 10:17:54.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:54.336+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1650 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:54 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:54 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1650 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:55.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:55 compute-0 podman[438374]: 2025-11-22 10:17:55.391532825 +0000 UTC m=+0.069876158 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 10:17:55 compute-0 podman[438373]: 2025-11-22 10:17:55.406729959 +0000 UTC m=+0.083664038 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 10:17:55 compute-0 nova_compute[253661]: 2025-11-22 10:17:55.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:55 compute-0 ceph-mon[75021]: pgmap v3487: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:55 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:56.338+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:17:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:17:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:17:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:17:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:17:56 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:57.327+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:57 compute-0 ceph-mon[75021]: pgmap v3488: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:57 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:58.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:58 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:59 compute-0 nova_compute[253661]: 2025-11-22 10:17:59.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:59.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:17:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:17:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:17:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1655 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:17:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:17:59 compute-0 ceph-mon[75021]: pgmap v3489: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:17:59 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:17:59 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1655 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:00.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:00 compute-0 podman[438413]: 2025-11-22 10:18:00.437239694 +0000 UTC m=+0.121481137 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:18:00 compute-0 nova_compute[253661]: 2025-11-22 10:18:00.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:00 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:01.369+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:01 compute-0 ceph-mon[75021]: pgmap v3490: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:01 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:02.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:02 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:03.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:18:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:18:03 compute-0 ceph-mon[75021]: pgmap v3491: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:03 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:04 compute-0 nova_compute[253661]: 2025-11-22 10:18:04.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:04.331+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1660 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:04 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:04 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1660 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:05.286+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:05 compute-0 nova_compute[253661]: 2025-11-22 10:18:05.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:05 compute-0 ceph-mon[75021]: pgmap v3492: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:05 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:06.296+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:07 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:07.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:08 compute-0 ceph-mon[75021]: pgmap v3493: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:08 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:08.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:09 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:09 compute-0 nova_compute[253661]: 2025-11-22 10:18:09.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:09.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1665 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:10 compute-0 ceph-mon[75021]: pgmap v3494: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:10 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:10 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1665 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:10.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:10 compute-0 nova_compute[253661]: 2025-11-22 10:18:10.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:11 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:11.257+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:12 compute-0 ceph-mon[75021]: pgmap v3495: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:12 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:12.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:18:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:18:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:18:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:18:13 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:18:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:18:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:13.319+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:14 compute-0 ceph-mon[75021]: pgmap v3496: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:14 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:14 compute-0 nova_compute[253661]: 2025-11-22 10:18:14.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:14.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1670 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:14 compute-0 sudo[438440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:14 compute-0 sudo[438440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:14 compute-0 sudo[438440]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:14 compute-0 sudo[438465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:18:14 compute-0 sudo[438465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:14 compute-0 sudo[438465]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:14 compute-0 sudo[438490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:14 compute-0 sudo[438490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:14 compute-0 sudo[438490]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:14 compute-0 sudo[438515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 10:18:14 compute-0 sudo[438515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438515]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:15 compute-0 sudo[438560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:15 compute-0 sudo[438560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438560]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 sudo[438585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:18:15 compute-0 sudo[438585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438585]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 sudo[438610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:15 compute-0 sudo[438610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438610]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:15 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1670 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:15 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:15 compute-0 sudo[438635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:18:15 compute-0 sudo[438635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:15.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:15 compute-0 nova_compute[253661]: 2025-11-22 10:18:15.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:15 compute-0 sudo[438635]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1912043c-2474-429b-b08f-4e187680c98c does not exist
Nov 22 10:18:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 732fe3bb-8fa9-4e6c-8ed0-0abc7f4f2eb5 does not exist
Nov 22 10:18:15 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev eb3563f8-16ab-4fc2-b5f3-da0692a72f15 does not exist
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:18:15 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:18:15 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:18:15 compute-0 sudo[438689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:15 compute-0 sudo[438689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438689]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 sudo[438714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:18:15 compute-0 sudo[438714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438714]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 sudo[438739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:15 compute-0 sudo[438739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:15 compute-0 sudo[438739]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:15 compute-0 sudo[438764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:18:15 compute-0 sudo[438764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:16 compute-0 nova_compute[253661]: 2025-11-22 10:18:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:16 compute-0 nova_compute[253661]: 2025-11-22 10:18:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:18:16 compute-0 nova_compute[253661]: 2025-11-22 10:18:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:18:16 compute-0 ceph-mon[75021]: pgmap v3497: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:16 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:18:16 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:18:16 compute-0 nova_compute[253661]: 2025-11-22 10:18:16.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.278535347 +0000 UTC m=+0.045185241 container create 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:18:16 compute-0 systemd[1]: Started libpod-conmon-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope.
Nov 22 10:18:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:16.313+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.339351231 +0000 UTC m=+0.106001155 container init 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.345717298 +0000 UTC m=+0.112367192 container start 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.34867909 +0000 UTC m=+0.115329004 container attach 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 10:18:16 compute-0 kind_herschel[438846]: 167 167
Nov 22 10:18:16 compute-0 systemd[1]: libpod-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope: Deactivated successfully.
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.35193424 +0000 UTC m=+0.118584144 container died 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.259487918 +0000 UTC m=+0.026137842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2571b72c16eed962fdc716e76ea3a8ff6a0842f3472077787ba677916cc030a-merged.mount: Deactivated successfully.
Nov 22 10:18:16 compute-0 podman[438830]: 2025-11-22 10:18:16.396721151 +0000 UTC m=+0.163371045 container remove 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:18:16 compute-0 systemd[1]: libpod-conmon-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope: Deactivated successfully.
Nov 22 10:18:16 compute-0 podman[438870]: 2025-11-22 10:18:16.542668958 +0000 UTC m=+0.037778009 container create d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:18:16 compute-0 systemd[1]: Started libpod-conmon-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope.
Nov 22 10:18:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:16 compute-0 podman[438870]: 2025-11-22 10:18:16.615952469 +0000 UTC m=+0.111061540 container init d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 10:18:16 compute-0 podman[438870]: 2025-11-22 10:18:16.525768533 +0000 UTC m=+0.020877614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:16 compute-0 podman[438870]: 2025-11-22 10:18:16.624914249 +0000 UTC m=+0.120023300 container start d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:18:16 compute-0 podman[438870]: 2025-11-22 10:18:16.62820947 +0000 UTC m=+0.123318551 container attach d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 10:18:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:17 compute-0 nova_compute[253661]: 2025-11-22 10:18:17.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:17 compute-0 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:18:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:17.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:17 compute-0 elastic_satoshi[438888]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:18:17 compute-0 elastic_satoshi[438888]: --> relative data size: 1.0
Nov 22 10:18:17 compute-0 elastic_satoshi[438888]: --> All data devices are unavailable
Nov 22 10:18:17 compute-0 systemd[1]: libpod-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope: Deactivated successfully.
Nov 22 10:18:17 compute-0 podman[438870]: 2025-11-22 10:18:17.61617999 +0000 UTC m=+1.111289051 container died d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e-merged.mount: Deactivated successfully.
Nov 22 10:18:17 compute-0 podman[438870]: 2025-11-22 10:18:17.671132271 +0000 UTC m=+1.166241322 container remove d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 10:18:17 compute-0 systemd[1]: libpod-conmon-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope: Deactivated successfully.
Nov 22 10:18:17 compute-0 sudo[438764]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:17 compute-0 sudo[438930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:17 compute-0 sudo[438930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:17 compute-0 sudo[438930]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:17 compute-0 sudo[438955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:18:17 compute-0 sudo[438955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:17 compute-0 sudo[438955]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:17 compute-0 sudo[438980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:17 compute-0 sudo[438980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:17 compute-0 sudo[438980]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:18 compute-0 sudo[439005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:18:18 compute-0 sudo[439005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:18.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:18 compute-0 ceph-mon[75021]: pgmap v3498: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:18 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.39057772 +0000 UTC m=+0.064789193 container create cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.349097632 +0000 UTC m=+0.023309155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:18 compute-0 systemd[1]: Started libpod-conmon-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope.
Nov 22 10:18:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.607729998 +0000 UTC m=+0.281941491 container init cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.61475682 +0000 UTC m=+0.288968313 container start cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 10:18:18 compute-0 vigorous_keldysh[439086]: 167 167
Nov 22 10:18:18 compute-0 systemd[1]: libpod-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope: Deactivated successfully.
Nov 22 10:18:18 compute-0 conmon[439086]: conmon cc2d6e0cb64f8186064d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope/container/memory.events
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.810968742 +0000 UTC m=+0.485180235 container attach cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:18:18 compute-0 podman[439070]: 2025-11-22 10:18:18.811514485 +0000 UTC m=+0.485725958 container died cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:18:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:19 compute-0 nova_compute[253661]: 2025-11-22 10:18:19.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:19.270+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-915292745a9a3d6faaf507b193ee66704b606996ad34b6bfbbfc69eddcb8ab3c-merged.mount: Deactivated successfully.
Nov 22 10:18:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1675 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:19 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:20 compute-0 podman[439070]: 2025-11-22 10:18:20.052708427 +0000 UTC m=+1.726919930 container remove cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:18:20 compute-0 systemd[1]: libpod-conmon-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope: Deactivated successfully.
Nov 22 10:18:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:20.268+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:20 compute-0 podman[439111]: 2025-11-22 10:18:20.250773864 +0000 UTC m=+0.034222161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:20 compute-0 podman[439111]: 2025-11-22 10:18:20.454733847 +0000 UTC m=+0.238182134 container create a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 10:18:20 compute-0 nova_compute[253661]: 2025-11-22 10:18:20.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:20 compute-0 systemd[1]: Started libpod-conmon-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope.
Nov 22 10:18:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:20 compute-0 podman[439111]: 2025-11-22 10:18:20.603175975 +0000 UTC m=+0.386624312 container init a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:18:20 compute-0 podman[439111]: 2025-11-22 10:18:20.617369044 +0000 UTC m=+0.400817331 container start a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 10:18:20 compute-0 podman[439111]: 2025-11-22 10:18:20.62247885 +0000 UTC m=+0.405927157 container attach a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 10:18:20 compute-0 ceph-mon[75021]: pgmap v3499: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:20 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:20 compute-0 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1675 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:20 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:21.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:21 compute-0 infallible_hertz[439128]: {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     "0": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "devices": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "/dev/loop3"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             ],
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_name": "ceph_lv0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_size": "21470642176",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "name": "ceph_lv0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "tags": {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_name": "ceph",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.crush_device_class": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.encrypted": "0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_id": "0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.vdo": "0"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             },
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "vg_name": "ceph_vg0"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         }
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     ],
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     "1": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "devices": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "/dev/loop4"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             ],
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_name": "ceph_lv1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_size": "21470642176",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "name": "ceph_lv1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "tags": {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_name": "ceph",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.crush_device_class": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.encrypted": "0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_id": "1",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.vdo": "0"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             },
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "vg_name": "ceph_vg1"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         }
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     ],
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     "2": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "devices": [
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "/dev/loop5"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             ],
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_name": "ceph_lv2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_size": "21470642176",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "name": "ceph_lv2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "tags": {
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.cluster_name": "ceph",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.crush_device_class": "",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.encrypted": "0",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osd_id": "2",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:                 "ceph.vdo": "0"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             },
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "type": "block",
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:             "vg_name": "ceph_vg2"
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:         }
Nov 22 10:18:21 compute-0 infallible_hertz[439128]:     ]
Nov 22 10:18:21 compute-0 infallible_hertz[439128]: }
Nov 22 10:18:21 compute-0 systemd[1]: libpod-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope: Deactivated successfully.
Nov 22 10:18:21 compute-0 podman[439111]: 2025-11-22 10:18:21.408282591 +0000 UTC m=+1.191730898 container died a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 10:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584-merged.mount: Deactivated successfully.
Nov 22 10:18:21 compute-0 podman[439111]: 2025-11-22 10:18:21.496781056 +0000 UTC m=+1.280229313 container remove a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:18:21 compute-0 systemd[1]: libpod-conmon-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope: Deactivated successfully.
Nov 22 10:18:21 compute-0 sudo[439005]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:21 compute-0 sudo[439149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:21 compute-0 sudo[439149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:21 compute-0 sudo[439149]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:21 compute-0 sudo[439174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:18:21 compute-0 sudo[439174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:21 compute-0 sudo[439174]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:21 compute-0 sudo[439199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:21 compute-0 sudo[439199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:21 compute-0 sudo[439199]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:21 compute-0 sudo[439224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:18:21 compute-0 sudo[439224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:21 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.061220397 +0000 UTC m=+0.036175731 container create 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:18:22 compute-0 systemd[1]: Started libpod-conmon-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope.
Nov 22 10:18:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.128081 +0000 UTC m=+0.103036344 container init 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.135141993 +0000 UTC m=+0.110097357 container start 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 10:18:22 compute-0 eager_booth[439306]: 167 167
Nov 22 10:18:22 compute-0 systemd[1]: libpod-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope: Deactivated successfully.
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.139662704 +0000 UTC m=+0.114618048 container attach 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.140151606 +0000 UTC m=+0.115106940 container died 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.044858654 +0000 UTC m=+0.019814008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-eef13438b587f580fd572af637e8fc00c8a4962aa9de99f18bc5ce69cb6cb61b-merged.mount: Deactivated successfully.
Nov 22 10:18:22 compute-0 podman[439290]: 2025-11-22 10:18:22.171925627 +0000 UTC m=+0.146880961 container remove 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 10:18:22 compute-0 systemd[1]: libpod-conmon-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope: Deactivated successfully.
Nov 22 10:18:22 compute-0 nova_compute[253661]: 2025-11-22 10:18:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:22.228+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:22 compute-0 nova_compute[253661]: 2025-11-22 10:18:22.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:22 compute-0 nova_compute[253661]: 2025-11-22 10:18:22.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:18:22 compute-0 podman[439331]: 2025-11-22 10:18:22.350359193 +0000 UTC m=+0.050538463 container create 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:18:22 compute-0 systemd[1]: Started libpod-conmon-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope.
Nov 22 10:18:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:18:22 compute-0 podman[439331]: 2025-11-22 10:18:22.328791342 +0000 UTC m=+0.028970682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:18:22 compute-0 podman[439331]: 2025-11-22 10:18:22.433169538 +0000 UTC m=+0.133348848 container init 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 10:18:22 compute-0 podman[439331]: 2025-11-22 10:18:22.440676162 +0000 UTC m=+0.140855442 container start 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:18:22 compute-0 podman[439331]: 2025-11-22 10:18:22.444588628 +0000 UTC m=+0.144767908 container attach 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:22 compute-0 ceph-mon[75021]: pgmap v3500: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:22 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:18:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:23.269+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:23 compute-0 modest_tesla[439347]: {
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_id": 1,
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "type": "bluestore"
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     },
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_id": 0,
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "type": "bluestore"
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     },
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_id": 2,
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:18:23 compute-0 modest_tesla[439347]:         "type": "bluestore"
Nov 22 10:18:23 compute-0 modest_tesla[439347]:     }
Nov 22 10:18:23 compute-0 modest_tesla[439347]: }
Nov 22 10:18:23 compute-0 systemd[1]: libpod-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Deactivated successfully.
Nov 22 10:18:23 compute-0 podman[439331]: 2025-11-22 10:18:23.439853357 +0000 UTC m=+1.140032637 container died 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 10:18:23 compute-0 systemd[1]: libpod-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Consumed 1.003s CPU time.
Nov 22 10:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3-merged.mount: Deactivated successfully.
Nov 22 10:18:23 compute-0 podman[439331]: 2025-11-22 10:18:23.505551642 +0000 UTC m=+1.205730922 container remove 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 10:18:23 compute-0 systemd[1]: libpod-conmon-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Deactivated successfully.
Nov 22 10:18:23 compute-0 sudo[439224]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:18:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:18:23 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev b54fe231-2f01-4a0f-867c-9f3bb42d0d51 does not exist
Nov 22 10:18:23 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev bbfe4b76-50e5-4034-912b-21c22dc217e2 does not exist
Nov 22 10:18:23 compute-0 sudo[439413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:18:23 compute-0 sudo[439413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:23 compute-0 sudo[439413]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:23 compute-0 sudo[439438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:18:23 compute-0 sudo[439438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:18:23 compute-0 sudo[439438]: pam_unix(sudo:session): session closed for user root
Nov 22 10:18:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:18:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813828880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.715 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.871 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3497MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:18:23 compute-0 nova_compute[253661]: 2025-11-22 10:18:23.998 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:18:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:24.230+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:18:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878930382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.410 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.414 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.427 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.429 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:18:24 compute-0 nova_compute[253661]: 2025-11-22 10:18:24.429 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:18:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1680 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:24 compute-0 ceph-mon[75021]: pgmap v3501: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:24 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:24 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:18:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1813828880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:18:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2878930382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:18:24 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1680 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:25.242+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:25 compute-0 nova_compute[253661]: 2025-11-22 10:18:25.430 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:25 compute-0 nova_compute[253661]: 2025-11-22 10:18:25.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:25 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:26 compute-0 nova_compute[253661]: 2025-11-22 10:18:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:26.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:26 compute-0 podman[439487]: 2025-11-22 10:18:26.402350742 +0000 UTC m=+0.084329634 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 10:18:26 compute-0 podman[439488]: 2025-11-22 10:18:26.418292673 +0000 UTC m=+0.092530245 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 10:18:26 compute-0 ceph-mon[75021]: pgmap v3502: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:26 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:27.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:27 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:18:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:18:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.029 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:18:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:28.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:28 compute-0 ceph-mon[75021]: pgmap v3503: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:28 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:29.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:29 compute-0 nova_compute[253661]: 2025-11-22 10:18:29.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1685 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:29 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:29 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1685 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:30.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:30 compute-0 nova_compute[253661]: 2025-11-22 10:18:30.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:30 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:30 compute-0 ceph-mon[75021]: pgmap v3504: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:31.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:31 compute-0 podman[439527]: 2025-11-22 10:18:31.448704497 +0000 UTC m=+0.128605572 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:18:31 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:32.177+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:32 compute-0 nova_compute[253661]: 2025-11-22 10:18:32.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:18:32 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:32 compute-0 ceph-mon[75021]: pgmap v3505: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:33.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:33 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:34.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:34 compute-0 nova_compute[253661]: 2025-11-22 10:18:34.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1690 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:34 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:34 compute-0 ceph-mon[75021]: pgmap v3506: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:34 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1690 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:35.260+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:35 compute-0 nova_compute[253661]: 2025-11-22 10:18:35.561 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:35 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:36.238+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:36 compute-0 ceph-mon[75021]: pgmap v3507: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:36 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:37.265+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:37 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:38.256+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:38 compute-0 ceph-mon[75021]: pgmap v3508: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:38 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:39.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:39 compute-0 nova_compute[253661]: 2025-11-22 10:18:39.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1695 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:39 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:39 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1695 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:40.227+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:40 compute-0 nova_compute[253661]: 2025-11-22 10:18:40.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:40 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:40 compute-0 ceph-mon[75021]: pgmap v3509: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:41.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:41 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:42.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:42 compute-0 ceph-mon[75021]: pgmap v3510: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:42 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:42 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:43.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:43 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:44.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:44 compute-0 nova_compute[253661]: 2025-11-22 10:18:44.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1700 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:44 compute-0 ceph-mon[75021]: pgmap v3511: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:44 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:44 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1700 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:45.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:45 compute-0 nova_compute[253661]: 2025-11-22 10:18:45.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:45 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:46.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:46 compute-0 ceph-mon[75021]: pgmap v3512: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:46 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:47.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:47 compute-0 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:18:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:48.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:48 compute-0 ceph-mon[75021]: pgmap v3513: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:48 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:49 compute-0 nova_compute[253661]: 2025-11-22 10:18:49.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:49.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1705 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:49 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:49 compute-0 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1705 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:50.408+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:50 compute-0 nova_compute[253661]: 2025-11-22 10:18:50.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:50 compute-0 ceph-mon[75021]: pgmap v3514: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:50 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:51.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:51 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:18:52
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta']
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:18:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:52.440+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:18:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:18:52 compute-0 ceph-mon[75021]: pgmap v3515: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:52 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:53.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:53 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:54 compute-0 nova_compute[253661]: 2025-11-22 10:18:54.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:54.432+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1710 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:54 compute-0 ceph-mon[75021]: pgmap v3516: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:54 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:54 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1710 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:55.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:55 compute-0 nova_compute[253661]: 2025-11-22 10:18:55.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:55 compute-0 ceph-mon[75021]: pgmap v3517: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:55 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:56.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:18:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:18:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:18:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:18:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:18:56 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:57.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:57 compute-0 podman[439553]: 2025-11-22 10:18:57.405779805 +0000 UTC m=+0.082517329 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:18:57 compute-0 podman[439554]: 2025-11-22 10:18:57.414265374 +0000 UTC m=+0.088433434 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 10:18:57 compute-0 ceph-mon[75021]: pgmap v3518: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:57 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:58.335+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:58 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:59.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:18:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:59 compute-0 nova_compute[253661]: 2025-11-22 10:18:59.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:18:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:18:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1715 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:18:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:18:59 compute-0 ceph-mon[75021]: pgmap v3519: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:18:59 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:18:59 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1715 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:00.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:00 compute-0 nova_compute[253661]: 2025-11-22 10:19:00.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:00 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:01.359+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:01 compute-0 ceph-mon[75021]: pgmap v3520: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:01 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:02.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:02 compute-0 podman[439591]: 2025-11-22 10:19:02.478114638 +0000 UTC m=+0.146929042 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:19:02 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:03.361+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:19:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:19:03 compute-0 ceph-mon[75021]: pgmap v3521: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:03 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:04.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:04 compute-0 nova_compute[253661]: 2025-11-22 10:19:04.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1720 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.524511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744524625, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1213, "num_deletes": 258, "total_data_size": 1386403, "memory_usage": 1411376, "flush_reason": "Manual Compaction"}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744549398, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1354008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78557, "largest_seqno": 79769, "table_properties": {"data_size": 1348686, "index_size": 2525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13704, "raw_average_key_size": 20, "raw_value_size": 1336910, "raw_average_value_size": 1983, "num_data_blocks": 111, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806663, "oldest_key_time": 1763806663, "file_creation_time": 1763806744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 25251 microseconds, and 8149 cpu microseconds.
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.549758) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1354008 bytes OK
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.549807) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.553987) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554006) EVENT_LOG_v1 {"time_micros": 1763806744554000, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1380678, prev total WAL file size 1380678, number of live WAL files 2.
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373638' seq:72057594037927935, type:22 .. '6C6F676D0034303232' seq:0, type:0; will stop at (end)
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1322KB)], [185(9953KB)]
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744554868, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 11546221, "oldest_snapshot_seqno": -1}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10870 keys, 11404333 bytes, temperature: kUnknown
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744646831, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11404333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11338819, "index_size": 37241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 291017, "raw_average_key_size": 26, "raw_value_size": 11150622, "raw_average_value_size": 1025, "num_data_blocks": 1403, "num_entries": 10870, "num_filter_entries": 10870, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.647212) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11404333 bytes
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.649564) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.4 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(17.0) write-amplify(8.4) OK, records in: 11398, records dropped: 528 output_compression: NoCompression
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.649594) EVENT_LOG_v1 {"time_micros": 1763806744649581, "job": 116, "event": "compaction_finished", "compaction_time_micros": 92074, "compaction_time_cpu_micros": 34133, "output_level": 6, "num_output_files": 1, "total_output_size": 11404333, "num_input_records": 11398, "num_output_records": 10870, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744650155, "job": 116, "event": "table_file_deletion", "file_number": 187}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744653799, "job": 116, "event": "table_file_deletion", "file_number": 185}
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:19:04 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:04 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1720 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:05.354+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:05 compute-0 nova_compute[253661]: 2025-11-22 10:19:05.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:05 compute-0 ceph-mon[75021]: pgmap v3522: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:05 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:06.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:06 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:07.323+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:07 compute-0 ceph-mon[75021]: pgmap v3523: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:07 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:08.349+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:08 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:09.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:09 compute-0 nova_compute[253661]: 2025-11-22 10:19:09.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1725 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:10 compute-0 ceph-mon[75021]: pgmap v3524: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:10 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:10 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1725 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:10.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:10 compute-0 nova_compute[253661]: 2025-11-22 10:19:10.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:11 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:11.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:12 compute-0 ceph-mon[75021]: pgmap v3525: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:12 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:12.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:19:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:19:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:19:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:19:13 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:19:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:19:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:13.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:14 compute-0 ceph-mon[75021]: pgmap v3526: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:14 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:14.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:14 compute-0 nova_compute[253661]: 2025-11-22 10:19:14.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1730 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:15 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:15 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1730 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:15.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:15 compute-0 nova_compute[253661]: 2025-11-22 10:19:15.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:16 compute-0 ceph-mon[75021]: pgmap v3527: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:16 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:16 compute-0 nova_compute[253661]: 2025-11-22 10:19:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:16 compute-0 nova_compute[253661]: 2025-11-22 10:19:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:19:16 compute-0 nova_compute[253661]: 2025-11-22 10:19:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:19:16 compute-0 nova_compute[253661]: 2025-11-22 10:19:16.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:19:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:16.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:17 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:17.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:18 compute-0 ceph-mon[75021]: pgmap v3528: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:18 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:18 compute-0 nova_compute[253661]: 2025-11-22 10:19:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:18.428+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:19 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:19 compute-0 nova_compute[253661]: 2025-11-22 10:19:19.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:19.431+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1735 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:20 compute-0 ceph-mon[75021]: pgmap v3529: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:20 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:20 compute-0 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1735 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:20.433+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:20 compute-0 nova_compute[253661]: 2025-11-22 10:19:20.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:21 compute-0 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:19:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:21.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:22 compute-0 ceph-mon[75021]: pgmap v3530: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:22 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:22 compute-0 nova_compute[253661]: 2025-11-22 10:19:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:22.416+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:23 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:23 compute-0 nova_compute[253661]: 2025-11-22 10:19:23.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:23 compute-0 nova_compute[253661]: 2025-11-22 10:19:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:23 compute-0 nova_compute[253661]: 2025-11-22 10:19:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:19:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:23.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:23 compute-0 sudo[439620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:23 compute-0 sudo[439620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:23 compute-0 sudo[439620]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:23 compute-0 sudo[439645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:19:23 compute-0 sudo[439645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:23 compute-0 sudo[439645]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:23 compute-0 sudo[439670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:23 compute-0 sudo[439670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:23 compute-0 sudo[439670]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:24 compute-0 sudo[439695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:19:24 compute-0 sudo[439695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:24 compute-0 ceph-mon[75021]: pgmap v3531: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:24 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:24 compute-0 nova_compute[253661]: 2025-11-22 10:19:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:24 compute-0 nova_compute[253661]: 2025-11-22 10:19:24.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:24.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:24 compute-0 sudo[439695]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 078e78d2-9dc0-4142-aeac-4b34f096778a does not exist
Nov 22 10:19:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 07943f61-122f-4655-8429-46cf5a4a53ca does not exist
Nov 22 10:19:24 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 1584a471-b370-4e8a-964d-1df8beaf1048 does not exist
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:19:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1740 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:24 compute-0 sudo[439751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:24 compute-0 sudo[439751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:24 compute-0 sudo[439751]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:24 compute-0 sudo[439776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:19:24 compute-0 sudo[439776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:24 compute-0 sudo[439776]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:24 compute-0 sudo[439801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:24 compute-0 sudo[439801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:24 compute-0 sudo[439801]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:24 compute-0 sudo[439826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:19:24 compute-0 sudo[439826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.092364818 +0000 UTC m=+0.045439398 container create 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:19:25 compute-0 systemd[1]: Started libpod-conmon-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope.
Nov 22 10:19:25 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:19:25 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1740 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.071042035 +0000 UTC m=+0.024116665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.184303077 +0000 UTC m=+0.137377667 container init 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.203479969 +0000 UTC m=+0.156554549 container start 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.207692102 +0000 UTC m=+0.160766712 container attach 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 10:19:25 compute-0 stoic_kirch[439907]: 167 167
Nov 22 10:19:25 compute-0 systemd[1]: libpod-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope: Deactivated successfully.
Nov 22 10:19:25 compute-0 conmon[439907]: conmon 0ddb2d6e85c20a072a63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope/container/memory.events
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.212627494 +0000 UTC m=+0.165702084 container died 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3761fd6199af71927a20f44ca6c8dce06c2602ba715048b7a5ae20695adf8ddb-merged.mount: Deactivated successfully.
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:19:25 compute-0 podman[439890]: 2025-11-22 10:19:25.259416834 +0000 UTC m=+0.212491414 container remove 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:19:25 compute-0 systemd[1]: libpod-conmon-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope: Deactivated successfully.
Nov 22 10:19:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:25.426+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:25 compute-0 podman[439941]: 2025-11-22 10:19:25.496865879 +0000 UTC m=+0.078594042 container create e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:19:25 compute-0 systemd[1]: Started libpod-conmon-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope.
Nov 22 10:19:25 compute-0 podman[439941]: 2025-11-22 10:19:25.467364644 +0000 UTC m=+0.049092857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:25 compute-0 podman[439941]: 2025-11-22 10:19:25.604109684 +0000 UTC m=+0.185837867 container init e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:25 compute-0 podman[439941]: 2025-11-22 10:19:25.618817546 +0000 UTC m=+0.200545729 container start e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 10:19:25 compute-0 podman[439941]: 2025-11-22 10:19:25.623729726 +0000 UTC m=+0.205457899 container attach e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:19:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:19:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545251252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.702 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.858 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.859 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.860 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.860 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.959 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:19:25 compute-0 nova_compute[253661]: 2025-11-22 10:19:25.977 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:19:26 compute-0 ceph-mon[75021]: pgmap v3532: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:26 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/545251252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:19:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:19:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104404180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:19:26 compute-0 nova_compute[253661]: 2025-11-22 10:19:26.389 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:19:26 compute-0 nova_compute[253661]: 2025-11-22 10:19:26.399 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:19:26 compute-0 nova_compute[253661]: 2025-11-22 10:19:26.413 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:19:26 compute-0 nova_compute[253661]: 2025-11-22 10:19:26.416 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:19:26 compute-0 nova_compute[253661]: 2025-11-22 10:19:26.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:19:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:26.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:26 compute-0 admiring_ptolemy[439966]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:19:26 compute-0 admiring_ptolemy[439966]: --> relative data size: 1.0
Nov 22 10:19:26 compute-0 admiring_ptolemy[439966]: --> All data devices are unavailable
Nov 22 10:19:26 compute-0 systemd[1]: libpod-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope: Deactivated successfully.
Nov 22 10:19:26 compute-0 podman[439941]: 2025-11-22 10:19:26.656779334 +0000 UTC m=+1.238507467 container died e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e-merged.mount: Deactivated successfully.
Nov 22 10:19:26 compute-0 podman[439941]: 2025-11-22 10:19:26.732623438 +0000 UTC m=+1.314351571 container remove e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:19:26 compute-0 systemd[1]: libpod-conmon-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope: Deactivated successfully.
Nov 22 10:19:26 compute-0 sudo[439826]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:26 compute-0 sudo[440030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:26 compute-0 sudo[440030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:26 compute-0 sudo[440030]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:26 compute-0 sudo[440055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:19:26 compute-0 sudo[440055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:26 compute-0 sudo[440055]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:26 compute-0 sudo[440080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:26 compute-0 sudo[440080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:26 compute-0 sudo[440080]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:27 compute-0 sudo[440105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:19:27 compute-0 sudo[440105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2104404180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:19:27 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.377476045 +0000 UTC m=+0.036632551 container create 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:19:27 compute-0 systemd[1]: Started libpod-conmon-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope.
Nov 22 10:19:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.451017342 +0000 UTC m=+0.110173868 container init 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.360861367 +0000 UTC m=+0.020017893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.45740714 +0000 UTC m=+0.116563646 container start 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.460891695 +0000 UTC m=+0.120048201 container attach 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:19:27 compute-0 nice_taussig[440186]: 167 167
Nov 22 10:19:27 compute-0 systemd[1]: libpod-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope: Deactivated successfully.
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.462853894 +0000 UTC m=+0.122010420 container died 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 10:19:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-15b746bb366cbb6f62504d6cb45c6a12445e78799b268d86d9b5bf180529e52f-merged.mount: Deactivated successfully.
Nov 22 10:19:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:27.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:27 compute-0 podman[440170]: 2025-11-22 10:19:27.507059539 +0000 UTC m=+0.166216055 container remove 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:19:27 compute-0 systemd[1]: libpod-conmon-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope: Deactivated successfully.
Nov 22 10:19:27 compute-0 podman[440190]: 2025-11-22 10:19:27.517710042 +0000 UTC m=+0.064375433 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:27 compute-0 podman[440189]: 2025-11-22 10:19:27.517184159 +0000 UTC m=+0.064788874 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 10:19:27 compute-0 podman[440245]: 2025-11-22 10:19:27.663036493 +0000 UTC m=+0.045087359 container create 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 10:19:27 compute-0 systemd[1]: Started libpod-conmon-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope.
Nov 22 10:19:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:27 compute-0 podman[440245]: 2025-11-22 10:19:27.736484478 +0000 UTC m=+0.118535384 container init 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:19:27 compute-0 podman[440245]: 2025-11-22 10:19:27.643512523 +0000 UTC m=+0.025563429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:27 compute-0 podman[440245]: 2025-11-22 10:19:27.750707227 +0000 UTC m=+0.132758123 container start 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:19:27 compute-0 podman[440245]: 2025-11-22 10:19:27.756380626 +0000 UTC m=+0.138431542 container attach 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 10:19:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.029 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:19:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:19:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:19:28 compute-0 ceph-mon[75021]: pgmap v3533: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:28 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:28 compute-0 nova_compute[253661]: 2025-11-22 10:19:28.418 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]: {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     "0": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "devices": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "/dev/loop3"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             ],
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_name": "ceph_lv0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_size": "21470642176",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "name": "ceph_lv0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "tags": {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_name": "ceph",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.crush_device_class": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.encrypted": "0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_id": "0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.vdo": "0"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             },
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "vg_name": "ceph_vg0"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         }
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     ],
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     "1": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "devices": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "/dev/loop4"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             ],
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_name": "ceph_lv1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_size": "21470642176",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "name": "ceph_lv1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "tags": {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_name": "ceph",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.crush_device_class": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.encrypted": "0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_id": "1",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.vdo": "0"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             },
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "vg_name": "ceph_vg1"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         }
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     ],
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     "2": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "devices": [
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "/dev/loop5"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             ],
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_name": "ceph_lv2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_size": "21470642176",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "name": "ceph_lv2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "tags": {
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.cluster_name": "ceph",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.crush_device_class": "",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.encrypted": "0",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osd_id": "2",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:                 "ceph.vdo": "0"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             },
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "type": "block",
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:             "vg_name": "ceph_vg2"
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:         }
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]:     ]
Nov 22 10:19:28 compute-0 cranky_mirzakhani[440261]: }
Nov 22 10:19:28 compute-0 systemd[1]: libpod-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope: Deactivated successfully.
Nov 22 10:19:28 compute-0 podman[440245]: 2025-11-22 10:19:28.524701958 +0000 UTC m=+0.906752854 container died 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:28.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc-merged.mount: Deactivated successfully.
Nov 22 10:19:28 compute-0 podman[440245]: 2025-11-22 10:19:28.587973763 +0000 UTC m=+0.970024619 container remove 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:19:28 compute-0 systemd[1]: libpod-conmon-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope: Deactivated successfully.
Nov 22 10:19:28 compute-0 sudo[440105]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:28 compute-0 sudo[440282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:28 compute-0 sudo[440282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:28 compute-0 sudo[440282]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:28 compute-0 sudo[440307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:19:28 compute-0 sudo[440307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:28 compute-0 sudo[440307]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:28 compute-0 sudo[440332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:28 compute-0 sudo[440332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:28 compute-0 sudo[440332]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:28 compute-0 sudo[440357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:19:28 compute-0 sudo[440357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.212610415 +0000 UTC m=+0.076090292 container create ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 10:19:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.157056519 +0000 UTC m=+0.020536436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:29 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:29 compute-0 systemd[1]: Started libpod-conmon-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope.
Nov 22 10:19:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.434172489 +0000 UTC m=+0.297652456 container init ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:19:29 compute-0 nova_compute[253661]: 2025-11-22 10:19:29.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.445676502 +0000 UTC m=+0.309156429 container start ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 10:19:29 compute-0 upbeat_tharp[440438]: 167 167
Nov 22 10:19:29 compute-0 systemd[1]: libpod-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope: Deactivated successfully.
Nov 22 10:19:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1745 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.522969231 +0000 UTC m=+0.386449218 container attach ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.524277133 +0000 UTC m=+0.387757060 container died ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:19:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:29.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2da2154ab9b0540b708daaaf2a9a8dfb6fc2b8d0047515590b8ef4ee503eaba2-merged.mount: Deactivated successfully.
Nov 22 10:19:29 compute-0 podman[440422]: 2025-11-22 10:19:29.67917135 +0000 UTC m=+0.542651267 container remove ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 10:19:29 compute-0 systemd[1]: libpod-conmon-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope: Deactivated successfully.
Nov 22 10:19:29 compute-0 podman[440464]: 2025-11-22 10:19:29.869114367 +0000 UTC m=+0.044603816 container create 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:19:29 compute-0 systemd[1]: Started libpod-conmon-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope.
Nov 22 10:19:29 compute-0 podman[440464]: 2025-11-22 10:19:29.852686624 +0000 UTC m=+0.028176093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:19:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:19:29 compute-0 podman[440464]: 2025-11-22 10:19:29.973601546 +0000 UTC m=+0.149091015 container init 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 10:19:29 compute-0 podman[440464]: 2025-11-22 10:19:29.992732886 +0000 UTC m=+0.168222335 container start 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:19:29 compute-0 podman[440464]: 2025-11-22 10:19:29.996463927 +0000 UTC m=+0.171953376 container attach 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:30 compute-0 ceph-mon[75021]: pgmap v3534: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:30 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1745 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:30 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:30.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:30 compute-0 nova_compute[253661]: 2025-11-22 10:19:30.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]: {
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_id": 1,
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "type": "bluestore"
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     },
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_id": 0,
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "type": "bluestore"
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     },
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_id": 2,
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:         "type": "bluestore"
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]:     }
Nov 22 10:19:30 compute-0 pensive_brahmagupta[440481]: }
Nov 22 10:19:31 compute-0 systemd[1]: libpod-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Deactivated successfully.
Nov 22 10:19:31 compute-0 podman[440464]: 2025-11-22 10:19:31.027854504 +0000 UTC m=+1.203343963 container died 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 10:19:31 compute-0 systemd[1]: libpod-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Consumed 1.041s CPU time.
Nov 22 10:19:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871-merged.mount: Deactivated successfully.
Nov 22 10:19:31 compute-0 podman[440464]: 2025-11-22 10:19:31.09688819 +0000 UTC m=+1.272377649 container remove 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:19:31 compute-0 systemd[1]: libpod-conmon-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Deactivated successfully.
Nov 22 10:19:31 compute-0 sudo[440357]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:19:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:19:31 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev a1dc3f0d-c208-438e-a9eb-73f79ecdd683 does not exist
Nov 22 10:19:31 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 38361ec1-ac04-4d07-8d4d-5d61e1d2f4ed does not exist
Nov 22 10:19:31 compute-0 sudo[440529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:19:31 compute-0 sudo[440529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:31 compute-0 sudo[440529]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:31 compute-0 sudo[440554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:19:31 compute-0 sudo[440554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:19:31 compute-0 sudo[440554]: pam_unix(sudo:session): session closed for user root
Nov 22 10:19:31 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:31 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:19:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:31.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:32 compute-0 ceph-mon[75021]: pgmap v3535: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:32 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:32.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:33 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:33 compute-0 podman[440579]: 2025-11-22 10:19:33.403303472 +0000 UTC m=+0.098240926 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:33.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:34 compute-0 ceph-mon[75021]: pgmap v3536: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:34 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:34 compute-0 nova_compute[253661]: 2025-11-22 10:19:34.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1750 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:34.563+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:35 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1750 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:35 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:35.559+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:35 compute-0 nova_compute[253661]: 2025-11-22 10:19:35.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:36 compute-0 ceph-mon[75021]: pgmap v3537: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:36 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:36.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:37 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:37.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:38 compute-0 ceph-mon[75021]: pgmap v3538: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:38 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:38.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:39 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:39 compute-0 nova_compute[253661]: 2025-11-22 10:19:39.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1755 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:39.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:40 compute-0 ceph-mon[75021]: pgmap v3539: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:40 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1755 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:40 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:40 compute-0 nova_compute[253661]: 2025-11-22 10:19:40.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:40.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:41 compute-0 nova_compute[253661]: 2025-11-22 10:19:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:41 compute-0 nova_compute[253661]: 2025-11-22 10:19:41.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 10:19:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:41 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:41.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:42 compute-0 ceph-mon[75021]: pgmap v3540: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:42 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:42.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:43 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:43.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:44 compute-0 nova_compute[253661]: 2025-11-22 10:19:44.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:44 compute-0 ceph-mon[75021]: pgmap v3541: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:44 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1760 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:44.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:45 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1760 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:45 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:45 compute-0 nova_compute[253661]: 2025-11-22 10:19:45.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:45.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:46 compute-0 ceph-mon[75021]: pgmap v3542: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:46 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:46.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:47 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:47.618+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:48 compute-0 ceph-mon[75021]: pgmap v3543: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:48 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:48.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:49 compute-0 nova_compute[253661]: 2025-11-22 10:19:49.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:49 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1765 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:49.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:50 compute-0 nova_compute[253661]: 2025-11-22 10:19:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:19:50 compute-0 ceph-mon[75021]: pgmap v3544: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:50 compute-0 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1765 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:50 compute-0 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:19:50 compute-0 nova_compute[253661]: 2025-11-22 10:19:50.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:50.635+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:51 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:51.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:19:52
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:19:52 compute-0 ceph-mon[75021]: pgmap v3545: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:52 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:52.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:19:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:19:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:53.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:53 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:54 compute-0 nova_compute[253661]: 2025-11-22 10:19:54.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1770 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:19:54 compute-0 ceph-mon[75021]: pgmap v3546: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:54 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:54 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1770 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:54.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:55.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:55 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:55 compute-0 nova_compute[253661]: 2025-11-22 10:19:55.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:56.519+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:19:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:19:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:19:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:19:56 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:19:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:57 compute-0 ceph-mon[75021]: pgmap v3547: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:57 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:57.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:58 compute-0 podman[440606]: 2025-11-22 10:19:58.363935236 +0000 UTC m=+0.057421292 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:19:58 compute-0 podman[440607]: 2025-11-22 10:19:58.372946047 +0000 UTC m=+0.065292735 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 10:19:58 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:58 compute-0 ceph-mon[75021]: pgmap v3548: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:58 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:58.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:19:59 compute-0 nova_compute[253661]: 2025-11-22 10:19:59.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:19:59 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:19:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:19:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:59.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:19:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:19:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1775 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:19:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:00.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:00 compute-0 ceph-mon[75021]: pgmap v3549: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:00 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:00 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1775 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:00 compute-0 nova_compute[253661]: 2025-11-22 10:20:00.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:01 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:01.542+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:02.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:02 compute-0 ceph-mon[75021]: pgmap v3550: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:02 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:20:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:20:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:03.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:03 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:04 compute-0 podman[440639]: 2025-11-22 10:20:04.396540649 +0000 UTC m=+0.094494593 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 22 10:20:04 compute-0 nova_compute[253661]: 2025-11-22 10:20:04.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:04.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1780 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:04 compute-0 ceph-mon[75021]: pgmap v3551: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:04 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:04 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1780 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:05.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:05 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:05 compute-0 nova_compute[253661]: 2025-11-22 10:20:05.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:06.534+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:06 compute-0 ceph-mon[75021]: pgmap v3552: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:06 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:07.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:07 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:08.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:08 compute-0 ceph-mon[75021]: pgmap v3553: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:08 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:09 compute-0 nova_compute[253661]: 2025-11-22 10:20:09.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1785 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:09.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:09 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:09 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1785 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:10.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:10 compute-0 ceph-mon[75021]: pgmap v3554: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:10 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:10 compute-0 nova_compute[253661]: 2025-11-22 10:20:10.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:11.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:11 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:20:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:20:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:20:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:20:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:12.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:12 compute-0 ceph-mon[75021]: pgmap v3555: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:12 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:20:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:20:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:13.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:13 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:14 compute-0 nova_compute[253661]: 2025-11-22 10:20:14.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:14.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1790 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:14 compute-0 ceph-mon[75021]: pgmap v3556: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:14 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:14 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1790 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:15.468+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:15 compute-0 nova_compute[253661]: 2025-11-22 10:20:15.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:15 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:16.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:16 compute-0 ceph-mon[75021]: pgmap v3557: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:16 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:17 compute-0 nova_compute[253661]: 2025-11-22 10:20:17.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:17 compute-0 nova_compute[253661]: 2025-11-22 10:20:17.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:20:17 compute-0 nova_compute[253661]: 2025-11-22 10:20:17.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:20:17 compute-0 nova_compute[253661]: 2025-11-22 10:20:17.251 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:20:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:17.492+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:17 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:18 compute-0 nova_compute[253661]: 2025-11-22 10:20:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:18 compute-0 nova_compute[253661]: 2025-11-22 10:20:18.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 10:20:18 compute-0 nova_compute[253661]: 2025-11-22 10:20:18.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 10:20:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:18.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:18 compute-0 ceph-mon[75021]: pgmap v3558: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:18 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:19 compute-0 nova_compute[253661]: 2025-11-22 10:20:19.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:19.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1795 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:19 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:19 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1795 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:20 compute-0 nova_compute[253661]: 2025-11-22 10:20:20.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:20.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:20 compute-0 nova_compute[253661]: 2025-11-22 10:20:20.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:20 compute-0 ceph-mon[75021]: pgmap v3559: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:20 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:21.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:21 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:22 compute-0 nova_compute[253661]: 2025-11-22 10:20:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:22.529+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:22 compute-0 ceph-mon[75021]: pgmap v3560: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:22 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:23.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:23 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:23 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:24 compute-0 nova_compute[253661]: 2025-11-22 10:20:24.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:24 compute-0 nova_compute[253661]: 2025-11-22 10:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:24 compute-0 nova_compute[253661]: 2025-11-22 10:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:24 compute-0 nova_compute[253661]: 2025-11-22 10:20:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:20:24 compute-0 nova_compute[253661]: 2025-11-22 10:20:24.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1800 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:24.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:24 compute-0 ceph-mon[75021]: pgmap v3561: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:24 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1800 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:24 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:25.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:25 compute-0 nova_compute[253661]: 2025-11-22 10:20:25.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:25 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.847400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825847474, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 251, "total_data_size": 1350348, "memory_usage": 1372656, "flush_reason": "Manual Compaction"}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825861510, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1328764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79770, "largest_seqno": 80971, "table_properties": {"data_size": 1323460, "index_size": 2507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13861, "raw_average_key_size": 20, "raw_value_size": 1311792, "raw_average_value_size": 1963, "num_data_blocks": 110, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806744, "oldest_key_time": 1763806744, "file_creation_time": 1763806825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 14168 microseconds, and 8405 cpu microseconds.
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.861570) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1328764 bytes OK
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.861601) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863750) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863776) EVENT_LOG_v1 {"time_micros": 1763806825863768, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1344683, prev total WAL file size 1344683, number of live WAL files 2.
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.865088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1297KB)], [188(10MB)]
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825865152, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12733097, "oldest_snapshot_seqno": -1}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 11024 keys, 11340015 bytes, temperature: kUnknown
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825979355, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11340015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11273726, "index_size": 37632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 295451, "raw_average_key_size": 26, "raw_value_size": 11082861, "raw_average_value_size": 1005, "num_data_blocks": 1414, "num_entries": 11024, "num_filter_entries": 11024, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.979630) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11340015 bytes
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.981090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.4 rd, 99.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(18.1) write-amplify(8.5) OK, records in: 11538, records dropped: 514 output_compression: NoCompression
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.981111) EVENT_LOG_v1 {"time_micros": 1763806825981101, "job": 118, "event": "compaction_finished", "compaction_time_micros": 114291, "compaction_time_cpu_micros": 50219, "output_level": 6, "num_output_files": 1, "total_output_size": 11340015, "num_input_records": 11538, "num_output_records": 11024, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825981490, "job": 118, "event": "table_file_deletion", "file_number": 190}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825983435, "job": 118, "event": "table_file_deletion", "file_number": 188}
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.864953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:25 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.283 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:20:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:26.543+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:20:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78187973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.777 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:20:26 compute-0 ceph-mon[75021]: pgmap v3562: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:26 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/78187973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.977 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.979 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.979 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:20:26 compute-0 nova_compute[253661]: 2025-11-22 10:20:26.980 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.061 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.062 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.081 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:20:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:20:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840024796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.532 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.539 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.565 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.566 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:20:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:27.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:27 compute-0 nova_compute[253661]: 2025-11-22 10:20:27.567 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:20:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3840024796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:20:27 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:20:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.031 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:20:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.031 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:20:28 compute-0 nova_compute[253661]: 2025-11-22 10:20:28.567 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:28.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:28 compute-0 ceph-mon[75021]: pgmap v3563: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:28 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:29 compute-0 nova_compute[253661]: 2025-11-22 10:20:29.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:29 compute-0 podman[440709]: 2025-11-22 10:20:29.364746125 +0000 UTC m=+0.046942705 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:20:29 compute-0 podman[440710]: 2025-11-22 10:20:29.378215457 +0000 UTC m=+0.057056834 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:20:29 compute-0 nova_compute[253661]: 2025-11-22 10:20:29.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1805 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:29.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:30 compute-0 ceph-mon[75021]: pgmap v3564: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:30 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1805 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:30 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:30.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:30 compute-0 nova_compute[253661]: 2025-11-22 10:20:30.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:31 compute-0 sudo[440748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:31 compute-0 sudo[440748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:31 compute-0 sudo[440748]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:31 compute-0 sudo[440773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:20:31 compute-0 sudo[440773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:31 compute-0 sudo[440773]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:31 compute-0 sudo[440798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:31 compute-0 sudo[440798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:31 compute-0 sudo[440798]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:31 compute-0 sudo[440823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 10:20:31 compute-0 sudo[440823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:31.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:31 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:32 compute-0 podman[440921]: 2025-11-22 10:20:32.291058051 +0000 UTC m=+0.210874224 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 10:20:32 compute-0 podman[440942]: 2025-11-22 10:20:32.534274188 +0000 UTC m=+0.115573892 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 10:20:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:32.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:32 compute-0 podman[440921]: 2025-11-22 10:20:32.586219735 +0000 UTC m=+0.506035658 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 10:20:32 compute-0 ceph-mon[75021]: pgmap v3565: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:32 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:33 compute-0 sudo[440823]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:20:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:33 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:20:33 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:33 compute-0 sudo[441080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:33 compute-0 sudo[441080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:33 compute-0 sudo[441080]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:33 compute-0 sudo[441105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:20:33 compute-0 sudo[441105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:33 compute-0 sudo[441105]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:33.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:33 compute-0 sudo[441130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:33 compute-0 sudo[441130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:33 compute-0 sudo[441130]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:33 compute-0 sudo[441155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:20:33 compute-0 sudo[441155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:33 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:33 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:33 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:34 compute-0 sudo[441155]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev fdb8d409-e675-4940-a70c-8adfd5918053 does not exist
Nov 22 10:20:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev d8bba59b-da59-485a-bb63-bd4f218666df does not exist
Nov 22 10:20:34 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 3db106c2-faf2-4b79-a52a-47022f82c1b9 does not exist
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:20:34 compute-0 sudo[441210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:34 compute-0 sudo[441210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:34 compute-0 sudo[441210]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:34 compute-0 sudo[441235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:20:34 compute-0 sudo[441235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:34 compute-0 sudo[441235]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:34 compute-0 sudo[441260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:34 compute-0 sudo[441260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:34 compute-0 sudo[441260]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:34 compute-0 sudo[441285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:20:34 compute-0 sudo[441285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:34 compute-0 nova_compute[253661]: 2025-11-22 10:20:34.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1810 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:34.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:34 compute-0 podman[441309]: 2025-11-22 10:20:34.56364001 +0000 UTC m=+0.104341295 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.792765721 +0000 UTC m=+0.044802522 container create a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 10:20:34 compute-0 ceph-mon[75021]: pgmap v3566: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:34 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:20:34 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1810 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:34 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:34 compute-0 systemd[1]: Started libpod-conmon-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope.
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.771670553 +0000 UTC m=+0.023707244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.894160992 +0000 UTC m=+0.146197753 container init a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.906019934 +0000 UTC m=+0.158056645 container start a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.910391152 +0000 UTC m=+0.162427913 container attach a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:20:34 compute-0 ecstatic_shamir[441391]: 167 167
Nov 22 10:20:34 compute-0 systemd[1]: libpod-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope: Deactivated successfully.
Nov 22 10:20:34 compute-0 conmon[441391]: conmon a89292f7a1e76705d501 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope/container/memory.events
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.916258626 +0000 UTC m=+0.168295307 container died a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec9ad2b5109ea0a8a473b5146e6c059757a2a3647555ec51097f654a19e06d5-merged.mount: Deactivated successfully.
Nov 22 10:20:34 compute-0 podman[441375]: 2025-11-22 10:20:34.965481615 +0000 UTC m=+0.217518326 container remove a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:20:34 compute-0 systemd[1]: libpod-conmon-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope: Deactivated successfully.
Nov 22 10:20:35 compute-0 podman[441414]: 2025-11-22 10:20:35.169812646 +0000 UTC m=+0.067250293 container create e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:20:35 compute-0 systemd[1]: Started libpod-conmon-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope.
Nov 22 10:20:35 compute-0 podman[441414]: 2025-11-22 10:20:35.148184635 +0000 UTC m=+0.045622312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:35 compute-0 podman[441414]: 2025-11-22 10:20:35.269577588 +0000 UTC m=+0.167015325 container init e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:20:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:35 compute-0 podman[441414]: 2025-11-22 10:20:35.277928193 +0000 UTC m=+0.175365860 container start e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:20:35 compute-0 podman[441414]: 2025-11-22 10:20:35.282345422 +0000 UTC m=+0.179783109 container attach e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:20:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:35.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:35 compute-0 nova_compute[253661]: 2025-11-22 10:20:35.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:35 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:36 compute-0 nova_compute[253661]: 2025-11-22 10:20:36.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:20:36 compute-0 brave_murdock[441430]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:20:36 compute-0 brave_murdock[441430]: --> relative data size: 1.0
Nov 22 10:20:36 compute-0 brave_murdock[441430]: --> All data devices are unavailable
Nov 22 10:20:36 compute-0 systemd[1]: libpod-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope: Deactivated successfully.
Nov 22 10:20:36 compute-0 podman[441459]: 2025-11-22 10:20:36.355856684 +0000 UTC m=+0.026446761 container died e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 10:20:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37-merged.mount: Deactivated successfully.
Nov 22 10:20:36 compute-0 podman[441459]: 2025-11-22 10:20:36.415586662 +0000 UTC m=+0.086176719 container remove e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 10:20:36 compute-0 systemd[1]: libpod-conmon-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope: Deactivated successfully.
Nov 22 10:20:36 compute-0 sudo[441285]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:36 compute-0 sudo[441475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:36 compute-0 sudo[441475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:36 compute-0 sudo[441475]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:36.545+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:36 compute-0 sudo[441500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:20:36 compute-0 sudo[441500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:36 compute-0 sudo[441500]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:36 compute-0 sudo[441525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:36 compute-0 sudo[441525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:36 compute-0 sudo[441525]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:36 compute-0 sudo[441550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:20:36 compute-0 sudo[441550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:36 compute-0 ceph-mon[75021]: pgmap v3567: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:36 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:36 compute-0 podman[441615]: 2025-11-22 10:20:36.990700026 +0000 UTC m=+0.042586408 container create df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 10:20:37 compute-0 systemd[1]: Started libpod-conmon-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope.
Nov 22 10:20:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:36.971914874 +0000 UTC m=+0.023801266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:37.071085621 +0000 UTC m=+0.122972083 container init df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:37.078514903 +0000 UTC m=+0.130401275 container start df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:37.082481941 +0000 UTC m=+0.134368403 container attach df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 10:20:37 compute-0 elated_shannon[441631]: 167 167
Nov 22 10:20:37 compute-0 systemd[1]: libpod-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope: Deactivated successfully.
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:37.087654008 +0000 UTC m=+0.139540390 container died df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c07bb9e05b0a1c1226fefcfce4a9dce27a4b4ddce535d88418a72d6e14a1284e-merged.mount: Deactivated successfully.
Nov 22 10:20:37 compute-0 podman[441615]: 2025-11-22 10:20:37.153138178 +0000 UTC m=+0.205024560 container remove df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 10:20:37 compute-0 systemd[1]: libpod-conmon-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope: Deactivated successfully.
Nov 22 10:20:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:37 compute-0 podman[441655]: 2025-11-22 10:20:37.39008425 +0000 UTC m=+0.069981320 container create 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 10:20:37 compute-0 systemd[1]: Started libpod-conmon-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope.
Nov 22 10:20:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:37 compute-0 podman[441655]: 2025-11-22 10:20:37.366995843 +0000 UTC m=+0.046892943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:37 compute-0 podman[441655]: 2025-11-22 10:20:37.471231655 +0000 UTC m=+0.151128815 container init 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 10:20:37 compute-0 podman[441655]: 2025-11-22 10:20:37.485383992 +0000 UTC m=+0.165281072 container start 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 10:20:37 compute-0 podman[441655]: 2025-11-22 10:20:37.490716804 +0000 UTC m=+0.170613944 container attach 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:20:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:37.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:37 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]: {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     "0": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "devices": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "/dev/loop3"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             ],
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_name": "ceph_lv0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_size": "21470642176",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "name": "ceph_lv0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "tags": {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_name": "ceph",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.crush_device_class": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.encrypted": "0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_id": "0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.vdo": "0"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             },
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "vg_name": "ceph_vg0"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         }
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     ],
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     "1": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "devices": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "/dev/loop4"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             ],
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_name": "ceph_lv1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_size": "21470642176",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "name": "ceph_lv1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "tags": {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_name": "ceph",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.crush_device_class": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.encrypted": "0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_id": "1",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.vdo": "0"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             },
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "vg_name": "ceph_vg1"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         }
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     ],
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     "2": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "devices": [
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "/dev/loop5"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             ],
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_name": "ceph_lv2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_size": "21470642176",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "name": "ceph_lv2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "tags": {
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.cluster_name": "ceph",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.crush_device_class": "",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.encrypted": "0",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osd_id": "2",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:                 "ceph.vdo": "0"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             },
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "type": "block",
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:             "vg_name": "ceph_vg2"
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:         }
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]:     ]
Nov 22 10:20:38 compute-0 hardcore_lichterman[441672]: }
Nov 22 10:20:38 compute-0 systemd[1]: libpod-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope: Deactivated successfully.
Nov 22 10:20:38 compute-0 podman[441655]: 2025-11-22 10:20:38.334371776 +0000 UTC m=+1.014268836 container died 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:20:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55-merged.mount: Deactivated successfully.
Nov 22 10:20:38 compute-0 podman[441655]: 2025-11-22 10:20:38.399734763 +0000 UTC m=+1.079631833 container remove 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:20:38 compute-0 systemd[1]: libpod-conmon-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope: Deactivated successfully.
Nov 22 10:20:38 compute-0 sudo[441550]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:38 compute-0 sudo[441695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:38 compute-0 sudo[441695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:38 compute-0 sudo[441695]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:38.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:38 compute-0 sudo[441720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:20:38 compute-0 sudo[441720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:38 compute-0 sudo[441720]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:38 compute-0 sudo[441745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:38 compute-0 sudo[441745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:38 compute-0 sudo[441745]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:38 compute-0 sudo[441770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:20:38 compute-0 sudo[441770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:38 compute-0 ceph-mon[75021]: pgmap v3568: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:38 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.083723031 +0000 UTC m=+0.049309342 container create 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 10:20:39 compute-0 systemd[1]: Started libpod-conmon-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope.
Nov 22 10:20:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.058587243 +0000 UTC m=+0.024173554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.170744771 +0000 UTC m=+0.136331072 container init 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.178276015 +0000 UTC m=+0.143862316 container start 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.181971976 +0000 UTC m=+0.147558337 container attach 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 10:20:39 compute-0 magical_lehmann[441851]: 167 167
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.186159629 +0000 UTC m=+0.151745910 container died 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:20:39 compute-0 systemd[1]: libpod-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope: Deactivated successfully.
Nov 22 10:20:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f838eef0af9a60cc141d8bc078239a9a97443439a3906f35c8e44163a3e7b5e8-merged.mount: Deactivated successfully.
Nov 22 10:20:39 compute-0 podman[441834]: 2025-11-22 10:20:39.224221485 +0000 UTC m=+0.189807766 container remove 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:20:39 compute-0 systemd[1]: libpod-conmon-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope: Deactivated successfully.
Nov 22 10:20:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:39 compute-0 podman[441873]: 2025-11-22 10:20:39.431628031 +0000 UTC m=+0.044681299 container create 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 10:20:39 compute-0 systemd[1]: Started libpod-conmon-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope.
Nov 22 10:20:39 compute-0 nova_compute[253661]: 2025-11-22 10:20:39.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:20:39 compute-0 podman[441873]: 2025-11-22 10:20:39.414388008 +0000 UTC m=+0.027441296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:20:39 compute-0 podman[441873]: 2025-11-22 10:20:39.52065447 +0000 UTC m=+0.133707748 container init 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 10:20:39 compute-0 podman[441873]: 2025-11-22 10:20:39.531380843 +0000 UTC m=+0.144434151 container start 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:20:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1815 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:39 compute-0 podman[441873]: 2025-11-22 10:20:39.537150654 +0000 UTC m=+0.150203932 container attach 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:20:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:39.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:39 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1815 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:39 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]: {
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_id": 1,
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "type": "bluestore"
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     },
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_id": 0,
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "type": "bluestore"
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     },
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_id": 2,
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:         "type": "bluestore"
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]:     }
Nov 22 10:20:40 compute-0 stupefied_archimedes[441890]: }
Nov 22 10:20:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:40.510+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:40 compute-0 systemd[1]: libpod-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Deactivated successfully.
Nov 22 10:20:40 compute-0 systemd[1]: libpod-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Consumed 1.014s CPU time.
Nov 22 10:20:40 compute-0 podman[441873]: 2025-11-22 10:20:40.539944619 +0000 UTC m=+1.152997897 container died 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1-merged.mount: Deactivated successfully.
Nov 22 10:20:40 compute-0 podman[441873]: 2025-11-22 10:20:40.602171478 +0000 UTC m=+1.215224746 container remove 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 10:20:40 compute-0 systemd[1]: libpod-conmon-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Deactivated successfully.
Nov 22 10:20:40 compute-0 sudo[441770]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:40 compute-0 nova_compute[253661]: 2025-11-22 10:20:40.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:20:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:20:40 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 85d70833-0a74-4e08-a351-20fde5b84a3c does not exist
Nov 22 10:20:40 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev c96460a7-bbde-46da-99ed-8aed1868288d does not exist
Nov 22 10:20:40 compute-0 sudo[441937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:20:40 compute-0 sudo[441937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:40 compute-0 sudo[441937]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:40 compute-0 sudo[441962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:20:40 compute-0 sudo[441962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:20:40 compute-0 sudo[441962]: pam_unix(sudo:session): session closed for user root
Nov 22 10:20:40 compute-0 ceph-mon[75021]: pgmap v3569: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:40 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:40 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:20:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:41.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:41 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:42.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:42 compute-0 ceph-mon[75021]: pgmap v3570: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:42 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:43.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:43 compute-0 ceph-mon[75021]: pgmap v3571: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:43 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:44 compute-0 nova_compute[253661]: 2025-11-22 10:20:44.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1820 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:44.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:44 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1820 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:44 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:45.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:45 compute-0 nova_compute[253661]: 2025-11-22 10:20:45.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:45 compute-0 ceph-mon[75021]: pgmap v3572: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:45 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:46.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:46 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:47.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:47 compute-0 ceph-mon[75021]: pgmap v3573: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:47 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:48.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:48 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:49 compute-0 nova_compute[253661]: 2025-11-22 10:20:49.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1825 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:49.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:49 compute-0 ceph-mon[75021]: pgmap v3574: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:49 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1825 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:49 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:50.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:50 compute-0 nova_compute[253661]: 2025-11-22 10:20:50.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:50 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:51.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:51 compute-0 ceph-mon[75021]: pgmap v3575: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:51 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:20:52
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms']
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:20:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:52.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:20:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:20:52 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:53.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:54 compute-0 ceph-mon[75021]: pgmap v3576: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:54 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:54 compute-0 nova_compute[253661]: 2025-11-22 10:20:54.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1830 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:54.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:55 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1830 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:55 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:55.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:55 compute-0 nova_compute[253661]: 2025-11-22 10:20:55.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:56 compute-0 ceph-mon[75021]: pgmap v3577: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:56 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:56.681+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:57 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:20:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:20:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:57.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:20:57 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 81K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1840 writes, 8776 keys, 1840 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
                                           Interval WAL: 1840 writes, 1840 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.3      1.99              0.34        59    0.034       0      0       0.0       0.0
                                             L6      1/0   10.81 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.5    107.5     92.7      5.38              1.61        58    0.093    434K    30K       0.0       0.0
                                            Sum      1/0   10.81 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.5     78.4     79.9      7.37              1.95       117    0.063    434K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.8    125.4    126.4      0.61              0.31        14    0.044     77K   3578       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    107.5     92.7      5.38              1.61        58    0.093    434K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.4      1.99              0.34        58    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.088, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.58 GB write, 0.09 MB/s write, 0.56 GB read, 0.09 MB/s read, 7.4 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 63.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000467 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4085,60.54 MB,19.9133%) FilterBlock(118,1.19 MB,0.390218%) IndexBlock(118,1.83 MB,0.600619%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 10:20:58 compute-0 ceph-mon[75021]: pgmap v3578: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:58 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:58.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:20:59 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:20:59 compute-0 nova_compute[253661]: 2025-11-22 10:20:59.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:20:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:20:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1835 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:20:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:20:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:59.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:20:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:00 compute-0 podman[441987]: 2025-11-22 10:21:00.378111986 +0000 UTC m=+0.059971905 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 10:21:00 compute-0 podman[441988]: 2025-11-22 10:21:00.391915235 +0000 UTC m=+0.073173059 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:21:00 compute-0 ceph-mon[75021]: pgmap v3579: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:00 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1835 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:00 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:00 compute-0 nova_compute[253661]: 2025-11-22 10:21:00.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:00.659+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:01 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:01.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:02 compute-0 ceph-mon[75021]: pgmap v3580: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:02 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:02.639+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:21:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:21:03 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:03.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:04 compute-0 nova_compute[253661]: 2025-11-22 10:21:04.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:04 compute-0 ceph-mon[75021]: pgmap v3581: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:04 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:04 compute-0 sshd-session[442026]: Invalid user solana from 92.118.39.92 port 40498
Nov 22 10:21:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1840 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:04 compute-0 sshd-session[442026]: Connection closed by invalid user solana 92.118.39.92 port 40498 [preauth]
Nov 22 10:21:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:04.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:05 compute-0 podman[442028]: 2025-11-22 10:21:05.374871601 +0000 UTC m=+0.072621585 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 10:21:05 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1840 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:05 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:05.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:05 compute-0 nova_compute[253661]: 2025-11-22 10:21:05.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:06 compute-0 ceph-mon[75021]: pgmap v3582: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:06 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:06.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:07 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:07.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:08 compute-0 ceph-mon[75021]: pgmap v3583: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:08 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:08.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:09 compute-0 nova_compute[253661]: 2025-11-22 10:21:09.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1845 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:09 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:09 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1845 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:09.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:10 compute-0 ceph-mon[75021]: pgmap v3584: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:10 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:10 compute-0 nova_compute[253661]: 2025-11-22 10:21:10.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:10.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:11 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:11.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:21:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:21:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:21:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:21:12 compute-0 ceph-mon[75021]: pgmap v3585: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:12 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:21:12 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:21:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:12.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:13 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:13.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:14 compute-0 nova_compute[253661]: 2025-11-22 10:21:14.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1850 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:14.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:14 compute-0 ceph-mon[75021]: pgmap v3586: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:14 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:14 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1850 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:15.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:15 compute-0 nova_compute[253661]: 2025-11-22 10:21:15.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:15 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:16.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:16 compute-0 ceph-mon[75021]: pgmap v3587: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:16 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:16 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:17.624+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:17 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:18.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:18 compute-0 ceph-mon[75021]: pgmap v3588: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:18 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:19 compute-0 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:19 compute-0 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:21:19 compute-0 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:21:19 compute-0 nova_compute[253661]: 2025-11-22 10:21:19.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:21:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:19 compute-0 nova_compute[253661]: 2025-11-22 10:21:19.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1855 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:19 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:19.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:19 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:19 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:19 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1855 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:19 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:20.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:20 compute-0 nova_compute[253661]: 2025-11-22 10:21:20.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:20 compute-0 ceph-mon[75021]: pgmap v3589: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:20 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:21 compute-0 nova_compute[253661]: 2025-11-22 10:21:21.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:21.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:21 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:21 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:21 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:22 compute-0 nova_compute[253661]: 2025-11-22 10:21:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:22.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:22 compute-0 ceph-mon[75021]: pgmap v3590: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:22 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:23.633+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:23 compute-0 ceph-mon[75021]: pgmap v3591: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:23 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:24 compute-0 nova_compute[253661]: 2025-11-22 10:21:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:24 compute-0 nova_compute[253661]: 2025-11-22 10:21:24.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1860 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:24.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:24 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:24 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:24 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1860 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:24 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:25.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:25 compute-0 nova_compute[253661]: 2025-11-22 10:21:25.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:25 compute-0 ceph-mon[75021]: pgmap v3592: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:25 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.262 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:21:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:26.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:21:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603466631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.733 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.907 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.908 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.908 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:21:26 compute-0 nova_compute[253661]: 2025-11-22 10:21:26.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:21:26 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2603466631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.005 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.006 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.039 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:21:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:21:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433023368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.486 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.494 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.511 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.514 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:21:27 compute-0 nova_compute[253661]: 2025-11-22 10:21:27.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:21:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:27.692+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:27 compute-0 ceph-mon[75021]: pgmap v3593: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/433023368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:21:27 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:21:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:21:28 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:28.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:28 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:28 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:28 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:29 compute-0 nova_compute[253661]: 2025-11-22 10:21:29.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1864 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:29.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:29 compute-0 ceph-mon[75021]: pgmap v3594: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:29 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1864 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:29 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:30 compute-0 nova_compute[253661]: 2025-11-22 10:21:30.516 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:30.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:30 compute-0 nova_compute[253661]: 2025-11-22 10:21:30.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:30 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:31 compute-0 nova_compute[253661]: 2025-11-22 10:21:31.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:21:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:31 compute-0 podman[442099]: 2025-11-22 10:21:31.375683357 +0000 UTC m=+0.055934576 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 10:21:31 compute-0 podman[442100]: 2025-11-22 10:21:31.45230719 +0000 UTC m=+0.121401905 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 10:21:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:31.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:31 compute-0 ceph-mon[75021]: pgmap v3595: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:31 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:32.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:33 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:33.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:34 compute-0 ceph-mon[75021]: pgmap v3596: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:34 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:34 compute-0 nova_compute[253661]: 2025-11-22 10:21:34.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:34.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:35 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:35 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:35 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:35.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:35 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:35 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:35 compute-0 nova_compute[253661]: 2025-11-22 10:21:35.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:36 compute-0 ceph-mon[75021]: pgmap v3597: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:36 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:36 compute-0 podman[442138]: 2025-11-22 10:21:36.420517535 +0000 UTC m=+0.110576038 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 10:21:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:36.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:37 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:37.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:38 compute-0 ceph-mon[75021]: pgmap v3598: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:38 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:38.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:39 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:39 compute-0 nova_compute[253661]: 2025-11-22 10:21:39.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:39.734+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:40 compute-0 ceph-mon[75021]: pgmap v3599: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:40 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:40 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:40 compute-0 nova_compute[253661]: 2025-11-22 10:21:40.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:40.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:40 compute-0 sudo[442166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:40 compute-0 sudo[442166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:40 compute-0 sudo[442166]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:40 compute-0 sudo[442191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:21:40 compute-0 sudo[442191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:40 compute-0 sudo[442191]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:40 compute-0 sudo[442216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:40 compute-0 sudo[442216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:40 compute-0 sudo[442216]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:41 compute-0 sudo[442241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 10:21:41 compute-0 sudo[442241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:41 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:41 compute-0 sudo[442241]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 5da41cd7-8ed4-48b3-80aa-aa4aab63c7f8 does not exist
Nov 22 10:21:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 4408186e-0b23-45ee-8858-d3204aef80fe does not exist
Nov 22 10:21:41 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 41d65415-6e42-4530-9875-fe121bd0cb25 does not exist
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:21:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:21:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:21:41 compute-0 sudo[442296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:41 compute-0 sudo[442296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:41 compute-0 sudo[442296]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:41.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:41 compute-0 sudo[442321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:21:41 compute-0 sudo[442321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:41 compute-0 sudo[442321]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:41 compute-0 sudo[442346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:41 compute-0 sudo[442346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:41 compute-0 sudo[442346]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:41 compute-0 sudo[442371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 10:21:41 compute-0 sudo[442371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:42 compute-0 ceph-mon[75021]: pgmap v3600: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 10:21:42 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:21:42 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.217925698 +0000 UTC m=+0.053429244 container create 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 10:21:42 compute-0 systemd[1]: Started libpod-conmon-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope.
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.191088578 +0000 UTC m=+0.026592194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.308539914 +0000 UTC m=+0.144043450 container init 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.315976997 +0000 UTC m=+0.151480563 container start 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.319964695 +0000 UTC m=+0.155468231 container attach 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 10:21:42 compute-0 reverent_greider[442455]: 167 167
Nov 22 10:21:42 compute-0 systemd[1]: libpod-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope: Deactivated successfully.
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.326247709 +0000 UTC m=+0.161751235 container died 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4489e64e7ac81052c16533d6952790d2a675e6f4e691898e76d66efd5ceb67d2-merged.mount: Deactivated successfully.
Nov 22 10:21:42 compute-0 podman[442438]: 2025-11-22 10:21:42.374815173 +0000 UTC m=+0.210318729 container remove 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:21:42 compute-0 systemd[1]: libpod-conmon-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope: Deactivated successfully.
Nov 22 10:21:42 compute-0 podman[442480]: 2025-11-22 10:21:42.577092374 +0000 UTC m=+0.038212510 container create 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:21:42 compute-0 systemd[1]: Started libpod-conmon-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope.
Nov 22 10:21:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:42 compute-0 podman[442480]: 2025-11-22 10:21:42.643777953 +0000 UTC m=+0.104898139 container init 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 10:21:42 compute-0 podman[442480]: 2025-11-22 10:21:42.654593539 +0000 UTC m=+0.115713675 container start 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 10:21:42 compute-0 podman[442480]: 2025-11-22 10:21:42.561015739 +0000 UTC m=+0.022135885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:42 compute-0 podman[442480]: 2025-11-22 10:21:42.659101859 +0000 UTC m=+0.120222015 container attach 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 10:21:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:42.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:43 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:43 compute-0 bold_ganguly[442497]: --> passed data devices: 0 physical, 3 LVM
Nov 22 10:21:43 compute-0 bold_ganguly[442497]: --> relative data size: 1.0
Nov 22 10:21:43 compute-0 bold_ganguly[442497]: --> All data devices are unavailable
Nov 22 10:21:43 compute-0 systemd[1]: libpod-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Deactivated successfully.
Nov 22 10:21:43 compute-0 systemd[1]: libpod-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Consumed 1.007s CPU time.
Nov 22 10:21:43 compute-0 podman[442480]: 2025-11-22 10:21:43.72403579 +0000 UTC m=+1.185155936 container died 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:21:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:43.752+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461-merged.mount: Deactivated successfully.
Nov 22 10:21:43 compute-0 podman[442480]: 2025-11-22 10:21:43.861521669 +0000 UTC m=+1.322641805 container remove 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:21:43 compute-0 systemd[1]: libpod-conmon-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Deactivated successfully.
Nov 22 10:21:43 compute-0 sudo[442371]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:43 compute-0 sudo[442539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:43 compute-0 sudo[442539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:44 compute-0 sudo[442539]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:44 compute-0 sudo[442564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:21:44 compute-0 sudo[442564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:44 compute-0 sudo[442564]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:44 compute-0 ceph-mon[75021]: pgmap v3601: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:44 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:44 compute-0 sudo[442589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:44 compute-0 sudo[442589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:44 compute-0 sudo[442589]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:44 compute-0 sudo[442614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- lvm list --format json
Nov 22 10:21:44 compute-0 sudo[442614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:44 compute-0 nova_compute[253661]: 2025-11-22 10:21:44.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.561035889 +0000 UTC m=+0.043406797 container create 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:21:44 compute-0 systemd[1]: Started libpod-conmon-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope.
Nov 22 10:21:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.634185567 +0000 UTC m=+0.116556495 container init 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.538835684 +0000 UTC m=+0.021206642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.641581359 +0000 UTC m=+0.123952267 container start 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.645604458 +0000 UTC m=+0.127975386 container attach 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 10:21:44 compute-0 boring_merkle[442695]: 167 167
Nov 22 10:21:44 compute-0 systemd[1]: libpod-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope: Deactivated successfully.
Nov 22 10:21:44 compute-0 conmon[442695]: conmon 43a7c7b8cdbf7f8aaea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope/container/memory.events
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.648240713 +0000 UTC m=+0.130611621 container died 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a7ef8a479cbc979bb48582dffb246e089e2ea5a4ecc77efe7270d51c42c71f-merged.mount: Deactivated successfully.
Nov 22 10:21:44 compute-0 podman[442679]: 2025-11-22 10:21:44.68801872 +0000 UTC m=+0.170389658 container remove 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:21:44 compute-0 systemd[1]: libpod-conmon-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope: Deactivated successfully.
Nov 22 10:21:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:44.796+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:44 compute-0 podman[442718]: 2025-11-22 10:21:44.921229231 +0000 UTC m=+0.062774753 container create f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:21:44 compute-0 systemd[1]: Started libpod-conmon-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope.
Nov 22 10:21:44 compute-0 podman[442718]: 2025-11-22 10:21:44.887698338 +0000 UTC m=+0.029243860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:45 compute-0 podman[442718]: 2025-11-22 10:21:45.014579615 +0000 UTC m=+0.156125117 container init f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 10:21:45 compute-0 podman[442718]: 2025-11-22 10:21:45.022202713 +0000 UTC m=+0.163748195 container start f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 10:21:45 compute-0 podman[442718]: 2025-11-22 10:21:45.026513199 +0000 UTC m=+0.168058681 container attach f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 10:21:45 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:45 compute-0 nova_compute[253661]: 2025-11-22 10:21:45.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:45.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]: {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     "0": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "devices": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "/dev/loop3"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             ],
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_name": "ceph_lv0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_size": "21470642176",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "name": "ceph_lv0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "tags": {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_name": "ceph",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.crush_device_class": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.encrypted": "0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_id": "0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.vdo": "0"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             },
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "vg_name": "ceph_vg0"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         }
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     ],
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     "1": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "devices": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "/dev/loop4"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             ],
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_name": "ceph_lv1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_size": "21470642176",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "name": "ceph_lv1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "tags": {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_name": "ceph",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.crush_device_class": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.encrypted": "0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_id": "1",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.vdo": "0"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             },
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "vg_name": "ceph_vg1"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         }
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     ],
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     "2": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "devices": [
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "/dev/loop5"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             ],
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_name": "ceph_lv2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_size": "21470642176",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "name": "ceph_lv2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "tags": {
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.cluster_name": "ceph",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.crush_device_class": "",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.encrypted": "0",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osd_id": "2",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:                 "ceph.vdo": "0"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             },
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "type": "block",
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:             "vg_name": "ceph_vg2"
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:         }
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]:     ]
Nov 22 10:21:45 compute-0 gallant_blackwell[442735]: }
Nov 22 10:21:45 compute-0 systemd[1]: libpod-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope: Deactivated successfully.
Nov 22 10:21:45 compute-0 podman[442718]: 2025-11-22 10:21:45.870247883 +0000 UTC m=+1.011793385 container died f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:21:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2-merged.mount: Deactivated successfully.
Nov 22 10:21:45 compute-0 podman[442718]: 2025-11-22 10:21:45.939801843 +0000 UTC m=+1.081347325 container remove f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 10:21:45 compute-0 systemd[1]: libpod-conmon-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope: Deactivated successfully.
Nov 22 10:21:45 compute-0 sudo[442614]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:46 compute-0 sudo[442755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:46 compute-0 sudo[442755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:46 compute-0 sudo[442755]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:46 compute-0 sudo[442780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 10:21:46 compute-0 sudo[442780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:46 compute-0 sudo[442780]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:46 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:46 compute-0 ceph-mon[75021]: pgmap v3602: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:46 compute-0 sudo[442805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:46 compute-0 sudo[442805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:46 compute-0 sudo[442805]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:46 compute-0 sudo[442830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -- raw list --format json
Nov 22 10:21:46 compute-0 sudo[442830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.542185766 +0000 UTC m=+0.036986870 container create 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 10:21:46 compute-0 systemd[1]: Started libpod-conmon-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope.
Nov 22 10:21:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.52480085 +0000 UTC m=+0.019601953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.622057259 +0000 UTC m=+0.116858382 container init 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.631565993 +0000 UTC m=+0.126367096 container start 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.635303525 +0000 UTC m=+0.130104688 container attach 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 10:21:46 compute-0 cranky_ritchie[442910]: 167 167
Nov 22 10:21:46 compute-0 systemd[1]: libpod-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope: Deactivated successfully.
Nov 22 10:21:46 compute-0 conmon[442910]: conmon 3c992f8f775e8d2dc47c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope/container/memory.events
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.639724824 +0000 UTC m=+0.134525967 container died 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 10:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3422d41979479a482f661f2bea7c7c53dcd51778466c6765e890217be093bb4-merged.mount: Deactivated successfully.
Nov 22 10:21:46 compute-0 podman[442894]: 2025-11-22 10:21:46.696909399 +0000 UTC m=+0.191710502 container remove 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 10:21:46 compute-0 systemd[1]: libpod-conmon-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope: Deactivated successfully.
Nov 22 10:21:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:46.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:46 compute-0 podman[442933]: 2025-11-22 10:21:46.885989016 +0000 UTC m=+0.061146244 container create 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 10:21:46 compute-0 systemd[1]: Started libpod-conmon-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope.
Nov 22 10:21:46 compute-0 podman[442933]: 2025-11-22 10:21:46.855549068 +0000 UTC m=+0.030706356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 10:21:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 10:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 10:21:46 compute-0 podman[442933]: 2025-11-22 10:21:46.98867479 +0000 UTC m=+0.163832008 container init 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:21:47 compute-0 podman[442933]: 2025-11-22 10:21:47.004440397 +0000 UTC m=+0.179597635 container start 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:21:47 compute-0 podman[442933]: 2025-11-22 10:21:47.008566128 +0000 UTC m=+0.183723356 container attach 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 10:21:47 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:47 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:47 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:47.727+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:47 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:47 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:47 compute-0 agitated_napier[442950]: {
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_id": 1,
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "type": "bluestore"
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     },
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_id": 0,
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "type": "bluestore"
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     },
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_id": 2,
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 10:21:47 compute-0 agitated_napier[442950]:         "type": "bluestore"
Nov 22 10:21:47 compute-0 agitated_napier[442950]:     }
Nov 22 10:21:47 compute-0 agitated_napier[442950]: }
Nov 22 10:21:48 compute-0 systemd[1]: libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Deactivated successfully.
Nov 22 10:21:48 compute-0 systemd[1]: libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Consumed 1.009s CPU time.
Nov 22 10:21:48 compute-0 conmon[442950]: conmon 5ce8f48e30bc59fd4b17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope/container/memory.events
Nov 22 10:21:48 compute-0 podman[442933]: 2025-11-22 10:21:48.007325624 +0000 UTC m=+1.182482822 container died 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 10:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71-merged.mount: Deactivated successfully.
Nov 22 10:21:48 compute-0 podman[442933]: 2025-11-22 10:21:48.069229225 +0000 UTC m=+1.244386443 container remove 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 10:21:48 compute-0 systemd[1]: libpod-conmon-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Deactivated successfully.
Nov 22 10:21:48 compute-0 sudo[442830]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 10:21:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:48 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 10:21:48 compute-0 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 2c4a32c9-d883-450b-bc39-29b2180fdc33 does not exist
Nov 22 10:21:48 compute-0 ceph-mgr[75315]: [progress WARNING root] complete: ev 0f09524e-428e-4749-80c2-e805a939c562 does not exist
Nov 22 10:21:48 compute-0 ceph-mon[75021]: pgmap v3603: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:48 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:48 compute-0 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 10:21:48 compute-0 sudo[442994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 10:21:48 compute-0 sudo[442994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:48 compute-0 sudo[442994]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:48 compute-0 sudo[443019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 10:21:48 compute-0 sudo[443019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 10:21:48 compute-0 sudo[443019]: pam_unix(sudo:session): session closed for user root
Nov 22 10:21:48 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:48 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:48 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:48.776+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:49 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:49 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:49 compute-0 nova_compute[253661]: 2025-11-22 10:21:49.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:49 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1884 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:49 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:49 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:49 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:49 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:49.802+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:50 compute-0 ceph-mon[75021]: pgmap v3604: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:50 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1884 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:50 compute-0 nova_compute[253661]: 2025-11-22 10:21:50.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:50 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:50 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:50.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:50 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:51 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:51 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:51 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:51 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:51.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:51 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:52 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:52 compute-0 ceph-mon[75021]: pgmap v3605: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:21:52
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.meta']
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:21:52 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:21:52 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:52 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:52.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:52 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:53 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:53 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:53 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:53 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:53.819+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:53 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:54 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:54 compute-0 ceph-mon[75021]: pgmap v3606: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:54 compute-0 nova_compute[253661]: 2025-11-22 10:21:54.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:54 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1889 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:54 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:54 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:54 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:54.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:54 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:55 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:55 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1889 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:55 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:55 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:55 compute-0 nova_compute[253661]: 2025-11-22 10:21:55.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:55 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:55 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:55 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:55.810+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:56 compute-0 ceph-mon[75021]: pgmap v3607: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:56 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:56 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:56 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:56.830+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:57 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:21:57 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:21:57 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:57 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:57 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:57.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:58 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:58 compute-0 ceph-mon[75021]: pgmap v3608: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:58 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:58 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:58 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:58.826+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:59 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 10:21:59 compute-0 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 10:21:59 compute-0 nova_compute[253661]: 2025-11-22 10:21:59.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:21:59 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1894 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:21:59 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:21:59 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:59 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:59.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:21:59 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:00 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:00 compute-0 ceph-mon[75021]: pgmap v3609: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:00 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1894 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:00 compute-0 nova_compute[253661]: 2025-11-22 10:22:00.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:00 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:00 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:00 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:00.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:01 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:01 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:01 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:01.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:01 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:01 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:02 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:02 compute-0 ceph-mon[75021]: pgmap v3610: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:02 compute-0 podman[443044]: 2025-11-22 10:22:02.393424856 +0000 UTC m=+0.083243787 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 10:22:02 compute-0 podman[443045]: 2025-11-22 10:22:02.408436845 +0000 UTC m=+0.089294426 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:22:02 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:02.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:02 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:02 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:03 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:03 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:22:03 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:03.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:03 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:03 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:04 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:04 compute-0 ceph-mon[75021]: pgmap v3611: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:04 compute-0 nova_compute[253661]: 2025-11-22 10:22:04.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:04 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1899 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:04 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.559308) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924559386, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1424, "num_deletes": 250, "total_data_size": 1700418, "memory_usage": 1735272, "flush_reason": "Manual Compaction"}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924571700, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1076226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80972, "largest_seqno": 82395, "table_properties": {"data_size": 1071240, "index_size": 2061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15535, "raw_average_key_size": 21, "raw_value_size": 1059315, "raw_average_value_size": 1475, "num_data_blocks": 91, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806826, "oldest_key_time": 1763806826, "file_creation_time": 1763806924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 12486 microseconds, and 6451 cpu microseconds.
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.571818) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1076226 bytes OK
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.571847) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574358) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574395) EVENT_LOG_v1 {"time_micros": 1763806924574387, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1693896, prev total WAL file size 1693896, number of live WAL files 2.
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.575660) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1051KB)], [191(10MB)]
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924575738, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12416241, "oldest_snapshot_seqno": -1}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11273 keys, 9641834 bytes, temperature: kUnknown
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924655555, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9641834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9577750, "index_size": 34787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 301453, "raw_average_key_size": 26, "raw_value_size": 9386396, "raw_average_value_size": 832, "num_data_blocks": 1296, "num_entries": 11273, "num_filter_entries": 11273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.656128) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9641834 bytes
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.658076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(20.5) write-amplify(9.0) OK, records in: 11742, records dropped: 469 output_compression: NoCompression
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.658112) EVENT_LOG_v1 {"time_micros": 1763806924658095, "job": 120, "event": "compaction_finished", "compaction_time_micros": 79979, "compaction_time_cpu_micros": 39672, "output_level": 6, "num_output_files": 1, "total_output_size": 9641834, "num_input_records": 11742, "num_output_records": 11273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924659042, "job": 120, "event": "table_file_deletion", "file_number": 193}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924663720, "job": 120, "event": "table_file_deletion", "file_number": 191}
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.575534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 10:22:04 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:04.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:04 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:04 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:05 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:05 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1899 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:05 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:05 compute-0 nova_compute[253661]: 2025-11-22 10:22:05.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:05 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:05.929+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:05 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:05 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:06 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:06 compute-0 ceph-mon[75021]: pgmap v3612: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:06 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:06.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:06 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:06 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:07 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:07 compute-0 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:07 compute-0 podman[443082]: 2025-11-22 10:22:07.429611751 +0000 UTC m=+0.117525989 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:22:07 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:07.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:07 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:07 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:08 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:08 compute-0 ceph-mon[75021]: pgmap v3613: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:08 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:08.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:08 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:08 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:09 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:09 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:09 compute-0 nova_compute[253661]: 2025-11-22 10:22:09.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:09 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1904 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:09 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:09 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:09.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:09 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:09 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:10 compute-0 sshd-session[443109]: Accepted publickey for zuul from 192.168.122.10 port 40106 ssh2: ECDSA SHA256:tikRPC42/ncVfP2lnh0iO6vjJo8w9amYgweJm9+SStg
Nov 22 10:22:10 compute-0 systemd-logind[822]: New session 55 of user zuul.
Nov 22 10:22:10 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 22 10:22:10 compute-0 sshd-session[443109]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 10:22:10 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:10 compute-0 ceph-mon[75021]: pgmap v3614: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:10 compute-0 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1904 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:10 compute-0 sudo[443113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 22 10:22:10 compute-0 sudo[443113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 10:22:10 compute-0 nova_compute[253661]: 2025-11-22 10:22:10.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:10 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:10.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:10 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:10 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:11 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:11 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:11 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:11.972+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:11 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:11 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:12 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:12 compute-0 ceph-mon[75021]: pgmap v3615: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 10:22:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:22:12 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 10:22:12 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:22:12 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:12 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:12.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:12 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:13 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:13 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 10:22:13 compute-0 ceph-mon[75021]: from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 10:22:13 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23001 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:13 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:13.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:13 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:13 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:14 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23003 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:14 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:14 compute-0 ceph-mon[75021]: pgmap v3616: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:14 compute-0 ceph-mon[75021]: from='client.23001 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:14 compute-0 nova_compute[253661]: 2025-11-22 10:22:14.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:14 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1909 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:14 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 10:22:14 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364009612' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 10:22:14 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:14 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:14 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:14.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:15 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:15 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:15 compute-0 ceph-mon[75021]: from='client.23003 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:15 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1909 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:15 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/364009612' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 10:22:15 compute-0 nova_compute[253661]: 2025-11-22 10:22:15.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:15 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:15.937+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:15 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:15 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:16 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:16 compute-0 ceph-mon[75021]: pgmap v3617: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:16 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:16.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:16 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:16 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:17 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:17 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:17 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:17.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:17 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:17 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:18 compute-0 ovs-vsctl[443398]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 10:22:18 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:18 compute-0 ceph-mon[75021]: pgmap v3618: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:18 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:18 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:18.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:18 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:19 compute-0 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 10:22:19 compute-0 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 10:22:19 compute-0 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 10:22:19 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:19 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:19 compute-0 nova_compute[253661]: 2025-11-22 10:22:19.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:19 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1914 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:19 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:19 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: cache status {prefix=cache status} (starting...)
Nov 22 10:22:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:20.015+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:20 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: client ls {prefix=client ls} (starting...)
Nov 22 10:22:20 compute-0 lvm[443730]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 10:22:20 compute-0 lvm[443730]: VG ceph_vg1 finished
Nov 22 10:22:20 compute-0 nova_compute[253661]: 2025-11-22 10:22:20.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:20 compute-0 nova_compute[253661]: 2025-11-22 10:22:20.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 10:22:20 compute-0 nova_compute[253661]: 2025-11-22 10:22:20.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 10:22:20 compute-0 nova_compute[253661]: 2025-11-22 10:22:20.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 10:22:20 compute-0 lvm[443755]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 10:22:20 compute-0 lvm[443755]: VG ceph_vg2 finished
Nov 22 10:22:20 compute-0 lvm[443764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 10:22:20 compute-0 lvm[443764]: VG ceph_vg0 finished
Nov 22 10:22:20 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23007 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:20 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:20 compute-0 ceph-mon[75021]: pgmap v3619: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:20 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1914 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:20 compute-0 nova_compute[253661]: 2025-11-22 10:22:20.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:20 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 10:22:20 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23009 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:20 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 10:22:20 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:20 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:20 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:20.994+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 10:22:21 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 10:22:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 10:22:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498881504' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 10:22:21 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:21 compute-0 ceph-mon[75021]: from='client.23007 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:21 compute-0 ceph-mon[75021]: from='client.23009 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:21 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3498881504' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 10:22:21 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:21 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:21.641+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 10:22:21 compute-0 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 10:22:21 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 10:22:21 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890965168' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:22:21 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 10:22:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:22.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 10:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2888067176' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: ops {prefix=ops} (starting...)
Nov 22 10:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 10:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295135441' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 10:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195121845' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:22 compute-0 ceph-mon[75021]: pgmap v3620: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:22 compute-0 ceph-mon[75021]: from='client.23015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/890965168' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2888067176' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1295135441' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4195121845' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 10:22:22 compute-0 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 10:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 10:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577998894' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: session ls {prefix=session ls} (starting...)
Nov 22 10:22:22 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 10:22:22 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316279566' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 10:22:22 compute-0 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: status {prefix=status} (starting...)
Nov 22 10:22:22 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:22.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:22 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:22 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:23 compute-0 nova_compute[253661]: 2025-11-22 10:22:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:23 compute-0 nova_compute[253661]: 2025-11-22 10:22:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:23 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23029 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 10:22:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138727374' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:23 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3577998894' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/316279566' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4138727374' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23033 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:23 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 10:22:23 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490473122' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 10:22:23 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:23.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:23 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:23 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 10:22:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959812179' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 10:22:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228817719' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 10:22:24 compute-0 nova_compute[253661]: 2025-11-22 10:22:24.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:24 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1919 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:24 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:24 compute-0 ceph-mon[75021]: from='client.23029 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: pgmap v3621: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:24 compute-0 ceph-mon[75021]: from='client.23033 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2490473122' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/959812179' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1228817719' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1919 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 10:22:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1584366632' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 10:22:24 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695941985' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 10:22:24 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23045 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:24 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:24.992+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 10:22:24 compute-0 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 10:22:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:25.001+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:25 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23047 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:25 compute-0 nova_compute[253661]: 2025-11-22 10:22:25.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:25 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 10:22:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133801750' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 10:22:25 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23051 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:25 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1584366632' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 10:22:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1695941985' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 10:22:25 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/133801750' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 10:22:25 compute-0 nova_compute[253661]: 2025-11-22 10:22:25.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:25 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:25 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:25 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:25 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:25 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 10:22:25 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471810433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 10:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507056786' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.532464981s of 30.654958725s, submitted: 23
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca625a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:55.534010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:56.534139+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:57.534340+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:58.534539+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902456 data_alloc: 218103808 data_used: 2912256
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:59.534717+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:00.534870+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:01.535020+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:02.535177+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:03.535507+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902456 data_alloc: 218103808 data_used: 2912256
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:04.535629+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:05.535802+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.075837135s of 11.098571777s, submitted: 6
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cddde000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc587000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc587000 session 0x55a6ce34ad20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca4fe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce39c5a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cdd994a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:06.535939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:07.536141+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:08.536281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2971718 data_alloc: 218103808 data_used: 2912256
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:09.536422+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:10.536550+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd47000/0x0/0x4ffc00000, data 0xfc17c2/0x1177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:11.536952+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cda81e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:12.537367+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:13.537502+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281100288 unmapped: 57212928 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc586800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc586800 session 0x55a6cdb83c20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce540f00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cca63e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cdddf2c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040700 data_alloc: 234881024 data_used: 12414976
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:14.537657+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd47000/0x0/0x4ffc00000, data 0xfc17c2/0x1177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 282312704 unmapped: 56000512 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd46000/0x0/0x4ffc00000, data 0xfc17eb/0x1178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cb88e960
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce0f92c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cb8b10e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:15.537895+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce0f9e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cda80d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:16.538127+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:17.538620+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:18.538792+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062351 data_alloc: 234881024 data_used: 12414976
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:19.538999+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:20.539155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:21.539329+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:22.539477+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.506616592s of 17.300605774s, submitted: 33
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cca4e960
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:23.539665+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281280512 unmapped: 57032704 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa7000/0x0/0x4ffc00000, data 0x125f847/0x1417000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147521 data_alloc: 234881024 data_used: 12566528
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:24.539865+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 53805056 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce17ac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:25.540048+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 53338112 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:26.540331+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf21000/0x0/0x4ffc00000, data 0x1de5847/0x1f9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:27.540464+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:28.540643+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf21000/0x0/0x4ffc00000, data 0x1de5847/0x1f9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170863 data_alloc: 234881024 data_used: 14196736
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:29.540855+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:30.541006+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:31.541147+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf00000/0x0/0x4ffc00000, data 0x1e06847/0x1fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:32.542665+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:33.542779+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167159 data_alloc: 234881024 data_used: 14200832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:34.542980+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:35.543199+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:36.543362+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d365ac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.227722168s of 13.740644455s, submitted: 87
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6d365ac00 session 0x55a6d1939860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce06e3c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce5412c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf00000/0x0/0x4ffc00000, data 0x1e06847/0x1fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cb89d4a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cddbb2c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:37.543492+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 49561600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:38.543625+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3283031 data_alloc: 234881024 data_used: 14934016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:39.543748+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:40.543865+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbe2f000 session 0x55a6cddbb0e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:41.543989+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddbbc20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:42.544127+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb221000/0x0/0x4ffc00000, data 0x2ae4847/0x2c9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:43.544793+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cddba780
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cddba3c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb221000/0x0/0x4ffc00000, data 0x2ae4847/0x2c9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 234881024 data_used: 14938112
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:44.544915+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 50438144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:45.545103+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 51093504 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:46.545275+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb1fe000/0x0/0x4ffc00000, data 0x2b08847/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:47.545627+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb1fe000/0x0/0x4ffc00000, data 0x2b08847/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cca06c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cca06c00 session 0x55a6ce34a5a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:48.546050+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdaed000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _renew_subs
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.964159012s of 12.413674355s, submitted: 64
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdaed000 session 0x55a6cbbd45a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344633 data_alloc: 234881024 data_used: 22663168
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:49.546430+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce1df680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca5c960
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cc9eac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290766848 unmapped: 47546368 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb1fa000/0x0/0x4ffc00000, data 0x2b0a3c4/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:50.546781+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298745856 unmapped: 39567360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:51.546985+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:52.547223+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:53.547685+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436237 data_alloc: 234881024 data_used: 23756800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:54.548014+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:55.548228+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:56.548415+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:57.548579+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:58.548801+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:59.549026+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436765 data_alloc: 234881024 data_used: 23756800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:00.549205+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:01.549369+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:02.549511+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.877428532s
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.877429485s
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.751138687s of 13.710234642s, submitted: 8
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.877703667s, txc = 0x55a6cbc8a900
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.878297806s, txc = 0x55a6cc8b7500
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:03.549855+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 37093376 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.962179661s, txc = 0x55a6cbd4db00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:04.550029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421433 data_alloc: 234881024 data_used: 23797760
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 44507136 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea927000/0x0/0x4ffc00000, data 0x33de3c4/0x3597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:05.550223+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:06.550570+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:07.550809+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:08.551161+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 42844160 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:09.551429+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451777 data_alloc: 234881024 data_used: 23818240
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 41803776 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:10.551593+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0x36fc3c4/0x38b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41730048 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:11.551769+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41730048 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:12.551954+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 41664512 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:13.552097+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 41656320 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea563000/0x0/0x4ffc00000, data 0x37a23c4/0x395b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.929909229s of 11.050208092s, submitted: 33
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:14.552234+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453977 data_alloc: 234881024 data_used: 23797760
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297820160 unmapped: 40493056 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:15.552388+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297828352 unmapped: 40484864 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea4b1000/0x0/0x4ffc00000, data 0x384e3c4/0x3a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:16.552543+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297828352 unmapped: 40484864 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:17.552693+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297893888 unmapped: 40419328 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:18.553050+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 40304640 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:19.553231+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460069 data_alloc: 234881024 data_used: 23945216
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:20.553454+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:21.553590+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:22.553715+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:23.553858+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:24.554188+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467881 data_alloc: 234881024 data_used: 23949312
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:25.554393+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34b860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddbbc20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cca06c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882210732s of 12.113567352s, submitted: 43
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:26.554595+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:27.555355+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cca06c00 session 0x55a6ce06f2c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:28.555524+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:29.555674+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309917 data_alloc: 234881024 data_used: 16707584
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:30.555828+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x2c543c4/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:31.555985+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6ce22f4a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6d1938b40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce34a1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:32.556499+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x2c543c4/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:33.556755+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:34.556989+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:35.557254+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:36.557515+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:37.557729+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:38.557889+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:39.558115+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:40.558422+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:41.558686+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:42.558874+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:43.559157+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:44.559449+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:45.559795+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:46.559969+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:47.560168+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:48.560404+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:49.560597+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:50.560864+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:51.561032+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:52.561190+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:53.561371+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:54.561513+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:55.561701+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:56.561839+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:57.561984+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:58.562134+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:59.562342+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:00.562504+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:01.562645+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:02.562925+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:03.563085+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:04.563227+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:05.563464+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:06.563651+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:07.563828+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:08.563979+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:09.564144+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:10.564302+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:11.564582+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:12.565111+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:13.565276+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:14.565407+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:15.565631+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:16.565813+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:17.566096+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 44294144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:18.566370+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 44294144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:19.566506+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:20.566646+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:21.566785+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:22.566925+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:23.567022+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:24.567175+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:25.567327+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:26.567455+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:27.567589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:28.567663+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 44269568 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:29.567774+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.296806335s of 63.591075897s, submitted: 21
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189342 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 35684352 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6d1938d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbd20800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce34ba40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:30.567881+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce5403c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 44105728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca2fc20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cdd99860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd49000/0x0/0x4ffc00000, data 0x1fbc3c4/0x2175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:31.568018+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 44105728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:32.568195+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 44040192 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:33.568347+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 44040192 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:34.568474+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167422 data_alloc: 218103808 data_used: 11350016
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6ce06eb40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:35.568638+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:36.568762+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:37.568900+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:38.569068+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:39.569208+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236484 data_alloc: 234881024 data_used: 20135936
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:40.569394+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:41.569580+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:42.569720+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:43.569889+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:44.570017+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236484 data_alloc: 234881024 data_used: 20135936
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:45.570174+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:46.570371+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:47.570514+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.954549789s of 17.778802872s, submitted: 21
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 37060608 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:48.570649+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 37027840 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:49.570810+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339572 data_alloc: 234881024 data_used: 22122496
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:50.571038+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:51.571199+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:52.571683+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:53.571910+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:54.572090+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339572 data_alloc: 234881024 data_used: 22122496
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:55.572399+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:56.572589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:57.572750+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:58.572922+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:59.573065+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339892 data_alloc: 234881024 data_used: 22130688
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:00.573218+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:01.573437+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:02.573808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:03.574191+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:04.574378+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339892 data_alloc: 234881024 data_used: 22130688
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:05.574647+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6ce39c5a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6cca4fe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce34ad20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddde000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:06.574898+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.562274933s of 19.042518616s, submitted: 92
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cca625a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6cb890d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce06fe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6ce06fa40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cb89c000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:07.575064+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:08.575226+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eac1e000/0x0/0x4ffc00000, data 0x2cd4459/0x2e90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:09.575442+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360913 data_alloc: 234881024 data_used: 22130688
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:10.575566+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1b11800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6cb8c3e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdacf400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6cb8b1a40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:11.575705+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eac1e000/0x0/0x4ffc00000, data 0x2cd4459/0x2e90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdacf400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6d1938f00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 35938304 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbbd4d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:12.575816+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 36962304 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:13.575987+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:14.576134+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387642 data_alloc: 234881024 data_used: 23896064
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:15.576369+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:16.576520+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:17.576687+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:18.576808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:19.576951+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.984582901s of 13.379371643s, submitted: 33
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387946 data_alloc: 234881024 data_used: 23900160
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:20.577089+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:21.577341+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:22.577486+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:23.577653+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:24.577816+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397636 data_alloc: 234881024 data_used: 23916544
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 37519360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:25.577972+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 37511168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:26.578525+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:27.579199+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:28.579351+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:29.579517+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420022 data_alloc: 234881024 data_used: 24059904
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:30.579993+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:31.580131+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:32.580365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.431725502s of 13.091328621s, submitted: 67
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:33.580858+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:34.581185+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414874 data_alloc: 234881024 data_used: 24064000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:35.581457+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea997000/0x0/0x4ffc00000, data 0x2f5a47c/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 35856384 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:36.581613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 35856384 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:37.581766+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca5cd20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6ce34a780
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2e000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2e000 session 0x55a6cdb830e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:38.581939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:39.582083+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3341272 data_alloc: 234881024 data_used: 20606976
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:40.582220+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:41.582368+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf48000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:42.582484+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cbc37680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.537497520s of 10.164901733s, submitted: 43
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cbc372c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 38273024 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:43.582637+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbc365a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:44.582783+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:45.583006+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:46.583144+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:47.583380+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:48.583515+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:49.583686+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:50.583915+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:51.584158+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:52.584363+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:53.584583+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:54.584788+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:55.584958+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:56.585098+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:57.585278+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:58.585487+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:59.585641+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:00.585794+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:01.585949+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:02.586108+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:03.586276+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:04.586383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:05.586577+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:06.586725+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:07.586910+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:08.587049+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:09.587303+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 40558592 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca5d4a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbe2f400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cb89de00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce0f8780
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:10.587479+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6ce06f680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 40542208 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.186452866s of 27.352705002s, submitted: 34
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb7b8000/0x0/0x4ffc00000, data 0x213c3d4/0x22f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [0,0,0,0,3,0,8])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce22e780
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cb8c3e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdacf400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6ce06fe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddde000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cdd99860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:11.587644+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x25003d4/0x26ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:12.587787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:13.587913+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:14.588099+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x25003d4/0x26ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3232119 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:15.588272+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:16.588422+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:17.588583+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34ba40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:18.588739+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 43982848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce2ea800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:19.588939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 43982848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324944 data_alloc: 234881024 data_used: 22831104
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:20.589128+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:21.589458+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:22.589594+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.318983078s of 12.736421585s, submitted: 36
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6d1938b40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:23.589720+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce2ea800 session 0x55a6cddbbc20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:24.589876+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324812 data_alloc: 234881024 data_used: 22831104
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:25.592559+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:26.592831+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:27.592984+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:28.593167+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:29.593335+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324912 data_alloc: 234881024 data_used: 22835200
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 144K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.92 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3530 writes, 14K keys, 3530 commit groups, 1.0 writes per commit group, ingest: 17.14 MB, 0.03 MB/s
                                           Interval WAL: 3530 writes, 1327 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:30.594101+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:31.594457+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:32.594787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cb8912c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cda80d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:33.594992+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:34.595357+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324780 data_alloc: 234881024 data_used: 22835200
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:35.595581+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:36.595910+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:37.596108+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:38.596622+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.976135254s of 15.380604744s, submitted: 3
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:39.596922+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324912 data_alloc: 234881024 data_used: 22835200
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:40.597230+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:41.597478+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:42.597880+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce06f2c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce1dfe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce56f000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:43.598163+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:44.613948+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293765120 unmapped: 48226304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce56f000 session 0x55a6cbbd5680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:45.614148+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:46.614297+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:47.614580+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:48.614743+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:49.614911+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:50.615019+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:51.615168+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:52.615350+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:53.615542+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:54.615734+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:55.615959+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:56.616132+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:57.616304+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:58.616487+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:59.616620+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:00.616830+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:01.617003+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:02.617177+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293797888 unmapped: 48193536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:03.617404+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 48185344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:04.617579+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 48185344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:05.617802+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:06.617968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:07.618190+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:08.618394+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:09.618564+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:10.618720+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:11.618899+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:12.619162+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:13.619458+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:14.619662+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:15.619831+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:16.620002+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:17.620139+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:18.620281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:19.620435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:20.620576+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:21.620702+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:22.620860+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:23.621009+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:24.621169+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:25.621390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:26.621598+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:27.621762+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:28.621994+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 48152576 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:29.622139+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:30.622393+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:31.622534+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:32.622725+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:33.622873+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:34.623034+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:35.623205+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:36.623393+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:37.623556+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:38.623766+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:39.623924+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:40.624071+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:41.624220+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:42.624418+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:43.624611+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:44.624846+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:45.625033+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:46.625259+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:47.625419+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:48.625584+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:49.625776+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:50.625950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 70.865798950s of 72.201339722s, submitted: 33
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6d19392c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cca4c1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:51.626070+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb88ed20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:52.626230+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:53.626434+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 48095232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:54.626589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:55.626747+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:56.626877+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cddba3c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:57.627039+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:58.627205+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:59.627373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:00.627547+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.929806709s of 10.555549622s, submitted: 96
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cda80000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:01.627671+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:02.627809+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:03.627974+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:04.628138+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:05.628348+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:06.628443+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce34a1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:07.628594+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:08.628757+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:09.628917+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:10.629075+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.821827888s of 10.002042770s, submitted: 4
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb88e780
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:11.629209+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:12.629391+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:13.629532+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:14.629724+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:15.629854+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cb88e1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:16.629997+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:17.630143+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:18.630388+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:19.630557+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 48046080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:20.630689+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 48046080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.985437393s of 10.002745628s, submitted: 5
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cca4f0e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:21.630816+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 48037888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:22.630948+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 48029696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:23.631108+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:24.631410+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:25.631626+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:26.631783+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce17ac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cca11e00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:27.631943+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:28.632051+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:29.632244+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:30.632436+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981091499s of 10.003016472s, submitted: 4
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce17ac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:31.632601+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cdb832c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:32.632726+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:33.632870+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:34.633004+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:35.633197+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:36.633326+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cc6350e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:37.633449+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:38.634238+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:39.634589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 47996928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:40.635618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 47996928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981596947s of 10.001847267s, submitted: 3
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6ce39c3c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:41.635774+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:42.636693+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:43.637053+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:44.637222+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:45.637541+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:46.637687+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 47980544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:47.637859+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cb8c2d20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdb5d400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce1521e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:48.638062+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbbd5860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:49.638392+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:50.638792+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:51.638942+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:52.639081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:53.639246+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:54.639460+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:55.639699+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:56.639935+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:57.640173+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:58.640455+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:59.640667+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:00.640907+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:01.641046+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:02.641437+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:03.641634+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:04.641827+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:05.642022+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:06.642185+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:07.642417+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:08.642602+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:09.642823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:10.643427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:11.643779+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:12.644023+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:13.644398+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:14.644576+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:15.644938+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:16.645126+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:17.645370+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:18.645649+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:19.645818+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:20.645948+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:21.646127+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:22.646350+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:23.646527+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:24.646726+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:25.647031+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:26.647302+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:27.647709+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:28.647939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddba3c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cca4c1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6d19392c0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce17ac00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cbbd5680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.754524231s of 47.820213318s, submitted: 29
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:29.648111+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,5])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce1dfe00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddbbc20
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6d1938b40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34ba40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce176c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce176c00 session 0x55a6cdd99860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:30.648462+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194964 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec9bf000/0x0/0x4ffc00000, data 0x1f773a1/0x212f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:31.648632+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:32.648822+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:33.648961+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddde000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:34.649124+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec99b000/0x0/0x4ffc00000, data 0x1f9b3a1/0x2153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:35.649309+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252245 data_alloc: 234881024 data_used: 17235968
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:36.649568+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:37.649771+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb89de00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cdb87680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:38.649937+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec99b000/0x0/0x4ffc00000, data 0x1f9b3a1/0x2153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cc634000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:39.650172+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:40.650460+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:41.650667+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:42.650855+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:43.651051+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:44.651248+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:45.651426+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:46.651577+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:47.651760+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:48.651931+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:49.652099+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:50.652240+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:51.652528+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:52.652766+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:53.652986+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:54.653199+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:55.653396+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:56.653559+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:57.653741+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:58.653940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:59.654125+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:00.654291+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.23045 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.23047 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: pgmap v3622: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.23051 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1471810433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 10:22:26 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/507056786' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:01.654490+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:02.654650+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:03.654857+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:04.655079+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:05.655242+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:06.655457+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:07.655644+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:08.655803+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:09.655940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:10.656069+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:11.656234+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:12.656390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:13.656571+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:14.656733+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:15.656903+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:16.657068+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:17.657229+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:18.657377+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:19.657539+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:20.657786+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:21.658779+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:22.658957+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:23.659804+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:24.660026+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:25.660233+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:26.660425+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:27.661180+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:28.661331+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:29.661496+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:30.661699+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:31.661864+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:32.662067+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:33.662221+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:34.662418+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:35.662629+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:36.662858+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:37.663043+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:38.663188+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:39.663343+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:40.663515+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6ce176c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:41.663675+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:42.663898+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:43.664028+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:44.664186+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:45.664408+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:46.664636+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:47.669611+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:48.673732+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:49.677082+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:50.680592+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:51.684264+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:52.687370+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:53.687681+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:54.690076+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:55.692049+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:56.693888+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:57.694587+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:58.694919+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:59.696266+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:00.696918+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:01.697299+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:02.698375+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:03.698681+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:04.699424+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:05.699667+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:06.700269+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:07.700877+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:08.701118+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:09.701365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:10.701620+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:11.701876+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:12.702067+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:13.702287+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:14.702482+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:15.702793+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:16.703017+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:17.703175+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:18.703345+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:19.703494+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:20.704823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:21.705010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:22.705212+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:23.705390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:24.706226+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:25.707211+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:26.707383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:27.707672+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:28.707868+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:29.708012+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:30.708370+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:31.708726+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:32.709103+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:33.709427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:34.715095+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:35.715280+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:36.715560+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:37.715737+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:38.715950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:39.716171+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:40.716427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:41.716651+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:42.716867+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:43.717064+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:44.717359+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:45.717558+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:46.717731+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:47.717902+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:48.718084+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:49.718253+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:50.718475+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:51.718664+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:52.719984+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:53.720623+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:54.720799+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:55.721266+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:56.721598+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:57.721968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:58.722206+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:59.722446+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:00.722765+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:01.722950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:02.723156+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:03.723336+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:04.723523+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:05.723789+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:06.723998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:07.724293+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:08.724611+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:09.724766+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:10.724993+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:11.725187+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:12.725393+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:13.725692+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:14.725865+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:15.726033+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:16.726177+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:17.726361+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:18.726538+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:19.726777+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:20.726978+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:21.727122+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:22.727280+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:23.727427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:24.727597+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:25.727748+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:26.727885+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:27.728062+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:28.728229+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:29.728511+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:30.728681+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:31.728844+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:32.728990+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:33.729167+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:34.729347+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:35.729517+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:36.729657+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:37.729798+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:38.729976+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:39.730161+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:40.730383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:41.730521+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:42.730672+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:43.730883+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:44.731084+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:45.731411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:46.731585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:47.731737+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:48.731904+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:49.732119+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:50.732270+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:51.732402+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:52.732574+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:53.732779+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:54.733025+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:55.733253+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 47489024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:56.733508+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 47489024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:57.735375+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:58.735561+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:59.736663+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:00.737390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:01.737932+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:02.738390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:03.738627+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:04.738780+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:05.739383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:06.739871+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:07.740701+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:08.740932+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:09.741670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:10.742155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:11.742736+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:12.743397+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:13.743650+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:14.744216+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:15.744714+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:16.745166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:17.745496+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:18.745651+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:19.745872+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:20.746155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:21.746408+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:22.746541+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:23.746746+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:24.746917+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:25.747114+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:26.747274+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:27.747501+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:28.747706+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:29.747847+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:30.748022+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:31.748138+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:32.748305+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:33.748458+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:34.748584+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:35.748734+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:36.748944+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:37.749090+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cda86400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:38.749227+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:39.749417+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:40.749637+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:41.749808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:42.749933+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:43.750100+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:44.750209+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:45.750365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:46.750535+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:47.750692+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:48.750861+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:49.751010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:50.751165+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:51.751306+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:52.751413+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:53.751541+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:54.751672+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:55.751889+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:56.752044+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:57.752157+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:58.752373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:59.752523+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:00.752660+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:01.755594+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:02.760202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:03.760893+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:04.764116+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:05.765886+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:06.767281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:07.768875+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:08.769460+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:09.770841+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:10.771012+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:11.772067+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:12.773010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:13.773173+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:14.773474+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:15.773684+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:16.773978+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:17.774287+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:18.774507+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:19.774741+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:20.774990+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:21.775217+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:22.775368+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:23.775615+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:24.775877+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:25.776112+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:26.776397+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:27.776540+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:29.288347+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [3])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:30.288760+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:31.288994+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:32.289202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:33.289363+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:34.289492+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:35.289599+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:36.289834+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:37.290003+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:38.290166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:39.290358+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:40.290505+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:41.290644+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:42.290740+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:43.290897+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:44.291021+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:45.291162+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d2128400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:46.291392+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:47.291540+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 47939584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:48.291672+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:49.291833+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:50.291999+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:51.292166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:52.292341+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:53.292529+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:54.292672+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:55.292827+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:56.293047+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:57.293202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:58.293405+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:59.293550+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:00.293706+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:01.293867+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:02.294073+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:03.294210+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:04.294391+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:05.294570+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 47882240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:06.295531+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 47882240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:07.295714+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:08.296029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:09.296202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:10.296836+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:11.297075+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:12.297299+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:13.297572+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:14.297754+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:15.297962+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:16.298258+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:17.298524+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:18.298820+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:19.299027+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:20.299606+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:21.299763+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:22.299907+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:23.300051+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:24.300176+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:25.300359+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:26.300621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:27.300860+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 47841280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:28.334668+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 47841280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:29.334871+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:30.335087+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:31.335240+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:32.335522+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:33.335732+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:34.335939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:35.336260+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:36.336533+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:37.336747+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:38.336940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:39.337081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:40.337253+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:41.337435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:42.337613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:43.337822+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:44.338081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:45.338267+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:46.338513+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:47.338696+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d144e000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:48.338918+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:49.339197+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:50.339420+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:51.339630+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:52.339833+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:53.339994+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:54.340165+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:55.340359+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:56.340517+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:57.340653+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:58.340793+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:59.340995+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:00.341306+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:01.341690+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:02.341950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:03.342170+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:04.342369+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:05.342517+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:06.342933+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:07.343268+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:08.343423+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:09.343550+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:10.343781+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:11.343954+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:12.344091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:13.344263+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:14.344412+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:15.344554+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:16.344794+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:17.344977+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:18.345166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:19.345435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:20.345668+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:21.345923+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:22.346091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:23.346413+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:24.346547+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:25.346709+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:26.346943+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:27.347093+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:28.347229+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:29.347401+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:30.347641+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 417 writes, 910 keys, 417 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 417 writes, 202 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets getting new tickets!
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:31.347953+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _finish_auth 0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:31.349024+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:32.348153+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:33.348415+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:34.348577+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:35.348828+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:36.348979+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:37.349165+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:38.349377+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:39.349602+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:40.349754+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:41.349968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:42.351169+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:43.352302+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:44.353405+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:45.354421+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:46.355391+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:47.356232+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:48.356660+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:49.357399+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:50.357685+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cef83400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:51.357945+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:52.358146+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:53.358409+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:54.358854+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:55.359303+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:56.359780+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:57.360145+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:58.360553+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:59.360847+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:00.361129+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:01.361365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:02.361670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:03.361868+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:04.362047+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:05.362201+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:06.362515+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:07.362742+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:08.362932+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:09.363130+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:10.365289+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:11.365532+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:12.365706+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:13.365874+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:14.366092+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:15.366273+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:16.366581+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:17.366774+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:18.366930+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:19.367204+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:20.367430+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:21.367582+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:22.367776+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:23.367978+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:24.368185+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:25.368410+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:26.368678+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:27.368862+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:28.369029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:29.369228+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:30.369442+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:31.369643+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:32.369841+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:33.370017+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:34.370177+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:35.370436+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:36.370662+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:37.370814+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:38.371061+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:39.371351+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:40.371538+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:41.371689+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:42.371821+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:43.372083+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:44.372283+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:45.372426+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:46.372649+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:47.372787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:48.372947+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:49.373091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:50.373399+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 501.154937744s of 501.844177246s, submitted: 40
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:51.373532+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:52.373687+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:53.373873+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:54.374183+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:55.374366+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:56.374522+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:57.374683+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:58.374875+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:59.375113+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:00.375460+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:01.375682+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:02.375907+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:03.376113+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:04.376418+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:05.376767+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:06.377014+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:07.377241+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:08.377466+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:09.377652+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:10.377909+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:11.378153+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:12.378423+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:13.378589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:14.378776+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:15.378921+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:16.379097+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:17.379219+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:18.379411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:19.379545+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:20.379691+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:21.379933+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:22.380155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:23.380591+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:24.380894+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:25.381211+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:26.381421+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:27.381621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:28.381794+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:29.382005+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:30.382195+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:31.382397+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:32.382578+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:33.382780+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:34.383083+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:35.383284+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:36.383615+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:37.383795+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:38.384298+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:39.384506+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:40.384879+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:41.385010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:42.385503+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:43.385744+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:44.386018+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:45.386183+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:46.386452+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:47.386680+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:48.386969+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:49.387130+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:50.387465+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:51.387767+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:52.387998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:53.388162+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:54.388427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:55.388618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:56.388857+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:57.388938+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:58.389093+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:59.389221+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:00.389411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:01.389643+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:02.389839+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:03.390029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:04.390208+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:05.390354+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:06.390550+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:07.390741+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:08.390904+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:09.391096+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:10.391431+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:11.391573+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:12.391709+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:13.391940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:14.392091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:15.392247+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:16.392461+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:17.392621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:18.392760+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:19.392925+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:20.393071+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:21.393233+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:22.393391+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:23.393528+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:24.393670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:25.393808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:26.393969+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:27.394127+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:28.394271+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 47316992 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:29.394426+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:30.394571+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:31.394718+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:32.394879+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:33.395065+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:34.395212+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:35.395365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:36.395528+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:37.395695+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:38.395823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:39.396017+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:40.396187+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:41.396359+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:42.396499+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:43.396644+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294715392 unmapped: 47276032 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:44.396792+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294715392 unmapped: 47276032 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:45.396955+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:46.397157+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:47.397363+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:48.397532+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:49.397742+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:50.397951+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:51.398140+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:52.398281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:53.398431+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:54.398611+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:55.398739+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:56.398928+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:57.399067+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:58.399256+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:59.399419+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:00.399574+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:01.399696+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:02.399824+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:03.399971+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:04.400114+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:05.400257+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:06.400497+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:07.400660+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 47235072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:08.400833+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294764544 unmapped: 47226880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:09.400998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:10.401160+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:11.401373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:12.401538+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:13.401689+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:14.401858+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:15.402061+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:16.402385+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:17.402582+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:18.402749+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 47202304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:19.402911+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:20.403010+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:21.403154+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:22.403387+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:23.403560+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 47185920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:24.403708+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 47185920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:25.403896+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:26.404123+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:27.404435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:28.405255+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:29.405735+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:30.407428+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:31.407568+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:32.409752+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:33.409961+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 47161344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:34.410095+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 47153152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:35.410279+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:36.411142+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:37.411289+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:38.411668+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:39.411810+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 47136768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:40.412008+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 47136768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:41.412239+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:42.412411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:43.412561+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:44.412698+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:45.412872+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:46.413108+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:47.413275+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:48.413405+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:49.413576+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:50.413724+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:51.413893+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:52.414105+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:53.414274+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:54.414451+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:55.414597+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:56.414804+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:57.414950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:58.415141+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 47095808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:59.415380+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:00.415610+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:01.415746+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:02.415968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:03.416137+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:04.416419+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:05.416580+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 47079424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:06.416788+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:07.416930+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:08.417164+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:09.417307+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:10.417529+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:11.417697+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:12.417881+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 47063040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:13.418086+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:14.418252+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:15.418405+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:16.418621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:17.418783+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:18.418997+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:19.419162+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:20.419342+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:21.419516+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:22.419697+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:23.419885+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:24.420133+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:25.420296+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:26.420512+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:27.420714+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 47030272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:28.420913+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 47030272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:29.421071+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:30.421269+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:31.421478+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:32.421687+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 47013888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:33.421836+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 47013888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:34.422055+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:35.422287+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:36.422618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:37.422786+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:38.422936+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:39.423095+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:40.423436+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:41.423589+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:42.424220+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:43.424379+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:44.424651+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:45.424798+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 46981120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:46.424998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:47.425251+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:48.425498+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:49.425657+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:50.425821+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:51.425969+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:52.426142+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:53.426307+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:54.426477+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:55.426612+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:56.426835+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:57.427025+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:58.427233+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:59.427407+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:00.427624+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:01.427753+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:02.427913+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:03.428123+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:04.428409+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:05.428826+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:06.429220+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:07.429452+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 46931968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:08.429772+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 46931968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:09.430050+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 46923776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:10.430171+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:11.430391+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:12.430510+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:13.430756+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:14.430942+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:15.431108+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:16.431379+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:17.431618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:18.431834+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:19.431995+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:20.432136+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:21.432383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 46899200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:22.432529+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 46899200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:23.432666+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:24.432855+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:25.432987+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:26.433204+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:27.433394+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:28.433540+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:29.433699+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:30.433835+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:31.433962+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:32.434146+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:33.434286+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:34.434448+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:35.434604+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:36.434824+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:37.435005+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:38.435881+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:39.436431+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:40.437103+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:41.437535+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:42.438055+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:43.438187+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:44.438349+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:45.438793+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:46.438970+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:47.439091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:48.439220+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:49.439405+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:50.439570+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 46841856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:51.440164+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:52.440308+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:53.440472+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:54.440613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:55.440773+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:56.441149+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:57.441342+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 46825472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:58.441542+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:59.441780+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:00.442081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:01.442297+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:02.442585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:03.442722+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:04.442885+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:05.443029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 46800896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:06.443256+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:07.443492+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:08.443662+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:09.443791+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:10.443943+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:11.444081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 46784512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:12.444264+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 46784512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:13.444432+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:14.444585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:15.444760+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:16.445007+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:17.445155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:18.445362+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:19.445538+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:20.445703+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:21.445848+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:22.446230+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 46743552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:23.446368+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:24.446550+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:25.446684+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:26.446937+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:27.447127+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:28.447427+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:29.447650+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:30.447803+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:31.447994+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:32.448138+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:33.448268+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:34.448412+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:35.448613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:36.448767+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:37.448940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:38.449077+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:39.449276+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:40.449420+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:41.449544+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:42.449936+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:43.450423+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:44.450844+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:45.451296+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:46.451876+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:47.452049+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:48.452309+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:49.452653+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:50.452929+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:51.453173+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:52.453390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:53.453653+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 46669824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:54.453883+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:55.454126+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:56.454433+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:57.454613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:58.454950+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:59.455135+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:00.455298+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:01.455496+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:02.455716+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:03.455932+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:04.456155+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:05.456419+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:06.456625+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:07.456813+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:08.456940+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:09.457071+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 46637056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:10.457210+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:11.457376+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:12.457514+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:13.457650+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:14.457806+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:15.457978+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:16.458166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:17.458353+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:18.458627+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:19.458883+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:20.459114+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:21.459371+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:22.459585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:23.459808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:24.460098+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:25.460401+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:26.460680+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:27.460884+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:28.461159+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:29.461452+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:30.461685+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:31.461933+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:32.462249+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:33.462468+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:34.462859+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:35.463158+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:36.463586+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:37.463787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:38.463967+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:39.464188+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 46579712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:40.464412+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 46579712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:41.464704+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:42.464896+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:43.465184+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:44.465438+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:45.465613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:46.465899+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:47.466099+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:48.466299+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:49.466584+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:50.466762+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:51.467065+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:52.467307+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:53.467641+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:54.467949+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:55.468193+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:56.468497+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:57.468751+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:58.469044+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:59.469300+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:00.469633+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:01.469943+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:02.470242+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 46538752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:03.470527+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:04.470797+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:05.471107+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:06.471684+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:07.472867+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:08.473021+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:09.473252+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:10.473455+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:11.473607+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 46514176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:12.473840+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 46514176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:13.474006+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:14.474204+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:15.474396+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:16.474584+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:17.474730+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:18.474985+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:19.475222+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:20.475435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:21.475598+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:22.475733+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:23.475977+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:24.476195+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:25.476406+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:26.476654+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 46481408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:27.476879+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 46473216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:28.477071+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 46465024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:29.477239+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 46465024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:30.477399+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 46456832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:31.477587+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:32.477817+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:33.477989+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:34.478122+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:35.478308+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:36.478585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 46440448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:37.478710+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:38.478874+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:39.479039+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:40.479202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:41.479371+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:42.479504+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 46424064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:43.479647+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:44.479787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:45.479923+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:46.480351+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:47.480495+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:48.480632+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:49.480769+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:50.481002+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:51.481217+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:52.481437+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:53.481625+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:54.481772+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:55.481906+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:56.482208+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:57.482397+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:58.482617+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:59.482818+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:00.482973+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:01.483130+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46374912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:02.483307+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:03.483511+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:04.483663+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:05.483796+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:06.484030+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:07.484203+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:08.484389+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:09.484588+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:10.484762+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:11.484915+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:12.485077+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:13.485206+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:14.485369+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:15.485621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:16.485838+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:17.485964+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:18.486115+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:19.486252+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:20.486402+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:21.486618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:22.486776+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:23.486985+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:24.487158+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 46317568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:25.487381+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:26.487599+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:27.487833+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:28.488063+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:29.488268+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:30.488484+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:31.488638+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:32.488837+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:33.488990+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:34.489118+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:35.489307+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:36.489624+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:37.489815+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 46292992 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:38.490038+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:39.490213+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 46284800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:40.490455+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 46276608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:41.490641+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:42.490827+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:43.490982+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:44.491218+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:45.491365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:46.491581+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:47.491834+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:48.492051+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:49.492230+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:50.492410+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:51.492570+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:52.492709+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:53.492892+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:54.493075+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:55.493279+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 46243840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:56.493578+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 46243840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:57.493749+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 46235648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:58.493928+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 46235648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:59.494130+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:00.494418+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:01.494621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:02.494784+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:03.494979+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:04.495176+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:05.495393+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 46219264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:06.495697+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 46219264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:07.495893+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:08.496051+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:09.496193+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d144fc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:10.496373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:11.496541+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:12.496768+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:13.496935+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:14.497068+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:15.497201+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:16.497377+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:17.497585+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:18.497727+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 46186496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:19.497856+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 46178304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:20.498153+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 46178304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:21.498303+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 46170112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:22.498473+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:23.498656+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:24.499475+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:25.499715+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:26.499928+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:27.500104+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:28.500242+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:29.500392+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:30.500493+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:31.500603+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:32.500744+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:33.500862+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:34.501039+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:35.501287+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:36.501579+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:37.501797+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:38.501970+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:39.502105+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:40.502256+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:41.502457+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:42.502613+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:43.502756+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:44.502914+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:45.503019+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:46.503150+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:47.503249+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:48.503389+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:49.503582+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:50.503735+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.833068848s of 600.166870117s, submitted: 90
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:51.503912+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 46096384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:52.504062+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 46096384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:53.504281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 46071808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:54.504502+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:55.504721+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:56.504996+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:57.505211+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:58.505490+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:59.505863+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:00.506199+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:01.506791+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:02.507365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:03.507807+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:04.508138+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:05.508395+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:06.508605+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:07.508733+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:08.509640+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:09.509885+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:10.510041+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:11.510294+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d061b800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:12.510559+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:13.510850+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:14.511074+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:15.511253+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:16.511495+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:17.511691+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:18.512607+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:19.512827+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:20.512983+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:21.513161+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:22.513373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:23.513588+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:24.513787+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:25.513999+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:26.514186+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:27.514382+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:28.514576+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:29.514718+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:30.514868+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:31.515078+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:32.515198+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:33.515284+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:34.515399+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:35.515572+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:36.515861+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:37.515996+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cda86400 session 0x55a6cb2ab4a0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbba2c00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:38.516118+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:39.516301+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:40.516462+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:41.517374+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:42.517548+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:43.517735+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:44.518002+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:45.518166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:46.518408+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:47.518566+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:48.518729+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:49.518895+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:50.519045+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:51.519263+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:52.519411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:53.519563+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:54.519824+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:55.519969+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:56.520149+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:57.520373+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 45973504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:58.520552+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 45973504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:59.520662+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:00.520784+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:01.520949+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:02.521085+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:03.521226+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:04.521519+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:05.522654+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:06.523487+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:07.523834+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:08.524165+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:09.524695+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:10.524939+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:11.525165+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:12.525551+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cbccfc00
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:13.525901+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:14.526149+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:15.526361+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:16.526648+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:17.526821+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:18.526968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:19.527188+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:20.527387+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:21.527556+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:22.527716+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:23.527865+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:24.528023+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:25.528188+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:26.528346+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:27.528483+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:28.528642+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:29.528803+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:30.529014+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:31.529190+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:32.529443+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:33.529592+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:34.529818+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:35.530029+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:36.530209+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:37.530390+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:38.530510+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:39.530618+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:40.530757+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:41.530855+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:42.530968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:43.531039+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:44.531170+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:45.532107+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:46.532365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:47.532500+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:48.532639+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:49.532789+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:50.532963+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:51.533095+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:52.533274+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:53.533410+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 45875200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:54.533579+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:55.533773+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:56.534008+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:57.534167+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:58.534403+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:59.534572+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:00.534704+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:01.534854+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:02.534998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:03.535178+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:04.535305+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 45842432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:05.535450+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 45842432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:06.535654+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:07.535789+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:08.535965+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:09.536117+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:10.536266+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:11.536412+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:12.536558+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:13.536739+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:14.536912+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:15.537082+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:16.537268+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cd8e3800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:17.537396+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 45817856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:18.537562+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 45817856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:19.537735+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:20.537925+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:21.538141+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:22.538459+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:23.538598+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:24.538763+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 45785088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:25.538906+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:26.539081+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:27.539248+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:28.539411+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:29.539601+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:30.539751+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:31.539909+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:32.540066+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:33.540222+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:34.540356+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:35.540529+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:36.540718+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:37.540872+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:38.541024+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:39.541202+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:40.541382+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 45744128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:41.541538+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 45744128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:42.541694+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:43.541823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:44.541970+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:45.542106+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:46.542269+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 45727744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:47.542423+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d144e000 session 0x55a6cb89d680
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cdadb000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 45727744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:48.542556+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:49.542691+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:50.542850+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:51.542974+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:52.543104+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:53.543216+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:54.543350+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:55.543492+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:56.543692+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:57.543875+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:58.543998+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:59.544143+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:00.544358+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:01.544491+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 45703168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:02.545066+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 45694976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:03.545400+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 45694976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:04.545569+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:05.545673+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:06.545834+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:07.545997+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:08.546163+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:09.546349+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:10.546517+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:11.546701+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:12.546831+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:13.547066+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:14.547236+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:15.547365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:16.547506+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:17.547673+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:18.547857+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:19.548043+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:20.548224+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:21.548398+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 45654016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:22.548537+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 45654016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:23.548676+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 45645824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:24.548884+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:25.549030+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:26.549218+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:27.549444+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:28.549620+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:29.549818+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:30.549971+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:31.550102+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:32.550231+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:33.550386+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:34.550499+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:35.550683+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:36.551018+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:37.551290+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 45613056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: mgrc ms_handle_reset ms_handle_reset con 0x55a6ce181000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1636168236
Nov 22 10:22:26 compute-0 ceph-osd[90703]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1636168236,v1:192.168.122.100:6801/1636168236]
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: get_auth_request con 0x55a6cda86400 auth_method 0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: mgrc handle_mgr_configure stats_period=5
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:38.551627+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:39.551811+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:40.551995+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:41.552243+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:42.552406+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:43.552639+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:44.552823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:45.553040+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:46.553289+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:47.553456+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:48.553644+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:49.553803+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cef83400 session 0x55a6cca5c960
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d0f2d800
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:50.553934+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:51.554077+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:52.554209+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:53.554363+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:54.554486+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:55.554670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:56.554831+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:57.554976+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:58.555096+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:59.555219+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:00.555389+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:01.555521+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:02.555689+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:03.555823+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:04.555989+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:05.556143+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:06.556407+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:07.556579+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:08.556771+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:09.556953+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:10.557147+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:11.557444+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:12.557594+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:13.557756+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:14.557887+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:15.558080+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:16.558255+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:17.558400+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:18.558813+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:19.559561+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:20.559864+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:21.560360+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:22.560798+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:23.561000+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:24.561281+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:25.561552+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:26.561820+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:27.561937+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:28.562152+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:29.562387+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:30.562851+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:31.563063+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:32.563216+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:33.563356+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:34.563480+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:35.563590+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:36.563798+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:37.563965+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:38.564231+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:39.564449+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:40.564621+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:41.564725+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:42.564919+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:43.565117+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:44.565293+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:45.565445+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:46.565652+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:47.565798+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:48.565936+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:49.566123+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cca5c1e0
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6cef83400
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:50.566291+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:51.566483+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:52.566871+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:53.567014+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:54.567255+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:55.567452+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:56.567651+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:57.567769+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:58.567894+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:59.568073+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:00.568232+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:01.568389+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:02.568582+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:03.568716+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:04.568857+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:05.569040+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:06.569266+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:07.569481+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:08.569679+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:09.569843+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:10.570410+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:11.570555+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:12.570740+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:13.570978+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:14.571140+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:15.571371+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:16.571606+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:17.571882+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:18.572057+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:19.572215+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:20.572440+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:21.572623+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294313984 unmapped: 47677440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:22.572782+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:23.573005+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:24.573274+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:25.573565+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:26.573809+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:27.574024+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:28.574173+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:29.574383+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:30.574565+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:31.574810+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:32.574987+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:33.575166+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:34.575421+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:35.575624+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:36.575888+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:37.576036+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:38.576231+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:39.576401+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:40.576664+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:41.576808+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:42.577012+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:43.577255+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:44.577453+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:45.577600+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:46.577882+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:47.578028+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:48.578218+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:49.578435+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:50.578645+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:51.578854+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:52.579050+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:53.579299+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:54.579567+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:55.580670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:56.580838+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:57.581602+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:58.582217+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:59.582984+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:00.583624+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:01.583943+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:02.584365+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:03.584563+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:04.585059+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:05.585388+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:06.585928+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:07.586126+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:08.586540+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:09.586922+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:10.587163+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:11.587356+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:12.587490+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:13.587703+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:14.587848+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:15.588068+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:16.588244+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:17.588417+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:18.588606+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:19.588889+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:20.589033+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:21.589171+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:22.589468+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:23.589655+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:24.589860+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:25.590091+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:26.590270+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:27.592785+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:28.593380+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:29.595239+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:30.596078+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:31.596503+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:32.597968+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:33.599518+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:34.599755+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:35.600216+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb400 session 0x55a6cca4f860
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: handle_auth_request added challenge on 0x55a6d1012000
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:36.601897+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:37.602462+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:38.603058+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:39.603367+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:40.603670+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:41.604433+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:42.604680+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:43.605005+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:44.605154+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:45.605389+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:46.605581+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:47.605789+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:48.606022+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:49.606195+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:50.606347+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:51.606526+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:26 compute-0 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:26 compute-0 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:52.606662+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:53.606826+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'config diff' '{prefix=config diff}'
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'config show' '{prefix=config show}'
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293888000 unmapped: 48103424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:54.607146+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293675008 unmapped: 48316416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: tick
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_tickets
Nov 22 10:22:26 compute-0 ceph-osd[90703]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:55.607286+0000)
Nov 22 10:22:26 compute-0 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293355520 unmapped: 48635904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:26 compute-0 ceph-osd[90703]: do_command 'log dump' '{prefix=log dump}'
Nov 22 10:22:26 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 10:22:26 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454304081' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 10:22:26 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:26.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:26 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:26 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:27 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:22:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 10:22:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457181493' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:27 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23069 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:27 compute-0 ceph-mon[75021]: from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/454304081' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2457181493' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:22:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040159002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:22:27 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 10:22:27 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787959640' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.750 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:22:27 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:27 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:27.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:27 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:27 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.961 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.962 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.962 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:22:27 compute-0 nova_compute[253661]: 2025-11-22 10:22:27.963 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:22:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 10:22:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 10:22:28 compute-0 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.131 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.190 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 10:22:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 10:22:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357294411' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.265 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.265 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.286 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.312 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.328 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 10:22:28 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 10:22:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669056139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:28 compute-0 ceph-mon[75021]: from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: pgmap v3623: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:28 compute-0 ceph-mon[75021]: from='client.23069 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2040159002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1787959640' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2357294411' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 10:22:28 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 10:22:28 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741712470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.851 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.860 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.879 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.883 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 10:22:28 compute-0 nova_compute[253661]: 2025-11-22 10:22:28.887 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 10:22:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:28.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:29 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:29 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:29 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23089 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:29.334+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 10:22:29 compute-0 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 10:22:29 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:29 compute-0 nova_compute[253661]: 2025-11-22 10:22:29.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:29 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1924 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 10:22:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995914162' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:29 compute-0 ceph-mon[75021]: from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3669056139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1741712470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1924 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:29 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3995914162' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 10:22:29 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 10:22:29 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466885523' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 10:22:29 compute-0 nova_compute[253661]: 2025-11-22 10:22:29.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:29 compute-0 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:29 compute-0 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:29 compute-0 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 10:22:30 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:30.000+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:30 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:30 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 10:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882830295' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 10:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813742188' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 10:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260563121' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 10:22:30 compute-0 crontab[445334]: (root) LIST (root)
Nov 22 10:22:30 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:30 compute-0 ceph-mon[75021]: from='client.23089 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: pgmap v3624: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/466885523' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/882830295' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2813742188' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/260563121' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 10:22:30 compute-0 nova_compute[253661]: 2025-11-22 10:22:30.712 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 10:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817642346' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 10:22:30 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 10:22:30 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136342949' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:31.029+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 10:22:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424716433' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 10:22:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811321850' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 10:22:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446227911' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2817642346' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/4136342949' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/424716433' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1811321850' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 10:22:31 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160580551' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:00.575126+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1185 sent 1184 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:29.978902+0000 osd.1 (osd.1) 1185 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368566272 unmapped: 48791552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9994> 2025-11-22T10:08:30.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1185) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:29.978902+0000 osd.1 (osd.1) 1185 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:01.575370+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1186 sent 1185 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:30.950175+0000 osd.1 (osd.1) 1186 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368574464 unmapped: 48783360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9983> 2025-11-22T10:08:31.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1186) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:30.950175+0000 osd.1 (osd.1) 1186 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:02.575624+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1187 sent 1186 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:31.947449+0000 osd.1 (osd.1) 1187 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 48775168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9971> 2025-11-22T10:08:32.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1187) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:31.947449+0000 osd.1 (osd.1) 1187 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:03.575802+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1188 sent 1187 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:32.985163+0000 osd.1 (osd.1) 1188 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9957> 2025-11-22T10:08:33.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1188) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:32.985163+0000 osd.1 (osd.1) 1188 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:04.576046+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1189 sent 1188 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:33.962727+0000 osd.1 (osd.1) 1189 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9946> 2025-11-22T10:08:34.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1189) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:33.962727+0000 osd.1 (osd.1) 1189 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:05.576231+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1190 sent 1189 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:34.922466+0000 osd.1 (osd.1) 1190 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9935> 2025-11-22T10:08:35.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1190) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:34.922466+0000 osd.1 (osd.1) 1190 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:06.576423+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1191 sent 1190 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:35.959422+0000 osd.1 (osd.1) 1191 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9924> 2025-11-22T10:08:36.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1191) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:35.959422+0000 osd.1 (osd.1) 1191 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:07.576612+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1192 sent 1191 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:36.929731+0000 osd.1 (osd.1) 1192 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9912> 2025-11-22T10:08:37.902+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1192) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:36.929731+0000 osd.1 (osd.1) 1192 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:08.576791+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1193 sent 1192 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:37.903234+0000 osd.1 (osd.1) 1193 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9900> 2025-11-22T10:08:38.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1193) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:37.903234+0000 osd.1 (osd.1) 1193 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:09.576937+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1194 sent 1193 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:38.858866+0000 osd.1 (osd.1) 1194 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9885> 2025-11-22T10:08:39.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1194) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:38.858866+0000 osd.1 (osd.1) 1194 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:10.577115+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1195 sent 1194 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:39.860532+0000 osd.1 (osd.1) 1195 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9874> 2025-11-22T10:08:40.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:11.577260+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1196 sent 1195 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:40.846971+0000 osd.1 (osd.1) 1196 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1195) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:39.860532+0000 osd.1 (osd.1) 1195 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9863> 2025-11-22T10:08:41.815+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:12.577525+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1197 sent 1196 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:41.816016+0000 osd.1 (osd.1) 1197 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1196) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:40.846971+0000 osd.1 (osd.1) 1196 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1197) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:41.816016+0000 osd.1 (osd.1) 1197 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9850> 2025-11-22T10:08:42.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:13.577753+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1198 sent 1197 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:42.773518+0000 osd.1 (osd.1) 1198 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1198) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:42.773518+0000 osd.1 (osd.1) 1198 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9838> 2025-11-22T10:08:43.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:14.577980+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1199 sent 1198 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:43.818724+0000 osd.1 (osd.1) 1199 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1199) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:43.818724+0000 osd.1 (osd.1) 1199 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9825> 2025-11-22T10:08:44.787+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:15.578185+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1200 sent 1199 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:44.788411+0000 osd.1 (osd.1) 1200 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1200) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:44.788411+0000 osd.1 (osd.1) 1200 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9813> 2025-11-22T10:08:45.779+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:16.578403+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1201 sent 1200 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:45.780028+0000 osd.1 (osd.1) 1201 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1201) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:45.780028+0000 osd.1 (osd.1) 1201 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9802> 2025-11-22T10:08:46.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:17.578595+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1202 sent 1201 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:46.805173+0000 osd.1 (osd.1) 1202 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1202) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:46.805173+0000 osd.1 (osd.1) 1202 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9791> 2025-11-22T10:08:47.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:18.578787+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1203 sent 1202 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:47.769535+0000 osd.1 (osd.1) 1203 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1203) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:47.769535+0000 osd.1 (osd.1) 1203 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9780> 2025-11-22T10:08:48.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:19.578990+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1204 sent 1203 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:48.761598+0000 osd.1 (osd.1) 1204 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1204) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:48.761598+0000 osd.1 (osd.1) 1204 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9765> 2025-11-22T10:08:49.737+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:20.579167+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1205 sent 1204 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:49.738974+0000 osd.1 (osd.1) 1205 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1205) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:49.738974+0000 osd.1 (osd.1) 1205 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9754> 2025-11-22T10:08:50.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:21.579399+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1206 sent 1205 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:50.704158+0000 osd.1 (osd.1) 1206 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1206) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:50.704158+0000 osd.1 (osd.1) 1206 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9743> 2025-11-22T10:08:51.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:22.579651+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1207 sent 1206 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:51.709610+0000 osd.1 (osd.1) 1207 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9734> 2025-11-22T10:08:52.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1207) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:51.709610+0000 osd.1 (osd.1) 1207 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:23.579928+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1208 sent 1207 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:52.668068+0000 osd.1 (osd.1) 1208 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1208) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:52.668068+0000 osd.1 (osd.1) 1208 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9721> 2025-11-22T10:08:53.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:24.580155+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1209 sent 1208 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:53.704648+0000 osd.1 (osd.1) 1209 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9708> 2025-11-22T10:08:54.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1209) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:53.704648+0000 osd.1 (osd.1) 1209 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:25.580391+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1210 sent 1209 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:54.684280+0000 osd.1 (osd.1) 1210 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9697> 2025-11-22T10:08:55.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1210) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:54.684280+0000 osd.1 (osd.1) 1210 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:26.580631+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1211 sent 1210 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:55.681525+0000 osd.1 (osd.1) 1211 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9686> 2025-11-22T10:08:56.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1211) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:55.681525+0000 osd.1 (osd.1) 1211 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:27.580868+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1212 sent 1211 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:56.678304+0000 osd.1 (osd.1) 1212 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9675> 2025-11-22T10:08:57.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1212) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:56.678304+0000 osd.1 (osd.1) 1212 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:28.581092+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1213 sent 1212 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:57.691073+0000 osd.1 (osd.1) 1213 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9663> 2025-11-22T10:08:58.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1213) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:57.691073+0000 osd.1 (osd.1) 1213 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:29.581415+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1214 sent 1213 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:58.677848+0000 osd.1 (osd.1) 1214 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9649> 2025-11-22T10:08:59.641+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1214) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:58.677848+0000 osd.1 (osd.1) 1214 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:30.581721+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1215 sent 1214 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:08:59.642110+0000 osd.1 (osd.1) 1215 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9638> 2025-11-22T10:09:00.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1215) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:08:59.642110+0000 osd.1 (osd.1) 1215 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:31.582075+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1216 sent 1215 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:00.675624+0000 osd.1 (osd.1) 1216 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9626> 2025-11-22T10:09:01.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1216) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:00.675624+0000 osd.1 (osd.1) 1216 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:32.582333+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1217 sent 1216 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:01.720122+0000 osd.1 (osd.1) 1217 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9615> 2025-11-22T10:09:02.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1217) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:01.720122+0000 osd.1 (osd.1) 1217 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:33.582529+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1218 sent 1217 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:02.695604+0000 osd.1 (osd.1) 1218 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9603> 2025-11-22T10:09:03.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1218) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:02.695604+0000 osd.1 (osd.1) 1218 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:34.582845+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1219 sent 1218 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:03.713414+0000 osd.1 (osd.1) 1219 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9589> 2025-11-22T10:09:04.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1219) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:03.713414+0000 osd.1 (osd.1) 1219 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:35.583142+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1220 sent 1219 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:04.687580+0000 osd.1 (osd.1) 1220 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9578> 2025-11-22T10:09:05.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1220) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:04.687580+0000 osd.1 (osd.1) 1220 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:36.583517+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1221 sent 1220 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:05.658670+0000 osd.1 (osd.1) 1221 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9566> 2025-11-22T10:09:06.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1221) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:05.658670+0000 osd.1 (osd.1) 1221 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:37.583869+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1222 sent 1221 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:06.684404+0000 osd.1 (osd.1) 1222 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9555> 2025-11-22T10:09:07.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1222) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:06.684404+0000 osd.1 (osd.1) 1222 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:38.584113+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1223 sent 1222 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:07.705047+0000 osd.1 (osd.1) 1223 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9544> 2025-11-22T10:09:08.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1223) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:07.705047+0000 osd.1 (osd.1) 1223 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:39.584440+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1224 sent 1223 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:08.689452+0000 osd.1 (osd.1) 1224 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9529> 2025-11-22T10:09:09.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1224) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:08.689452+0000 osd.1 (osd.1) 1224 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:40.584763+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1225 sent 1224 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:09.711568+0000 osd.1 (osd.1) 1225 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9518> 2025-11-22T10:09:10.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1225) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:09.711568+0000 osd.1 (osd.1) 1225 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:41.585021+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1226 sent 1225 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:10.690932+0000 osd.1 (osd.1) 1226 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9507> 2025-11-22T10:09:11.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1226) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:10.690932+0000 osd.1 (osd.1) 1226 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:42.585426+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1227 sent 1226 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:11.678620+0000 osd.1 (osd.1) 1227 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9496> 2025-11-22T10:09:12.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1227) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:11.678620+0000 osd.1 (osd.1) 1227 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:43.585665+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1228 sent 1227 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:12.711453+0000 osd.1 (osd.1) 1228 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9485> 2025-11-22T10:09:13.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1228) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:12.711453+0000 osd.1 (osd.1) 1228 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:44.585840+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1229 sent 1228 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:13.731895+0000 osd.1 (osd.1) 1229 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9470> 2025-11-22T10:09:14.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1229) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:13.731895+0000 osd.1 (osd.1) 1229 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:45.586058+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1230 sent 1229 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:14.710728+0000 osd.1 (osd.1) 1230 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9459> 2025-11-22T10:09:15.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1230) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:14.710728+0000 osd.1 (osd.1) 1230 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:46.586460+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1231 sent 1230 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:15.715663+0000 osd.1 (osd.1) 1231 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9448> 2025-11-22T10:09:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1231) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:15.715663+0000 osd.1 (osd.1) 1231 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:47.586680+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1232 sent 1231 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:16.690044+0000 osd.1 (osd.1) 1232 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9436> 2025-11-22T10:09:17.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1232) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:16.690044+0000 osd.1 (osd.1) 1232 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:48.586908+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1233 sent 1232 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:17.671134+0000 osd.1 (osd.1) 1233 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9425> 2025-11-22T10:09:18.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1233) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:17.671134+0000 osd.1 (osd.1) 1233 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:49.587123+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1234 sent 1233 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:18.694625+0000 osd.1 (osd.1) 1234 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9411> 2025-11-22T10:09:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1234) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:18.694625+0000 osd.1 (osd.1) 1234 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:50.587436+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1235 sent 1234 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:19.693968+0000 osd.1 (osd.1) 1235 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9400> 2025-11-22T10:09:20.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1235) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:19.693968+0000 osd.1 (osd.1) 1235 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:51.587761+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1236 sent 1235 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:20.690169+0000 osd.1 (osd.1) 1236 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9389> 2025-11-22T10:09:21.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 48635904 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1236) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:20.690169+0000 osd.1 (osd.1) 1236 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:52.588156+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1237 sent 1236 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:21.670755+0000 osd.1 (osd.1) 1237 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9377> 2025-11-22T10:09:22.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1237) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:21.670755+0000 osd.1 (osd.1) 1237 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:53.588385+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1238 sent 1237 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:22.648368+0000 osd.1 (osd.1) 1238 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9365> 2025-11-22T10:09:23.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1238) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:22.648368+0000 osd.1 (osd.1) 1238 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:54.588600+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1239 sent 1238 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:23.608138+0000 osd.1 (osd.1) 1239 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9350> 2025-11-22T10:09:24.636+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1239) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:23.608138+0000 osd.1 (osd.1) 1239 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:55.588798+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1240 sent 1239 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:24.637296+0000 osd.1 (osd.1) 1240 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9339> 2025-11-22T10:09:25.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1240) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:24.637296+0000 osd.1 (osd.1) 1240 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:56.589019+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1241 sent 1240 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:25.626705+0000 osd.1 (osd.1) 1241 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9327> 2025-11-22T10:09:26.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:57.589223+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1242 sent 1241 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:26.590412+0000 osd.1 (osd.1) 1242 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1241) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:25.626705+0000 osd.1 (osd.1) 1241 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9316> 2025-11-22T10:09:27.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:58.589451+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1243 sent 1242 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:27.610401+0000 osd.1 (osd.1) 1243 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1242) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:26.590412+0000 osd.1 (osd.1) 1242 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1243) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:27.610401+0000 osd.1 (osd.1) 1243 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9302> 2025-11-22T10:09:28.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:59.589667+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1244 sent 1243 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:28.615181+0000 osd.1 (osd.1) 1244 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1244) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:28.615181+0000 osd.1 (osd.1) 1244 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9288> 2025-11-22T10:09:29.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 48619520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9285> 2025-11-22T10:09:30.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:00.589859+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1246 sent 1244 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:29.626744+0000 osd.1 (osd.1) 1245 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:30.589420+0000 osd.1 (osd.1) 1246 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1246) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:29.626744+0000 osd.1 (osd.1) 1245 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:30.589420+0000 osd.1 (osd.1) 1246 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 48619520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:01.590183+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9269> 2025-11-22T10:09:31.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:02.590397+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1247 sent 1246 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:31.614471+0000 osd.1 (osd.1) 1247 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9259> 2025-11-22T10:09:32.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1247) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:31.614471+0000 osd.1 (osd.1) 1247 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:03.590678+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1248 sent 1247 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:32.593407+0000 osd.1 (osd.1) 1248 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9247> 2025-11-22T10:09:33.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1248) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:32.593407+0000 osd.1 (osd.1) 1248 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:04.590980+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1249 sent 1248 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:33.610833+0000 osd.1 (osd.1) 1249 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1249) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:33.610833+0000 osd.1 (osd.1) 1249 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9230> 2025-11-22T10:09:34.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:05.591386+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1250 sent 1249 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:34.653393+0000 osd.1 (osd.1) 1250 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9221> 2025-11-22T10:09:35.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1250) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:34.653393+0000 osd.1 (osd.1) 1250 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9215> 2025-11-22T10:09:36.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:06.591594+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1252 sent 1250 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:35.614926+0000 osd.1 (osd.1) 1251 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:36.591586+0000 osd.1 (osd.1) 1252 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1252) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:35.614926+0000 osd.1 (osd.1) 1251 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:36.591586+0000 osd.1 (osd.1) 1252 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:07.591909+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9199> 2025-11-22T10:09:37.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:08.592198+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1253 sent 1252 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:37.595498+0000 osd.1 (osd.1) 1253 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9190> 2025-11-22T10:09:38.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1253) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:37.595498+0000 osd.1 (osd.1) 1253 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9182> 2025-11-22T10:09:39.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:09.592392+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1255 sent 1253 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:38.601588+0000 osd.1 (osd.1) 1254 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:39.590550+0000 osd.1 (osd.1) 1255 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1255) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:38.601588+0000 osd.1 (osd.1) 1254 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:39.590550+0000 osd.1 (osd.1) 1255 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:10.592631+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9166> 2025-11-22T10:09:40.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9163> 2025-11-22T10:09:41.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:11.592752+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1257 sent 1255 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:40.607738+0000 osd.1 (osd.1) 1256 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:41.586703+0000 osd.1 (osd.1) 1257 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1257) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:40.607738+0000 osd.1 (osd.1) 1256 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:41.586703+0000 osd.1 (osd.1) 1257 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9149> 2025-11-22T10:09:42.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:12.593029+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1258 sent 1257 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:42.546931+0000 osd.1 (osd.1) 1258 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1258) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:42.546931+0000 osd.1 (osd.1) 1258 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9138> 2025-11-22T10:09:43.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:13.593481+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1259 sent 1258 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:43.556514+0000 osd.1 (osd.1) 1259 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1259) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:43.556514+0000 osd.1 (osd.1) 1259 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9124> 2025-11-22T10:09:44.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:14.594168+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1260 sent 1259 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:44.514998+0000 osd.1 (osd.1) 1260 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1260) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:44.514998+0000 osd.1 (osd.1) 1260 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9112> 2025-11-22T10:09:45.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:15.594615+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1261 sent 1260 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:45.494236+0000 osd.1 (osd.1) 1261 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1261) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:45.494236+0000 osd.1 (osd.1) 1261 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9101> 2025-11-22T10:09:46.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:16.595184+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1262 sent 1261 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:46.454584+0000 osd.1 (osd.1) 1262 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1262) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:46.454584+0000 osd.1 (osd.1) 1262 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9089> 2025-11-22T10:09:47.481+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:17.595457+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1263 sent 1262 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:47.482059+0000 osd.1 (osd.1) 1263 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1263) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:47.482059+0000 osd.1 (osd.1) 1263 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9078> 2025-11-22T10:09:48.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:18.596235+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1264 sent 1263 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:48.505875+0000 osd.1 (osd.1) 1264 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1264) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:48.505875+0000 osd.1 (osd.1) 1264 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9064> 2025-11-22T10:09:49.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:19.597230+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1265 sent 1264 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:49.531716+0000 osd.1 (osd.1) 1265 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1265) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:49.531716+0000 osd.1 (osd.1) 1265 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9052> 2025-11-22T10:09:50.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:20.597453+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1266 sent 1265 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:50.559090+0000 osd.1 (osd.1) 1266 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1266) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:50.559090+0000 osd.1 (osd.1) 1266 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9041> 2025-11-22T10:09:51.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:21.597678+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1267 sent 1266 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:51.588544+0000 osd.1 (osd.1) 1267 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1267) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:51.588544+0000 osd.1 (osd.1) 1267 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:22.597896+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9027> 2025-11-22T10:09:52.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:23.598070+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1268 sent 1267 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:52.615813+0000 osd.1 (osd.1) 1268 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9018> 2025-11-22T10:09:53.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1268) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:52.615813+0000 osd.1 (osd.1) 1268 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 48553984 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:24.598296+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1269 sent 1268 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:53.634935+0000 osd.1 (osd.1) 1269 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -9004> 2025-11-22T10:09:54.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1269) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:53.634935+0000 osd.1 (osd.1) 1269 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:25.598541+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1270 sent 1269 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:54.628112+0000 osd.1 (osd.1) 1270 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8993> 2025-11-22T10:09:55.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1270) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:54.628112+0000 osd.1 (osd.1) 1270 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:26.598747+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1271 sent 1270 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:55.661486+0000 osd.1 (osd.1) 1271 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8981> 2025-11-22T10:09:56.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1271) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:55.661486+0000 osd.1 (osd.1) 1271 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:27.599023+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1272 sent 1271 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:56.709631+0000 osd.1 (osd.1) 1272 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8970> 2025-11-22T10:09:57.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1272) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:56.709631+0000 osd.1 (osd.1) 1272 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:28.599258+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1273 sent 1272 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:57.700625+0000 osd.1 (osd.1) 1273 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8959> 2025-11-22T10:09:58.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1273) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:57.700625+0000 osd.1 (osd.1) 1273 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:29.599527+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1274 sent 1273 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:58.691848+0000 osd.1 (osd.1) 1274 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8944> 2025-11-22T10:09:59.656+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1274) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:58.691848+0000 osd.1 (osd.1) 1274 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:30.599736+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1275 sent 1274 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:09:59.656911+0000 osd.1 (osd.1) 1275 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8933> 2025-11-22T10:10:00.696+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1275) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:09:59.656911+0000 osd.1 (osd.1) 1275 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:31.599932+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1276 sent 1275 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:00.697036+0000 osd.1 (osd.1) 1276 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8921> 2025-11-22T10:10:01.705+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1276) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:00.697036+0000 osd.1 (osd.1) 1276 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:32.600197+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1277 sent 1276 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:01.706590+0000 osd.1 (osd.1) 1277 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8910> 2025-11-22T10:10:02.748+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1277) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:01.706590+0000 osd.1 (osd.1) 1277 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:33.600372+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1278 sent 1277 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:02.748853+0000 osd.1 (osd.1) 1278 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8899> 2025-11-22T10:10:03.698+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1278) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:02.748853+0000 osd.1 (osd.1) 1278 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:34.600538+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1279 sent 1278 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:03.699577+0000 osd.1 (osd.1) 1279 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8884> 2025-11-22T10:10:04.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1279) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:03.699577+0000 osd.1 (osd.1) 1279 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:35.600816+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1280 sent 1279 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:04.724152+0000 osd.1 (osd.1) 1280 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8872> 2025-11-22T10:10:05.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1280) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:04.724152+0000 osd.1 (osd.1) 1280 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:36.601116+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1281 sent 1280 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:05.679871+0000 osd.1 (osd.1) 1281 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8861> 2025-11-22T10:10:06.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1281) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:05.679871+0000 osd.1 (osd.1) 1281 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:37.601349+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1282 sent 1281 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:06.645946+0000 osd.1 (osd.1) 1282 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8850> 2025-11-22T10:10:07.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1282) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:06.645946+0000 osd.1 (osd.1) 1282 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:38.602022+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1283 sent 1282 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:07.681908+0000 osd.1 (osd.1) 1283 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8839> 2025-11-22T10:10:08.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1283) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:07.681908+0000 osd.1 (osd.1) 1283 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:39.602627+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1284 sent 1283 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:08.709726+0000 osd.1 (osd.1) 1284 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8825> 2025-11-22T10:10:09.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1284) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:08.709726+0000 osd.1 (osd.1) 1284 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:40.603164+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1285 sent 1284 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:09.678273+0000 osd.1 (osd.1) 1285 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8813> 2025-11-22T10:10:10.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1285) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:09.678273+0000 osd.1 (osd.1) 1285 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:41.603650+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1286 sent 1285 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:10.718870+0000 osd.1 (osd.1) 1286 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8801> 2025-11-22T10:10:11.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1286) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:10.718870+0000 osd.1 (osd.1) 1286 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:42.604116+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1287 sent 1286 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:11.764550+0000 osd.1 (osd.1) 1287 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8790> 2025-11-22T10:10:12.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1287) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:11.764550+0000 osd.1 (osd.1) 1287 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:43.604442+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1288 sent 1287 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:12.756547+0000 osd.1 (osd.1) 1288 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8779> 2025-11-22T10:10:13.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1288) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:12.756547+0000 osd.1 (osd.1) 1288 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:44.604748+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1289 sent 1288 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:13.793535+0000 osd.1 (osd.1) 1289 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8765> 2025-11-22T10:10:14.744+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1289) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:13.793535+0000 osd.1 (osd.1) 1289 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368852992 unmapped: 48504832 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:45.605547+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1290 sent 1289 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:14.745274+0000 osd.1 (osd.1) 1290 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8754> 2025-11-22T10:10:15.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1290) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:14.745274+0000 osd.1 (osd.1) 1290 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368852992 unmapped: 48504832 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:46.605977+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1291 sent 1290 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:15.713677+0000 osd.1 (osd.1) 1291 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8743> 2025-11-22T10:10:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1291) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:15.713677+0000 osd.1 (osd.1) 1291 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 48496640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:47.606373+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1292 sent 1291 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:16.690166+0000 osd.1 (osd.1) 1292 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8731> 2025-11-22T10:10:17.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1292) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:16.690166+0000 osd.1 (osd.1) 1292 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 48496640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:48.606596+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1293 sent 1292 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:17.691672+0000 osd.1 (osd.1) 1293 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8719> 2025-11-22T10:10:18.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1293) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:17.691672+0000 osd.1 (osd.1) 1293 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:49.606774+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1294 sent 1293 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:18.665668+0000 osd.1 (osd.1) 1294 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8705> 2025-11-22T10:10:19.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1294) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:18.665668+0000 osd.1 (osd.1) 1294 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:50.606990+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1295 sent 1294 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:19.691861+0000 osd.1 (osd.1) 1295 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8694> 2025-11-22T10:10:20.733+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1295) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:19.691861+0000 osd.1 (osd.1) 1295 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:51.607456+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1296 sent 1295 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:20.735360+0000 osd.1 (osd.1) 1296 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8683> 2025-11-22T10:10:21.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1296) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:20.735360+0000 osd.1 (osd.1) 1296 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:52.607647+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1297 sent 1296 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:21.716750+0000 osd.1 (osd.1) 1297 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8672> 2025-11-22T10:10:22.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1297) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:21.716750+0000 osd.1 (osd.1) 1297 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:53.607843+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1298 sent 1297 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:22.678058+0000 osd.1 (osd.1) 1298 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8659> 2025-11-22T10:10:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1298) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:22.678058+0000 osd.1 (osd.1) 1298 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:54.608038+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1299 sent 1298 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:23.672771+0000 osd.1 (osd.1) 1299 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8645> 2025-11-22T10:10:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1299) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:23.672771+0000 osd.1 (osd.1) 1299 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:55.608246+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1300 sent 1299 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:24.721245+0000 osd.1 (osd.1) 1300 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8633> 2025-11-22T10:10:25.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1300) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:24.721245+0000 osd.1 (osd.1) 1300 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:56.608502+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1301 sent 1300 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:25.729183+0000 osd.1 (osd.1) 1301 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8622> 2025-11-22T10:10:26.731+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1301) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:25.729183+0000 osd.1 (osd.1) 1301 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:57.608730+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1302 sent 1301 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:26.731434+0000 osd.1 (osd.1) 1302 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8611> 2025-11-22T10:10:27.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1302) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:26.731434+0000 osd.1 (osd.1) 1302 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:58.608942+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1303 sent 1302 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:27.781109+0000 osd.1 (osd.1) 1303 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8600> 2025-11-22T10:10:28.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1303) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:27.781109+0000 osd.1 (osd.1) 1303 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:59.609202+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1304 sent 1303 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:28.817596+0000 osd.1 (osd.1) 1304 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8585> 2025-11-22T10:10:29.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1304) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:28.817596+0000 osd.1 (osd.1) 1304 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:00.609535+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1305 sent 1304 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:29.795353+0000 osd.1 (osd.1) 1305 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8574> 2025-11-22T10:10:30.843+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1305) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:29.795353+0000 osd.1 (osd.1) 1305 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:01.609845+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1306 sent 1305 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:30.844277+0000 osd.1 (osd.1) 1306 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8563> 2025-11-22T10:10:31.803+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1306) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:30.844277+0000 osd.1 (osd.1) 1306 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:02.610183+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1307 sent 1306 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:31.804030+0000 osd.1 (osd.1) 1307 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8550> 2025-11-22T10:10:32.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1307) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:31.804030+0000 osd.1 (osd.1) 1307 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:03.610396+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1308 sent 1307 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:32.848287+0000 osd.1 (osd.1) 1308 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8539> 2025-11-22T10:10:33.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1308) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:32.848287+0000 osd.1 (osd.1) 1308 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:04.610651+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1309 sent 1308 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:33.835045+0000 osd.1 (osd.1) 1309 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8525> 2025-11-22T10:10:34.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1309) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:33.835045+0000 osd.1 (osd.1) 1309 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:05.610877+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1310 sent 1309 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:34.848809+0000 osd.1 (osd.1) 1310 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8514> 2025-11-22T10:10:35.880+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1310) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:34.848809+0000 osd.1 (osd.1) 1310 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:06.611102+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1311 sent 1310 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:35.880761+0000 osd.1 (osd.1) 1311 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8503> 2025-11-22T10:10:36.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1311) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:35.880761+0000 osd.1 (osd.1) 1311 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:07.611373+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1312 sent 1311 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:36.832614+0000 osd.1 (osd.1) 1312 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8490> 2025-11-22T10:10:37.791+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1312) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:36.832614+0000 osd.1 (osd.1) 1312 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:08.611553+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1313 sent 1312 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:37.791820+0000 osd.1 (osd.1) 1313 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8479> 2025-11-22T10:10:38.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1313) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:37.791820+0000 osd.1 (osd.1) 1313 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:09.611772+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1314 sent 1313 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:38.832204+0000 osd.1 (osd.1) 1314 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8463> 2025-11-22T10:10:39.870+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 48439296 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1314) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:38.832204+0000 osd.1 (osd.1) 1314 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:10.611988+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1315 sent 1314 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:39.870741+0000 osd.1 (osd.1) 1315 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8452> 2025-11-22T10:10:40.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1315) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:39.870741+0000 osd.1 (osd.1) 1315 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:11.612139+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1316 sent 1315 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:40.841520+0000 osd.1 (osd.1) 1316 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8440> 2025-11-22T10:10:41.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:12.612375+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1317 sent 1316 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:41.794614+0000 osd.1 (osd.1) 1317 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1316) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:40.841520+0000 osd.1 (osd.1) 1316 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8429> 2025-11-22T10:10:42.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:13.612625+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1318 sent 1317 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:42.777860+0000 osd.1 (osd.1) 1318 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1317) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:41.794614+0000 osd.1 (osd.1) 1317 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1318) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:42.777860+0000 osd.1 (osd.1) 1318 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8416> 2025-11-22T10:10:43.816+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:14.612841+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1319 sent 1318 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:43.817582+0000 osd.1 (osd.1) 1319 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1319) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:43.817582+0000 osd.1 (osd.1) 1319 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8402> 2025-11-22T10:10:44.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:15.613045+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1320 sent 1319 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:44.861251+0000 osd.1 (osd.1) 1320 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1320) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:44.861251+0000 osd.1 (osd.1) 1320 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8390> 2025-11-22T10:10:45.877+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368934912 unmapped: 48422912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:16.613359+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1321 sent 1320 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:45.878426+0000 osd.1 (osd.1) 1321 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1321) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:45.878426+0000 osd.1 (osd.1) 1321 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8379> 2025-11-22T10:10:46.832+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368934912 unmapped: 48422912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:17.613543+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1322 sent 1321 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:46.833423+0000 osd.1 (osd.1) 1322 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8370> 2025-11-22T10:10:47.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1322) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:46.833423+0000 osd.1 (osd.1) 1322 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368943104 unmapped: 48414720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:18.613814+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1323 sent 1322 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:47.820150+0000 osd.1 (osd.1) 1323 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8359> 2025-11-22T10:10:48.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1323) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:47.820150+0000 osd.1 (osd.1) 1323 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:19.614141+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1324 sent 1323 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:48.814201+0000 osd.1 (osd.1) 1324 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1324) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:48.814201+0000 osd.1 (osd.1) 1324 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8343> 2025-11-22T10:10:49.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:20.614423+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1325 sent 1324 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:49.852071+0000 osd.1 (osd.1) 1325 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1325) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:49.852071+0000 osd.1 (osd.1) 1325 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8331> 2025-11-22T10:10:50.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:21.614606+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1326 sent 1325 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:50.875798+0000 osd.1 (osd.1) 1326 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1326) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:50.875798+0000 osd.1 (osd.1) 1326 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8320> 2025-11-22T10:10:51.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:22.614841+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1327 sent 1326 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:51.892697+0000 osd.1 (osd.1) 1327 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1327) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:51.892697+0000 osd.1 (osd.1) 1327 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8309> 2025-11-22T10:10:52.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:23.615059+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1328 sent 1327 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:52.923116+0000 osd.1 (osd.1) 1328 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1328) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:52.923116+0000 osd.1 (osd.1) 1328 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8297> 2025-11-22T10:10:53.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:24.615252+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1329 sent 1328 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:53.951582+0000 osd.1 (osd.1) 1329 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1329) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:53.951582+0000 osd.1 (osd.1) 1329 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8283> 2025-11-22T10:10:54.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:25.615496+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1330 sent 1329 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:54.926817+0000 osd.1 (osd.1) 1330 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1330) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:54.926817+0000 osd.1 (osd.1) 1330 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8272> 2025-11-22T10:10:55.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:26.615715+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1331 sent 1330 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:55.940738+0000 osd.1 (osd.1) 1331 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1331) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:55.940738+0000 osd.1 (osd.1) 1331 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8260> 2025-11-22T10:10:56.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 48390144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:27.615964+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1332 sent 1331 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:56.921296+0000 osd.1 (osd.1) 1332 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1332) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:56.921296+0000 osd.1 (osd.1) 1332 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8249> 2025-11-22T10:10:57.931+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 48390144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:28.616275+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1333 sent 1332 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:57.932541+0000 osd.1 (osd.1) 1333 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8239> 2025-11-22T10:10:58.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1333) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:57.932541+0000 osd.1 (osd.1) 1333 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:29.616538+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1334 sent 1333 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:58.891806+0000 osd.1 (osd.1) 1334 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8225> 2025-11-22T10:10:59.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1334) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:58.891806+0000 osd.1 (osd.1) 1334 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:30.616702+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1335 sent 1334 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:10:59.858365+0000 osd.1 (osd.1) 1335 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8214> 2025-11-22T10:11:00.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1335) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:10:59.858365+0000 osd.1 (osd.1) 1335 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:31.616894+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1336 sent 1335 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:00.835360+0000 osd.1 (osd.1) 1336 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8203> 2025-11-22T10:11:01.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1336) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:00.835360+0000 osd.1 (osd.1) 1336 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:32.617257+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1337 sent 1336 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:01.820039+0000 osd.1 (osd.1) 1337 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8191> 2025-11-22T10:11:02.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1337) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:01.820039+0000 osd.1 (osd.1) 1337 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:33.617463+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1338 sent 1337 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:02.802584+0000 osd.1 (osd.1) 1338 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8180> 2025-11-22T10:11:03.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1338) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:02.802584+0000 osd.1 (osd.1) 1338 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:34.617620+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1339 sent 1338 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:03.807040+0000 osd.1 (osd.1) 1339 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8165> 2025-11-22T10:11:04.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1339) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:03.807040+0000 osd.1 (osd.1) 1339 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:35.617834+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1340 sent 1339 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:04.841392+0000 osd.1 (osd.1) 1340 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8154> 2025-11-22T10:11:05.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1340) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:04.841392+0000 osd.1 (osd.1) 1340 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:36.618038+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1341 sent 1340 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:05.825884+0000 osd.1 (osd.1) 1341 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8143> 2025-11-22T10:11:06.871+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1341) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:05.825884+0000 osd.1 (osd.1) 1341 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:37.618283+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1342 sent 1341 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:06.871622+0000 osd.1 (osd.1) 1342 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8132> 2025-11-22T10:11:07.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1342) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:06.871622+0000 osd.1 (osd.1) 1342 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:38.618532+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1343 sent 1342 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:07.883188+0000 osd.1 (osd.1) 1343 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8120> 2025-11-22T10:11:08.855+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1343) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:07.883188+0000 osd.1 (osd.1) 1343 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:39.618874+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1344 sent 1343 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:08.855441+0000 osd.1 (osd.1) 1344 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8106> 2025-11-22T10:11:09.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1344) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:08.855441+0000 osd.1 (osd.1) 1344 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:40.619043+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1345 sent 1344 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:09.883379+0000 osd.1 (osd.1) 1345 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8095> 2025-11-22T10:11:10.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1345) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:09.883379+0000 osd.1 (osd.1) 1345 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:41.619241+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1346 sent 1345 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:10.874233+0000 osd.1 (osd.1) 1346 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8084> 2025-11-22T10:11:11.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1346) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:10.874233+0000 osd.1 (osd.1) 1346 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:42.619728+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1347 sent 1346 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:11.859218+0000 osd.1 (osd.1) 1347 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8073> 2025-11-22T10:11:12.836+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1347) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:11.859218+0000 osd.1 (osd.1) 1347 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:43.619935+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1348 sent 1347 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:12.836498+0000 osd.1 (osd.1) 1348 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8062> 2025-11-22T10:11:13.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1348) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:12.836498+0000 osd.1 (osd.1) 1348 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:44.620370+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1349 sent 1348 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:13.812596+0000 osd.1 (osd.1) 1349 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8047> 2025-11-22T10:11:14.811+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1349) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:13.812596+0000 osd.1 (osd.1) 1349 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:45.620778+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1350 sent 1349 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:14.811420+0000 osd.1 (osd.1) 1350 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8036> 2025-11-22T10:11:15.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1350) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:14.811420+0000 osd.1 (osd.1) 1350 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:46.621025+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1351 sent 1350 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:15.799489+0000 osd.1 (osd.1) 1351 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8025> 2025-11-22T10:11:16.807+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1351) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:15.799489+0000 osd.1 (osd.1) 1351 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:47.621363+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1352 sent 1351 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:16.808347+0000 osd.1 (osd.1) 1352 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8014> 2025-11-22T10:11:17.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1352) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:16.808347+0000 osd.1 (osd.1) 1352 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:48.621562+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1353 sent 1352 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:17.841650+0000 osd.1 (osd.1) 1353 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -8003> 2025-11-22T10:11:18.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1353) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:17.841650+0000 osd.1 (osd.1) 1353 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:49.621866+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1354 sent 1353 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:18.877702+0000 osd.1 (osd.1) 1354 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7989> 2025-11-22T10:11:19.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1354) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:18.877702+0000 osd.1 (osd.1) 1354 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:50.622186+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1355 sent 1354 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:19.864506+0000 osd.1 (osd.1) 1355 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7977> 2025-11-22T10:11:20.872+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1355) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:19.864506+0000 osd.1 (osd.1) 1355 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:51.622483+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1356 sent 1355 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:20.873209+0000 osd.1 (osd.1) 1356 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7965> 2025-11-22T10:11:21.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1356) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:20.873209+0000 osd.1 (osd.1) 1356 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:52.622754+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1357 sent 1356 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:21.891889+0000 osd.1 (osd.1) 1357 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7953> 2025-11-22T10:11:22.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1357) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:21.891889+0000 osd.1 (osd.1) 1357 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:53.622935+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1358 sent 1357 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:22.891430+0000 osd.1 (osd.1) 1358 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7942> 2025-11-22T10:11:23.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1358) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:22.891430+0000 osd.1 (osd.1) 1358 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369049600 unmapped: 48308224 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:54.623135+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1359 sent 1358 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:23.925153+0000 osd.1 (osd.1) 1359 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7928> 2025-11-22T10:11:24.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1359) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:23.925153+0000 osd.1 (osd.1) 1359 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:55.623479+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1360 sent 1359 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:24.941744+0000 osd.1 (osd.1) 1360 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7916> 2025-11-22T10:11:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1360) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:24.941744+0000 osd.1 (osd.1) 1360 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:56.623777+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1361 sent 1360 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:25.974626+0000 osd.1 (osd.1) 1361 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7905> 2025-11-22T10:11:27.016+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1361) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:25.974626+0000 osd.1 (osd.1) 1361 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:57.624018+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1362 sent 1361 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:27.017038+0000 osd.1 (osd.1) 1362 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7894> 2025-11-22T10:11:28.046+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1362) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:27.017038+0000 osd.1 (osd.1) 1362 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:58.624216+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1363 sent 1362 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:28.048236+0000 osd.1 (osd.1) 1363 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7882> 2025-11-22T10:11:29.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1363) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:28.048236+0000 osd.1 (osd.1) 1363 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:59.624375+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1364 sent 1363 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:29.077542+0000 osd.1 (osd.1) 1364 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7868> 2025-11-22T10:11:30.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1364) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:29.077542+0000 osd.1 (osd.1) 1364 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:00.624652+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1365 sent 1364 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:30.108625+0000 osd.1 (osd.1) 1365 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7857> 2025-11-22T10:11:31.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1365) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:30.108625+0000 osd.1 (osd.1) 1365 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:01.624854+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1366 sent 1365 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:31.146171+0000 osd.1 (osd.1) 1366 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7846> 2025-11-22T10:11:32.189+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1366) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:31.146171+0000 osd.1 (osd.1) 1366 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:02.625057+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1367 sent 1366 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:32.190411+0000 osd.1 (osd.1) 1367 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7835> 2025-11-22T10:11:33.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1367) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:32.190411+0000 osd.1 (osd.1) 1367 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:03.625263+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1368 sent 1367 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:33.147112+0000 osd.1 (osd.1) 1368 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7823> 2025-11-22T10:11:34.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1368) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:33.147112+0000 osd.1 (osd.1) 1368 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:04.625509+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1369 sent 1368 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:34.176590+0000 osd.1 (osd.1) 1369 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7809> 2025-11-22T10:11:35.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1369) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:34.176590+0000 osd.1 (osd.1) 1369 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:05.625798+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1370 sent 1369 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:35.163204+0000 osd.1 (osd.1) 1370 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7798> 2025-11-22T10:11:36.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1370) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:35.163204+0000 osd.1 (osd.1) 1370 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:06.626088+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1371 sent 1370 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:36.164044+0000 osd.1 (osd.1) 1371 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7787> 2025-11-22T10:11:37.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1371) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:36.164044+0000 osd.1 (osd.1) 1371 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369090560 unmapped: 48267264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:07.626305+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1372 sent 1371 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:37.139051+0000 osd.1 (osd.1) 1372 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7776> 2025-11-22T10:11:38.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1372) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:37.139051+0000 osd.1 (osd.1) 1372 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:08.626574+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1373 sent 1372 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:38.177388+0000 osd.1 (osd.1) 1373 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7764> 2025-11-22T10:11:39.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1373) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:38.177388+0000 osd.1 (osd.1) 1373 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:09.626822+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1374 sent 1373 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:39.134998+0000 osd.1 (osd.1) 1374 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7750> 2025-11-22T10:11:40.105+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1374) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:39.134998+0000 osd.1 (osd.1) 1374 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:10.627042+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1375 sent 1374 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:40.106392+0000 osd.1 (osd.1) 1375 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7738> 2025-11-22T10:11:41.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1375) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:40.106392+0000 osd.1 (osd.1) 1375 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:11.627277+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1376 sent 1375 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:41.080368+0000 osd.1 (osd.1) 1376 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7726> 2025-11-22T10:11:42.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1376) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:41.080368+0000 osd.1 (osd.1) 1376 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:12.627563+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1377 sent 1376 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:42.103548+0000 osd.1 (osd.1) 1377 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7715> 2025-11-22T10:11:43.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1377) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:42.103548+0000 osd.1 (osd.1) 1377 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:13.627766+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1378 sent 1377 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:43.066121+0000 osd.1 (osd.1) 1378 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7704> 2025-11-22T10:11:44.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1378) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:43.066121+0000 osd.1 (osd.1) 1378 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:14.627941+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1379 sent 1378 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:44.095518+0000 osd.1 (osd.1) 1379 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7689> 2025-11-22T10:11:45.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1379) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:44.095518+0000 osd.1 (osd.1) 1379 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369106944 unmapped: 48250880 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:15.628129+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1380 sent 1379 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:45.052751+0000 osd.1 (osd.1) 1380 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7677> 2025-11-22T10:11:46.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1380) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:45.052751+0000 osd.1 (osd.1) 1380 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369115136 unmapped: 48242688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:16.628399+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1381 sent 1380 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:46.045500+0000 osd.1 (osd.1) 1381 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7666> 2025-11-22T10:11:47.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1381) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:46.045500+0000 osd.1 (osd.1) 1381 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369115136 unmapped: 48242688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:17.628645+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1382 sent 1381 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:47.032066+0000 osd.1 (osd.1) 1382 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7655> 2025-11-22T10:11:47.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1382) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:47.032066+0000 osd.1 (osd.1) 1382 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369123328 unmapped: 48234496 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:18.628841+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1383 sent 1382 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:47.988527+0000 osd.1 (osd.1) 1383 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7643> 2025-11-22T10:11:48.954+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1383) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:47.988527+0000 osd.1 (osd.1) 1383 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:19.629071+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1384 sent 1383 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:48.954907+0000 osd.1 (osd.1) 1384 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7628> 2025-11-22T10:11:49.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1384) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:48.954907+0000 osd.1 (osd.1) 1384 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:20.629297+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1385 sent 1384 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:49.932894+0000 osd.1 (osd.1) 1385 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7617> 2025-11-22T10:11:50.900+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1385) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:49.932894+0000 osd.1 (osd.1) 1385 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:21.629700+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1386 sent 1385 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:50.900819+0000 osd.1 (osd.1) 1386 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7605> 2025-11-22T10:11:51.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1386) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:50.900819+0000 osd.1 (osd.1) 1386 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:22.630163+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1387 sent 1386 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:51.905261+0000 osd.1 (osd.1) 1387 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7594> 2025-11-22T10:11:52.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1387) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:51.905261+0000 osd.1 (osd.1) 1387 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:23.630365+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1388 sent 1387 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:52.890451+0000 osd.1 (osd.1) 1388 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7582> 2025-11-22T10:11:53.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1388) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:52.890451+0000 osd.1 (osd.1) 1388 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:24.630584+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1389 sent 1388 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:53.860630+0000 osd.1 (osd.1) 1389 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7567> 2025-11-22T10:11:54.886+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1389) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:53.860630+0000 osd.1 (osd.1) 1389 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:25.630930+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1390 sent 1389 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:54.887120+0000 osd.1 (osd.1) 1390 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7556> 2025-11-22T10:11:55.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:26.631183+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1391 sent 1390 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:55.917194+0000 osd.1 (osd.1) 1391 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1390) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:54.887120+0000 osd.1 (osd.1) 1390 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7544> 2025-11-22T10:11:56.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:27.631406+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1392 sent 1391 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:56.951782+0000 osd.1 (osd.1) 1392 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1391) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:55.917194+0000 osd.1 (osd.1) 1391 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1392) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:56.951782+0000 osd.1 (osd.1) 1392 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7531> 2025-11-22T10:11:57.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:28.631622+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1393 sent 1392 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:57.982663+0000 osd.1 (osd.1) 1393 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1393) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:57.982663+0000 osd.1 (osd.1) 1393 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7520> 2025-11-22T10:11:58.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:29.631824+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1394 sent 1393 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:11:58.984794+0000 osd.1 (osd.1) 1394 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1394) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:11:58.984794+0000 osd.1 (osd.1) 1394 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7505> 2025-11-22T10:12:00.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:30.632162+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1395 sent 1394 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:00.011521+0000 osd.1 (osd.1) 1395 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1395) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:00.011521+0000 osd.1 (osd.1) 1395 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7494> 2025-11-22T10:12:00.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:31.632448+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1396 sent 1395 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:00.979522+0000 osd.1 (osd.1) 1396 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1396) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:00.979522+0000 osd.1 (osd.1) 1396 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7482> 2025-11-22T10:12:01.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:32.632654+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1397 sent 1396 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:01.973763+0000 osd.1 (osd.1) 1397 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1397) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:01.973763+0000 osd.1 (osd.1) 1397 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7471> 2025-11-22T10:12:02.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:33.632836+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1398 sent 1397 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:02.972229+0000 osd.1 (osd.1) 1398 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1398) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:02.972229+0000 osd.1 (osd.1) 1398 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7460> 2025-11-22T10:12:03.985+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:34.633053+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1399 sent 1398 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:03.986414+0000 osd.1 (osd.1) 1399 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1399) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:03.986414+0000 osd.1 (osd.1) 1399 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7446> 2025-11-22T10:12:04.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:35.633305+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1400 sent 1399 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:04.959860+0000 osd.1 (osd.1) 1400 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1400) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:04.959860+0000 osd.1 (osd.1) 1400 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7434> 2025-11-22T10:12:05.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:36.633592+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1401 sent 1400 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:05.968579+0000 osd.1 (osd.1) 1401 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1401) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:05.968579+0000 osd.1 (osd.1) 1401 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7423> 2025-11-22T10:12:06.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:37.633816+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1402 sent 1401 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:06.962618+0000 osd.1 (osd.1) 1402 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1402) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:06.962618+0000 osd.1 (osd.1) 1402 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7412> 2025-11-22T10:12:07.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:38.633983+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1403 sent 1402 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:07.984944+0000 osd.1 (osd.1) 1403 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369172480 unmapped: 48185344 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1403) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:07.984944+0000 osd.1 (osd.1) 1403 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7401> 2025-11-22T10:12:09.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:39.634189+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1404 sent 1403 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:09.031577+0000 osd.1 (osd.1) 1404 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1404) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:09.031577+0000 osd.1 (osd.1) 1404 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7387> 2025-11-22T10:12:10.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:40.634373+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1405 sent 1404 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:10.007971+0000 osd.1 (osd.1) 1405 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1405) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:10.007971+0000 osd.1 (osd.1) 1405 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7376> 2025-11-22T10:12:10.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:41.634541+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1406 sent 1405 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:10.988681+0000 osd.1 (osd.1) 1406 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1406) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:10.988681+0000 osd.1 (osd.1) 1406 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7364> 2025-11-22T10:12:11.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:42.634697+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1407 sent 1406 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:11.981130+0000 osd.1 (osd.1) 1407 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1407) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:11.981130+0000 osd.1 (osd.1) 1407 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7353> 2025-11-22T10:12:13.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:43.634868+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1408 sent 1407 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:13.023890+0000 osd.1 (osd.1) 1408 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1408) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:13.023890+0000 osd.1 (osd.1) 1408 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7342> 2025-11-22T10:12:13.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:44.635126+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1409 sent 1408 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:13.978615+0000 osd.1 (osd.1) 1409 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1409) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:13.978615+0000 osd.1 (osd.1) 1409 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7327> 2025-11-22T10:12:14.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:45.635370+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1410 sent 1409 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:14.981297+0000 osd.1 (osd.1) 1410 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1410) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:14.981297+0000 osd.1 (osd.1) 1410 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7316> 2025-11-22T10:12:15.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:46.635579+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1411 sent 1410 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:15.936354+0000 osd.1 (osd.1) 1411 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1411) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:15.936354+0000 osd.1 (osd.1) 1411 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7304> 2025-11-22T10:12:16.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:47.635844+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1412 sent 1411 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:16.943050+0000 osd.1 (osd.1) 1412 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7295> 2025-11-22T10:12:17.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1412) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:16.943050+0000 osd.1 (osd.1) 1412 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:48.636197+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1413 sent 1412 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:17.910714+0000 osd.1 (osd.1) 1413 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1413) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:17.910714+0000 osd.1 (osd.1) 1413 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7281> 2025-11-22T10:12:18.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:49.636469+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1414 sent 1413 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:18.959908+0000 osd.1 (osd.1) 1414 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1414) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:18.959908+0000 osd.1 (osd.1) 1414 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7267> 2025-11-22T10:12:19.992+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:50.636807+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1415 sent 1414 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:19.993570+0000 osd.1 (osd.1) 1415 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7258> 2025-11-22T10:12:20.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1415) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:19.993570+0000 osd.1 (osd.1) 1415 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:51.636963+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1416 sent 1415 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:20.949993+0000 osd.1 (osd.1) 1416 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7247> 2025-11-22T10:12:21.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1416) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:20.949993+0000 osd.1 (osd.1) 1416 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:52.637122+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1417 sent 1416 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:21.984706+0000 osd.1 (osd.1) 1417 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7236> 2025-11-22T10:12:22.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1417) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:21.984706+0000 osd.1 (osd.1) 1417 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:53.637371+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1418 sent 1417 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:22.953961+0000 osd.1 (osd.1) 1418 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7225> 2025-11-22T10:12:23.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1418) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:22.953961+0000 osd.1 (osd.1) 1418 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:54.637584+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1419 sent 1418 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:23.990740+0000 osd.1 (osd.1) 1419 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7210> 2025-11-22T10:12:24.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1419) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:23.990740+0000 osd.1 (osd.1) 1419 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:55.637821+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1420 sent 1419 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:24.974775+0000 osd.1 (osd.1) 1420 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369221632 unmapped: 48136192 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7199> 2025-11-22T10:12:25.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1420) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:24.974775+0000 osd.1 (osd.1) 1420 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:56.638003+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1421 sent 1420 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:25.944305+0000 osd.1 (osd.1) 1421 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369221632 unmapped: 48136192 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7188> 2025-11-22T10:12:26.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1421) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:25.944305+0000 osd.1 (osd.1) 1421 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:57.638213+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1422 sent 1421 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:26.909625+0000 osd.1 (osd.1) 1422 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7176> 2025-11-22T10:12:27.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1422) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:26.909625+0000 osd.1 (osd.1) 1422 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:58.638479+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1423 sent 1422 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:27.878061+0000 osd.1 (osd.1) 1423 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7165> 2025-11-22T10:12:28.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1423) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:27.878061+0000 osd.1 (osd.1) 1423 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:59.638698+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1424 sent 1423 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:28.838688+0000 osd.1 (osd.1) 1424 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7151> 2025-11-22T10:12:29.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1424) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:28.838688+0000 osd.1 (osd.1) 1424 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:00.638926+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1425 sent 1424 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:29.883064+0000 osd.1 (osd.1) 1425 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7140> 2025-11-22T10:12:30.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1425) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:29.883064+0000 osd.1 (osd.1) 1425 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:01.639130+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1426 sent 1425 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:30.917872+0000 osd.1 (osd.1) 1426 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7128> 2025-11-22T10:12:31.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1426) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:30.917872+0000 osd.1 (osd.1) 1426 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:02.639392+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1427 sent 1426 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:31.957497+0000 osd.1 (osd.1) 1427 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7117> 2025-11-22T10:12:32.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1427) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:31.957497+0000 osd.1 (osd.1) 1427 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:03.639654+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1428 sent 1427 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:32.966056+0000 osd.1 (osd.1) 1428 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7106> 2025-11-22T10:12:33.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1428) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:32.966056+0000 osd.1 (osd.1) 1428 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:04.639902+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1429 sent 1428 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:33.933281+0000 osd.1 (osd.1) 1429 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7092> 2025-11-22T10:12:34.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1429) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:33.933281+0000 osd.1 (osd.1) 1429 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:05.640115+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1430 sent 1429 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:34.976453+0000 osd.1 (osd.1) 1430 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7080> 2025-11-22T10:12:35.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1430) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:34.976453+0000 osd.1 (osd.1) 1430 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:06.640274+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1431 sent 1430 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:35.952134+0000 osd.1 (osd.1) 1431 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7069> 2025-11-22T10:12:36.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1431) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:35.952134+0000 osd.1 (osd.1) 1431 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:07.640524+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1432 sent 1431 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:36.983667+0000 osd.1 (osd.1) 1432 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7058> 2025-11-22T10:12:37.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1432) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:36.983667+0000 osd.1 (osd.1) 1432 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:08.640692+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1433 sent 1432 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:37.948950+0000 osd.1 (osd.1) 1433 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7047> 2025-11-22T10:12:38.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1433) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:37.948950+0000 osd.1 (osd.1) 1433 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:09.640865+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1434 sent 1433 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:38.934296+0000 osd.1 (osd.1) 1434 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7033> 2025-11-22T10:12:39.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1434) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:38.934296+0000 osd.1 (osd.1) 1434 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:10.641062+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1435 sent 1434 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:39.906077+0000 osd.1 (osd.1) 1435 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7021> 2025-11-22T10:12:40.861+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1435) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:39.906077+0000 osd.1 (osd.1) 1435 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:11.641259+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1436 sent 1435 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:40.862369+0000 osd.1 (osd.1) 1436 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -7009> 2025-11-22T10:12:41.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1436) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:40.862369+0000 osd.1 (osd.1) 1436 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:12.641539+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1437 sent 1436 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:41.864464+0000 osd.1 (osd.1) 1437 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6998> 2025-11-22T10:12:42.867+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1437) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:41.864464+0000 osd.1 (osd.1) 1437 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:13.641783+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1438 sent 1437 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:42.868547+0000 osd.1 (osd.1) 1438 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6987> 2025-11-22T10:12:43.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1438) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:42.868547+0000 osd.1 (osd.1) 1438 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:14.642068+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1439 sent 1438 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:43.848018+0000 osd.1 (osd.1) 1439 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6977> 2025-11-22T10:12:44.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1439) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:43.848018+0000 osd.1 (osd.1) 1439 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:15.642258+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1440 sent 1439 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:44.826425+0000 osd.1 (osd.1) 1440 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6963> 2025-11-22T10:12:45.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1440) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:44.826425+0000 osd.1 (osd.1) 1440 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:16.642499+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1441 sent 1440 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:45.844716+0000 osd.1 (osd.1) 1441 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6951> 2025-11-22T10:12:46.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1441) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:45.844716+0000 osd.1 (osd.1) 1441 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:17.642761+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1442 sent 1441 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:46.813057+0000 osd.1 (osd.1) 1442 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6940> 2025-11-22T10:12:47.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1442) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:46.813057+0000 osd.1 (osd.1) 1442 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:18.642982+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1443 sent 1442 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:47.774206+0000 osd.1 (osd.1) 1443 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6928> 2025-11-22T10:12:48.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1443) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:47.774206+0000 osd.1 (osd.1) 1443 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:19.643211+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1444 sent 1443 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:48.783999+0000 osd.1 (osd.1) 1444 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6917> 2025-11-22T10:12:49.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1444) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:48.783999+0000 osd.1 (osd.1) 1444 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:20.643482+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1445 sent 1444 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:49.805421+0000 osd.1 (osd.1) 1445 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6903> 2025-11-22T10:12:50.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1445) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:49.805421+0000 osd.1 (osd.1) 1445 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:21.643816+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1446 sent 1445 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:50.756998+0000 osd.1 (osd.1) 1446 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6892> 2025-11-22T10:12:51.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1446) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:50.756998+0000 osd.1 (osd.1) 1446 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:22.644038+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1447 sent 1446 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:51.764928+0000 osd.1 (osd.1) 1447 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6881> 2025-11-22T10:12:52.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1447) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:51.764928+0000 osd.1 (osd.1) 1447 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:23.644257+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1448 sent 1447 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:52.800503+0000 osd.1 (osd.1) 1448 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6870> 2025-11-22T10:12:53.822+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1448) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:52.800503+0000 osd.1 (osd.1) 1448 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:24.644535+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1449 sent 1448 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:53.823450+0000 osd.1 (osd.1) 1449 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6858> 2025-11-22T10:12:54.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1449) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:53.823450+0000 osd.1 (osd.1) 1449 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:25.644924+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1450 sent 1449 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:54.861471+0000 osd.1 (osd.1) 1450 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6843> 2025-11-22T10:12:55.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:26.645132+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1451 sent 1450 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:55.906306+0000 osd.1 (osd.1) 1451 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6834> 2025-11-22T10:12:56.875+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1450) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:54.861471+0000 osd.1 (osd.1) 1450 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:27.645426+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1452 sent 1451 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:56.875882+0000 osd.1 (osd.1) 1452 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6823> 2025-11-22T10:12:57.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1451) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:55.906306+0000 osd.1 (osd.1) 1451 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1452) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:56.875882+0000 osd.1 (osd.1) 1452 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:28.645735+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1453 sent 1452 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:57.842407+0000 osd.1 (osd.1) 1453 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6810> 2025-11-22T10:12:58.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1453) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:57.842407+0000 osd.1 (osd.1) 1453 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:29.645954+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1454 sent 1453 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:58.871082+0000 osd.1 (osd.1) 1454 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6799> 2025-11-22T10:12:59.829+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1454) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:58.871082+0000 osd.1 (osd.1) 1454 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:30.646181+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1455 sent 1454 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:12:59.830715+0000 osd.1 (osd.1) 1455 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6785> 2025-11-22T10:13:00.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1455) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:12:59.830715+0000 osd.1 (osd.1) 1455 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:31.646396+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1456 sent 1455 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:00.848144+0000 osd.1 (osd.1) 1456 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6774> 2025-11-22T10:13:01.833+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1456) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:00.848144+0000 osd.1 (osd.1) 1456 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:32.646602+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1457 sent 1456 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:01.834436+0000 osd.1 (osd.1) 1457 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6762> 2025-11-22T10:13:02.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1457) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:01.834436+0000 osd.1 (osd.1) 1457 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:33.646780+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1458 sent 1457 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:02.794487+0000 osd.1 (osd.1) 1458 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6751> 2025-11-22T10:13:03.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1458) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:02.794487+0000 osd.1 (osd.1) 1458 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:34.646977+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1459 sent 1458 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:03.832460+0000 osd.1 (osd.1) 1459 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6740> 2025-11-22T10:13:04.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 48046080 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1459) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:03.832460+0000 osd.1 (osd.1) 1459 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:35.647212+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1460 sent 1459 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:04.875873+0000 osd.1 (osd.1) 1460 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6726> 2025-11-22T10:13:05.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 48037888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1460) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:04.875873+0000 osd.1 (osd.1) 1460 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:36.647390+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1461 sent 1460 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:05.847587+0000 osd.1 (osd.1) 1461 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6715> 2025-11-22T10:13:06.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 48037888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1461) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:05.847587+0000 osd.1 (osd.1) 1461 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:37.647591+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1462 sent 1461 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:06.832876+0000 osd.1 (osd.1) 1462 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6703> 2025-11-22T10:13:07.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1462) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:06.832876+0000 osd.1 (osd.1) 1462 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:38.647916+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1463 sent 1462 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:07.802267+0000 osd.1 (osd.1) 1463 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6691> 2025-11-22T10:13:08.754+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1463) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:07.802267+0000 osd.1 (osd.1) 1463 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:39.648259+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1464 sent 1463 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:08.755993+0000 osd.1 (osd.1) 1464 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6680> 2025-11-22T10:13:09.766+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1464) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:08.755993+0000 osd.1 (osd.1) 1464 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:40.648520+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1465 sent 1464 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:09.767357+0000 osd.1 (osd.1) 1465 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6666> 2025-11-22T10:13:10.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1465) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:09.767357+0000 osd.1 (osd.1) 1465 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:41.648708+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1466 sent 1465 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:10.780503+0000 osd.1 (osd.1) 1466 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6655> 2025-11-22T10:13:11.798+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1466) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:10.780503+0000 osd.1 (osd.1) 1466 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:42.648937+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1467 sent 1466 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:11.799186+0000 osd.1 (osd.1) 1467 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6643> 2025-11-22T10:13:12.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369336320 unmapped: 48021504 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1467) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:11.799186+0000 osd.1 (osd.1) 1467 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:43.649125+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1468 sent 1467 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:12.806178+0000 osd.1 (osd.1) 1468 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6632> 2025-11-22T10:13:13.771+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1468) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:12.806178+0000 osd.1 (osd.1) 1468 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:44.649300+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1469 sent 1468 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:13.771942+0000 osd.1 (osd.1) 1469 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6621> 2025-11-22T10:13:14.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1469) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:13.771942+0000 osd.1 (osd.1) 1469 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:45.649568+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1470 sent 1469 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:14.773727+0000 osd.1 (osd.1) 1470 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6606> 2025-11-22T10:13:15.767+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1470) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:14.773727+0000 osd.1 (osd.1) 1470 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:46.649744+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1471 sent 1470 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:15.768210+0000 osd.1 (osd.1) 1471 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6595> 2025-11-22T10:13:16.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1471) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:15.768210+0000 osd.1 (osd.1) 1471 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:47.649956+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1472 sent 1471 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:16.731368+0000 osd.1 (osd.1) 1472 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6584> 2025-11-22T10:13:17.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1472) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:16.731368+0000 osd.1 (osd.1) 1472 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:48.650271+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1473 sent 1472 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:17.712523+0000 osd.1 (osd.1) 1473 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6572> 2025-11-22T10:13:18.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1473) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:17.712523+0000 osd.1 (osd.1) 1473 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:49.650506+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1474 sent 1473 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:18.686090+0000 osd.1 (osd.1) 1474 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6561> 2025-11-22T10:13:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1474) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:18.686090+0000 osd.1 (osd.1) 1474 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:50.650716+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1475 sent 1474 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:19.693398+0000 osd.1 (osd.1) 1475 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6546> 2025-11-22T10:13:20.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1475) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:19.693398+0000 osd.1 (osd.1) 1475 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:51.650970+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1476 sent 1475 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:20.662746+0000 osd.1 (osd.1) 1476 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6534> 2025-11-22T10:13:21.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1476) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:20.662746+0000 osd.1 (osd.1) 1476 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:52.651233+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1477 sent 1476 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:21.708141+0000 osd.1 (osd.1) 1477 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6522> 2025-11-22T10:13:22.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1477) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:21.708141+0000 osd.1 (osd.1) 1477 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:53.651561+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1478 sent 1477 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:22.715131+0000 osd.1 (osd.1) 1478 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6510> 2025-11-22T10:13:23.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1478) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:22.715131+0000 osd.1 (osd.1) 1478 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:54.651725+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1479 sent 1478 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:23.693966+0000 osd.1 (osd.1) 1479 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6499> 2025-11-22T10:13:24.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1479) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:23.693966+0000 osd.1 (osd.1) 1479 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:55.651874+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1480 sent 1479 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:24.707566+0000 osd.1 (osd.1) 1480 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6485> 2025-11-22T10:13:25.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1480) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:24.707566+0000 osd.1 (osd.1) 1480 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:56.652050+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1481 sent 1480 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:25.663055+0000 osd.1 (osd.1) 1481 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6473> 2025-11-22T10:13:26.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1481) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:25.663055+0000 osd.1 (osd.1) 1481 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:57.652290+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1482 sent 1481 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:26.696820+0000 osd.1 (osd.1) 1482 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6461> 2025-11-22T10:13:27.697+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369377280 unmapped: 47980544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1482) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:26.696820+0000 osd.1 (osd.1) 1482 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:58.652543+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1483 sent 1482 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:27.698240+0000 osd.1 (osd.1) 1483 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6450> 2025-11-22T10:13:28.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369377280 unmapped: 47980544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1483) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:27.698240+0000 osd.1 (osd.1) 1483 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:59.652742+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1484 sent 1483 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:28.681643+0000 osd.1 (osd.1) 1484 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6439> 2025-11-22T10:13:29.657+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1484) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:28.681643+0000 osd.1 (osd.1) 1484 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6431> 2025-11-22T10:13:30.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:00.652967+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1486 sent 1484 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:29.658513+0000 osd.1 (osd.1) 1485 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:30.622098+0000 osd.1 (osd.1) 1486 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1486) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:29.658513+0000 osd.1 (osd.1) 1485 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:30.622098+0000 osd.1 (osd.1) 1486 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6418> 2025-11-22T10:13:31.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:01.653174+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1487 sent 1486 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:31.649000+0000 osd.1 (osd.1) 1487 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1487) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:31.649000+0000 osd.1 (osd.1) 1487 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:02.653378+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6404> 2025-11-22T10:13:32.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:03.653535+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1488 sent 1487 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:32.664284+0000 osd.1 (osd.1) 1488 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6394> 2025-11-22T10:13:33.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1488) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:32.664284+0000 osd.1 (osd.1) 1488 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6389> 2025-11-22T10:13:34.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:04.653735+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1490 sent 1488 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:33.665656+0000 osd.1 (osd.1) 1489 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:34.632012+0000 osd.1 (osd.1) 1490 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1490) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:33.665656+0000 osd.1 (osd.1) 1489 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:34.632012+0000 osd.1 (osd.1) 1490 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6373> 2025-11-22T10:13:35.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:05.653939+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1491 sent 1490 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:35.653354+0000 osd.1 (osd.1) 1491 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1491) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:35.653354+0000 osd.1 (osd.1) 1491 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:06.654154+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6359> 2025-11-22T10:13:36.686+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369393664 unmapped: 47964160 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:07.654360+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1492 sent 1491 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:36.687842+0000 osd.1 (osd.1) 1492 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6350> 2025-11-22T10:13:37.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369401856 unmapped: 47955968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1492) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:36.687842+0000 osd.1 (osd.1) 1492 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:08.654577+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1493 sent 1492 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:37.655966+0000 osd.1 (osd.1) 1493 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6338> 2025-11-22T10:13:38.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369410048 unmapped: 47947776 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1493) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:37.655966+0000 osd.1 (osd.1) 1493 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:09.654758+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1494 sent 1493 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:38.694127+0000 osd.1 (osd.1) 1494 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6326> 2025-11-22T10:13:39.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1494) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:38.694127+0000 osd.1 (osd.1) 1494 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:10.655000+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1495 sent 1494 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:39.686499+0000 osd.1 (osd.1) 1495 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6312> 2025-11-22T10:13:40.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1495) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:39.686499+0000 osd.1 (osd.1) 1495 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:11.655264+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1496 sent 1495 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:40.702162+0000 osd.1 (osd.1) 1496 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6301> 2025-11-22T10:13:41.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1496) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:40.702162+0000 osd.1 (osd.1) 1496 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:12.655590+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1497 sent 1496 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:41.718065+0000 osd.1 (osd.1) 1497 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6290> 2025-11-22T10:13:42.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1497) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:41.718065+0000 osd.1 (osd.1) 1497 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6285> 2025-11-22T10:13:43.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:13.655777+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1499 sent 1497 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:42.680189+0000 osd.1 (osd.1) 1498 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:43.638355+0000 osd.1 (osd.1) 1499 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1499) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:42.680189+0000 osd.1 (osd.1) 1498 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:43.638355+0000 osd.1 (osd.1) 1499 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6271> 2025-11-22T10:13:44.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:14.655976+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1500 sent 1499 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:44.612527+0000 osd.1 (osd.1) 1500 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369418240 unmapped: 47939584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1500) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:44.612527+0000 osd.1 (osd.1) 1500 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6257> 2025-11-22T10:13:45.645+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:15.656216+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1501 sent 1500 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:45.646967+0000 osd.1 (osd.1) 1501 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369434624 unmapped: 47923200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1501) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:45.646967+0000 osd.1 (osd.1) 1501 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6246> 2025-11-22T10:13:46.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:16.656409+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1502 sent 1501 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:46.615743+0000 osd.1 (osd.1) 1502 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1502) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:46.615743+0000 osd.1 (osd.1) 1502 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6235> 2025-11-22T10:13:47.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:17.656604+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1503 sent 1502 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:47.622746+0000 osd.1 (osd.1) 1503 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1503) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:47.622746+0000 osd.1 (osd.1) 1503 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6223> 2025-11-22T10:13:48.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:18.656816+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1504 sent 1503 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:48.602170+0000 osd.1 (osd.1) 1504 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1504) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:48.602170+0000 osd.1 (osd.1) 1504 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6212> 2025-11-22T10:13:49.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:19.657047+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1505 sent 1504 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:49.649220+0000 osd.1 (osd.1) 1505 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1505) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:49.649220+0000 osd.1 (osd.1) 1505 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6197> 2025-11-22T10:13:50.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:20.657292+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1506 sent 1505 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:50.611509+0000 osd.1 (osd.1) 1506 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1506) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:50.611509+0000 osd.1 (osd.1) 1506 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6186> 2025-11-22T10:13:51.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:21.657589+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1507 sent 1506 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:51.616985+0000 osd.1 (osd.1) 1507 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1507) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:51.616985+0000 osd.1 (osd.1) 1507 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6175> 2025-11-22T10:13:52.575+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:22.657817+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1508 sent 1507 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:52.575557+0000 osd.1 (osd.1) 1508 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369442816 unmapped: 47915008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1508) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:52.575557+0000 osd.1 (osd.1) 1508 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6164> 2025-11-22T10:13:53.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:23.658013+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1509 sent 1508 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:53.596149+0000 osd.1 (osd.1) 1509 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1509) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:53.596149+0000 osd.1 (osd.1) 1509 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6152> 2025-11-22T10:13:54.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:24.658243+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1510 sent 1509 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:54.609185+0000 osd.1 (osd.1) 1510 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 179K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.83 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 274 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a31090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1510) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:54.609185+0000 osd.1 (osd.1) 1510 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6136> 2025-11-22T10:13:55.630+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:25.658574+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1511 sent 1510 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:55.630840+0000 osd.1 (osd.1) 1511 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1511) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:55.630840+0000 osd.1 (osd.1) 1511 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6125> 2025-11-22T10:13:56.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:26.658773+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1512 sent 1511 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:56.591931+0000 osd.1 (osd.1) 1512 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1512) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:56.591931+0000 osd.1 (osd.1) 1512 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6113> 2025-11-22T10:13:57.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:27.658972+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1513 sent 1512 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:57.629780+0000 osd.1 (osd.1) 1513 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1513) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:57.629780+0000 osd.1 (osd.1) 1513 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6102> 2025-11-22T10:13:58.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:28.659153+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1514 sent 1513 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:58.649348+0000 osd.1 (osd.1) 1514 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1514) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:58.649348+0000 osd.1 (osd.1) 1514 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6090> 2025-11-22T10:13:59.651+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:29.659338+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1515 sent 1514 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:13:59.652298+0000 osd.1 (osd.1) 1515 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1515) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:13:59.652298+0000 osd.1 (osd.1) 1515 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6076> 2025-11-22T10:14:00.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:30.659530+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1516 sent 1515 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:00.616770+0000 osd.1 (osd.1) 1516 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 47906816 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1516) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:00.616770+0000 osd.1 (osd.1) 1516 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6065> 2025-11-22T10:14:01.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:31.659720+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1517 sent 1516 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:01.588875+0000 osd.1 (osd.1) 1517 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6056> 2025-11-22T10:14:02.579+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1517) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:01.588875+0000 osd.1 (osd.1) 1517 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:32.659942+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1518 sent 1517 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:02.579977+0000 osd.1 (osd.1) 1518 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6045> 2025-11-22T10:14:03.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1518) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:02.579977+0000 osd.1 (osd.1) 1518 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:33.660170+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1519 sent 1518 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:03.572158+0000 osd.1 (osd.1) 1519 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6033> 2025-11-22T10:14:04.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1519) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:03.572158+0000 osd.1 (osd.1) 1519 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:34.660406+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1520 sent 1519 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:04.582629+0000 osd.1 (osd.1) 1520 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1520) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:04.582629+0000 osd.1 (osd.1) 1520 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6017> 2025-11-22T10:14:05.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:35.660604+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1521 sent 1520 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:05.617358+0000 osd.1 (osd.1) 1521 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -6007> 2025-11-22T10:14:06.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1521) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:05.617358+0000 osd.1 (osd.1) 1521 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:36.660790+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1522 sent 1521 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:06.581569+0000 osd.1 (osd.1) 1522 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5996> 2025-11-22T10:14:07.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1522) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:06.581569+0000 osd.1 (osd.1) 1522 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:37.661042+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1523 sent 1522 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:07.597573+0000 osd.1 (osd.1) 1523 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5984> 2025-11-22T10:14:08.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1523) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:07.597573+0000 osd.1 (osd.1) 1523 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:38.661230+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1524 sent 1523 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:08.603582+0000 osd.1 (osd.1) 1524 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 47898624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5973> 2025-11-22T10:14:09.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1524) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:08.603582+0000 osd.1 (osd.1) 1524 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:39.661405+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1525 sent 1524 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:09.615306+0000 osd.1 (osd.1) 1525 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369491968 unmapped: 47865856 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5958> 2025-11-22T10:14:10.640+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1525) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:09.615306+0000 osd.1 (osd.1) 1525 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:40.661608+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1526 sent 1525 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:10.642186+0000 osd.1 (osd.1) 1526 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5947> 2025-11-22T10:14:11.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1526) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:10.642186+0000 osd.1 (osd.1) 1526 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:41.661781+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1527 sent 1526 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:11.649866+0000 osd.1 (osd.1) 1527 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1527) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:11.649866+0000 osd.1 (osd.1) 1527 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:42.661992+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5931> 2025-11-22T10:14:12.683+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:43.662137+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1528 sent 1527 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:12.684562+0000 osd.1 (osd.1) 1528 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5922> 2025-11-22T10:14:13.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:44.662387+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1529 sent 1528 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:13.730670+0000 osd.1 (osd.1) 1529 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1528) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:12.684562+0000 osd.1 (osd.1) 1528 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5911> 2025-11-22T10:14:14.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:45.662599+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1530 sent 1529 num 2 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:14.701607+0000 osd.1 (osd.1) 1530 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1529) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:13.730670+0000 osd.1 (osd.1) 1529 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1530) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:14.701607+0000 osd.1 (osd.1) 1530 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5895> 2025-11-22T10:14:15.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:46.662813+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1531 sent 1530 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:15.711803+0000 osd.1 (osd.1) 1531 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5885> 2025-11-22T10:14:16.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1531) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:15.711803+0000 osd.1 (osd.1) 1531 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:47.663016+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1532 sent 1531 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:16.668818+0000 osd.1 (osd.1) 1532 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1532) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:16.668818+0000 osd.1 (osd.1) 1532 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5871> 2025-11-22T10:14:17.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:48.663258+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1533 sent 1532 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:17.702781+0000 osd.1 (osd.1) 1533 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5862> 2025-11-22T10:14:18.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1533) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:17.702781+0000 osd.1 (osd.1) 1533 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:49.663487+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1534 sent 1533 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:18.678960+0000 osd.1 (osd.1) 1534 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5851> 2025-11-22T10:14:19.673+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1534) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:18.678960+0000 osd.1 (osd.1) 1534 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:50.663717+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1535 sent 1534 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:19.674767+0000 osd.1 (osd.1) 1535 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5837> 2025-11-22T10:14:20.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1535) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:19.674767+0000 osd.1 (osd.1) 1535 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5832> 2025-11-22T10:14:21.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:51.663935+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1537 sent 1535 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:20.704167+0000 osd.1 (osd.1) 1536 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:21.659659+0000 osd.1 (osd.1) 1537 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1537) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:20.704167+0000 osd.1 (osd.1) 1536 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:21.659659+0000 osd.1 (osd.1) 1537 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5819> 2025-11-22T10:14:22.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:52.664236+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1538 sent 1537 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:22.654929+0000 osd.1 (osd.1) 1538 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1538) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:22.654929+0000 osd.1 (osd.1) 1538 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:53.664459+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5804> 2025-11-22T10:14:23.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 47841280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5800> 2025-11-22T10:14:24.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:54.664612+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1540 sent 1538 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:23.669709+0000 osd.1 (osd.1) 1539 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:24.656514+0000 osd.1 (osd.1) 1540 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1540) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:23.669709+0000 osd.1 (osd.1) 1539 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:24.656514+0000 osd.1 (osd.1) 1540 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:55.664944+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5781> 2025-11-22T10:14:25.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:56.665106+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1541 sent 1540 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:25.667746+0000 osd.1 (osd.1) 1541 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5772> 2025-11-22T10:14:26.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1541) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:25.667746+0000 osd.1 (osd.1) 1541 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5767> 2025-11-22T10:14:27.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:57.665354+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1543 sent 1541 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:26.700794+0000 osd.1 (osd.1) 1542 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:27.664162+0000 osd.1 (osd.1) 1543 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1543) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:26.700794+0000 osd.1 (osd.1) 1542 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:27.664162+0000 osd.1 (osd.1) 1543 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:58.665626+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5751> 2025-11-22T10:14:28.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:59.665787+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1544 sent 1543 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:28.675286+0000 osd.1 (osd.1) 1544 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5738> 2025-11-22T10:14:29.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1544) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:28.675286+0000 osd.1 (osd.1) 1544 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:00.666009+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1545 sent 1544 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:29.710967+0000 osd.1 (osd.1) 1545 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5724> 2025-11-22T10:14:30.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1545) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:29.710967+0000 osd.1 (osd.1) 1545 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:01.666229+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1546 sent 1545 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:30.712276+0000 osd.1 (osd.1) 1546 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5713> 2025-11-22T10:14:31.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1546) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:30.712276+0000 osd.1 (osd.1) 1546 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369549312 unmapped: 47808512 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5707> 2025-11-22T10:14:32.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:02.666675+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1548 sent 1546 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:31.680435+0000 osd.1 (osd.1) 1547 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:32.639031+0000 osd.1 (osd.1) 1548 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1548) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:31.680435+0000 osd.1 (osd.1) 1547 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:32.639031+0000 osd.1 (osd.1) 1548 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5694> 2025-11-22T10:14:33.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:03.666919+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1549 sent 1548 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:33.629171+0000 osd.1 (osd.1) 1549 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1549) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:33.629171+0000 osd.1 (osd.1) 1549 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5683> 2025-11-22T10:14:34.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:04.667144+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1550 sent 1549 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:34.646969+0000 osd.1 (osd.1) 1550 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1550) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:34.646969+0000 osd.1 (osd.1) 1550 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:05.667383+0000)
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5666> 2025-11-22T10:14:35.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:06.667563+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1551 sent 1550 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:35.668307+0000 osd.1 (osd.1) 1551 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5657> 2025-11-22T10:14:36.670+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1551) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:35.668307+0000 osd.1 (osd.1) 1551 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5651> 2025-11-22T10:14:37.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:07.667753+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1553 sent 1551 num 2 unsent 2 sending 2
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:36.670820+0000 osd.1 (osd.1) 1552 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:37.638965+0000 osd.1 (osd.1) 1553 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1553) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:36.670820+0000 osd.1 (osd.1) 1552 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:37.638965+0000 osd.1 (osd.1) 1553 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5637> 2025-11-22T10:14:38.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:08.667999+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1554 sent 1553 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:38.597152+0000 osd.1 (osd.1) 1554 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1554) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:38.597152+0000 osd.1 (osd.1) 1554 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5626> 2025-11-22T10:14:39.569+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:09.668266+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1555 sent 1554 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:39.570037+0000 osd.1 (osd.1) 1555 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561387229000
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1555) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:39.570037+0000 osd.1 (osd.1) 1555 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5611> 2025-11-22T10:14:40.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:10.668537+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1556 sent 1555 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:40.593833+0000 osd.1 (osd.1) 1556 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1556) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:40.593833+0000 osd.1 (osd.1) 1556 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5600> 2025-11-22T10:14:41.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:11.668804+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1557 sent 1556 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:41.570621+0000 osd.1 (osd.1) 1557 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1557) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:41.570621+0000 osd.1 (osd.1) 1557 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5589> 2025-11-22T10:14:42.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:12.669118+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1558 sent 1557 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:42.537616+0000 osd.1 (osd.1) 1558 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1558) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:42.537616+0000 osd.1 (osd.1) 1558 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5577> 2025-11-22T10:14:43.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:13.669361+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1559 sent 1558 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:43.500722+0000 osd.1 (osd.1) 1559 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1559) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:43.500722+0000 osd.1 (osd.1) 1559 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5565> 2025-11-22T10:14:44.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:14.669618+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1560 sent 1559 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:44.526194+0000 osd.1 (osd.1) 1560 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1560) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:44.526194+0000 osd.1 (osd.1) 1560 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5551> 2025-11-22T10:14:45.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:15.669844+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1561 sent 1560 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:45.522248+0000 osd.1 (osd.1) 1561 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1561) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:45.522248+0000 osd.1 (osd.1) 1561 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5540> 2025-11-22T10:14:46.504+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:16.670080+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1562 sent 1561 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:46.504868+0000 osd.1 (osd.1) 1562 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1562) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:46.504868+0000 osd.1 (osd.1) 1562 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5529> 2025-11-22T10:14:47.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:17.670379+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1563 sent 1562 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:47.541073+0000 osd.1 (osd.1) 1563 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1563) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:47.541073+0000 osd.1 (osd.1) 1563 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 47792128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5518> 2025-11-22T10:14:48.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:18.670559+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1564 sent 1563 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:48.567133+0000 osd.1 (osd.1) 1564 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1564) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:48.567133+0000 osd.1 (osd.1) 1564 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369590272 unmapped: 47767552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5506> 2025-11-22T10:14:49.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:19.670787+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1565 sent 1564 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:49.536635+0000 osd.1 (osd.1) 1565 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1565) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:49.536635+0000 osd.1 (osd.1) 1565 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369590272 unmapped: 47767552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5492> 2025-11-22T10:14:50.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:20.671007+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1566 sent 1565 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:50.538036+0000 osd.1 (osd.1) 1566 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1566) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:50.538036+0000 osd.1 (osd.1) 1566 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5480> 2025-11-22T10:14:51.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:21.671244+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1567 sent 1566 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:51.507220+0000 osd.1 (osd.1) 1567 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1567) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:51.507220+0000 osd.1 (osd.1) 1567 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5469> 2025-11-22T10:14:52.511+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:22.671472+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1568 sent 1567 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:52.513003+0000 osd.1 (osd.1) 1568 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1568) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:52.513003+0000 osd.1 (osd.1) 1568 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5458> 2025-11-22T10:14:53.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:23.671667+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1569 sent 1568 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:53.528783+0000 osd.1 (osd.1) 1569 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1569) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:53.528783+0000 osd.1 (osd.1) 1569 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5447> 2025-11-22T10:14:54.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:24.671836+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1570 sent 1569 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:54.492260+0000 osd.1 (osd.1) 1570 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1570) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:54.492260+0000 osd.1 (osd.1) 1570 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5432> 2025-11-22T10:14:55.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:25.672052+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1571 sent 1570 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:55.528783+0000 osd.1 (osd.1) 1571 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1571) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:55.528783+0000 osd.1 (osd.1) 1571 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5421> 2025-11-22T10:14:56.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:26.672216+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1572 sent 1571 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:56.569178+0000 osd.1 (osd.1) 1572 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1572) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:56.569178+0000 osd.1 (osd.1) 1572 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5410> 2025-11-22T10:14:57.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:27.672360+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1573 sent 1572 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:57.558123+0000 osd.1 (osd.1) 1573 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1573) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:57.558123+0000 osd.1 (osd.1) 1573 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5399> 2025-11-22T10:14:58.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:28.672591+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1574 sent 1573 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:58.555354+0000 osd.1 (osd.1) 1574 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1574) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:58.555354+0000 osd.1 (osd.1) 1574 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,5,1])
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5387> 2025-11-22T10:14:59.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:29.672786+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1575 sent 1574 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:14:59.604830+0000 osd.1 (osd.1) 1575 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1575) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:14:59.604830+0000 osd.1 (osd.1) 1575 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:31 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:31 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5373> 2025-11-22T10:15:00.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:30.673006+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1576 sent 1575 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:00.592520+0000 osd.1 (osd.1) 1576 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1576) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:00.592520+0000 osd.1 (osd.1) 1576 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5362> 2025-11-22T10:15:01.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:31.673208+0000)
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1577 sent 1576 num 1 unsent 1 sending 1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:01.623016+0000 osd.1 (osd.1) 1577 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1577) v1
Nov 22 10:22:31 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:01.623016+0000 osd.1 (osd.1) 1577 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:31 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:31 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5351> 2025-11-22T10:15:02.573+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:32.673464+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1578 sent 1577 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:02.574231+0000 osd.1 (osd.1) 1578 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1578) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:02.574231+0000 osd.1 (osd.1) 1578 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5340> 2025-11-22T10:15:03.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:33.673777+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1579 sent 1578 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:03.602770+0000 osd.1 (osd.1) 1579 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1579) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:03.602770+0000 osd.1 (osd.1) 1579 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369623040 unmapped: 47734784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5329> 2025-11-22T10:15:04.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:34.673970+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1580 sent 1579 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:04.610651+0000 osd.1 (osd.1) 1580 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1580) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:04.610651+0000 osd.1 (osd.1) 1580 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5314> 2025-11-22T10:15:05.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:35.674171+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1581 sent 1580 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:05.595262+0000 osd.1 (osd.1) 1581 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1581) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:05.595262+0000 osd.1 (osd.1) 1581 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5303> 2025-11-22T10:15:06.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:36.674421+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1582 sent 1581 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:06.628945+0000 osd.1 (osd.1) 1582 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1582) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:06.628945+0000 osd.1 (osd.1) 1582 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5292> 2025-11-22T10:15:07.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:37.674647+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1583 sent 1582 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:07.606637+0000 osd.1 (osd.1) 1583 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1583) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:07.606637+0000 osd.1 (osd.1) 1583 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5280> 2025-11-22T10:15:08.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:38.674810+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1584 sent 1583 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:08.584587+0000 osd.1 (osd.1) 1584 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1584) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:08.584587+0000 osd.1 (osd.1) 1584 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5269> 2025-11-22T10:15:09.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:39.675018+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1585 sent 1584 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:09.552230+0000 osd.1 (osd.1) 1585 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1585) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:09.552230+0000 osd.1 (osd.1) 1585 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5255> 2025-11-22T10:15:10.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:40.675207+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1586 sent 1585 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:10.593745+0000 osd.1 (osd.1) 1586 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1586) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:10.593745+0000 osd.1 (osd.1) 1586 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5244> 2025-11-22T10:15:11.578+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:41.675408+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1587 sent 1586 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:11.579761+0000 osd.1 (osd.1) 1587 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1587) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:11.579761+0000 osd.1 (osd.1) 1587 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369631232 unmapped: 47726592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5233> 2025-11-22T10:15:12.572+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:42.675615+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1588 sent 1587 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:12.574037+0000 osd.1 (osd.1) 1588 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1588) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:12.574037+0000 osd.1 (osd.1) 1588 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 47710208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5221> 2025-11-22T10:15:13.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:43.675832+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1589 sent 1588 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:13.606416+0000 osd.1 (osd.1) 1589 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1589) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:13.606416+0000 osd.1 (osd.1) 1589 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 47710208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5210> 2025-11-22T10:15:14.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:44.676105+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1590 sent 1589 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:14.620777+0000 osd.1 (osd.1) 1590 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1590) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:14.620777+0000 osd.1 (osd.1) 1590 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 47710208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5196> 2025-11-22T10:15:15.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:45.676301+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1591 sent 1590 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:15.611659+0000 osd.1 (osd.1) 1591 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1591) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:15.611659+0000 osd.1 (osd.1) 1591 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369647616 unmapped: 47710208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5184> 2025-11-22T10:15:16.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:46.676478+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1592 sent 1591 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:16.570690+0000 osd.1 (osd.1) 1592 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1592) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:16.570690+0000 osd.1 (osd.1) 1592 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369655808 unmapped: 47702016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5173> 2025-11-22T10:15:17.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:47.676630+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1593 sent 1592 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:17.611008+0000 osd.1 (osd.1) 1593 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1593) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:17.611008+0000 osd.1 (osd.1) 1593 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369655808 unmapped: 47702016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5162> 2025-11-22T10:15:18.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:48.676793+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1594 sent 1593 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:18.565740+0000 osd.1 (osd.1) 1594 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1594) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:18.565740+0000 osd.1 (osd.1) 1594 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369655808 unmapped: 47702016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5151> 2025-11-22T10:15:19.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:49.676960+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1595 sent 1594 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:19.562058+0000 osd.1 (osd.1) 1595 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1595) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:19.562058+0000 osd.1 (osd.1) 1595 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369655808 unmapped: 47702016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5137> 2025-11-22T10:15:20.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:50.677189+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1596 sent 1595 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:20.606421+0000 osd.1 (osd.1) 1596 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1596) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:20.606421+0000 osd.1 (osd.1) 1596 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369680384 unmapped: 47677440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.817382812s of 600.161315918s, submitted: 90
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5125> 2025-11-22T10:15:21.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:51.677401+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1597 sent 1596 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:21.607470+0000 osd.1 (osd.1) 1597 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1597) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:21.607470+0000 osd.1 (osd.1) 1597 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369680384 unmapped: 47677440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5113> 2025-11-22T10:15:22.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:52.677638+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1598 sent 1597 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:22.565408+0000 osd.1 (osd.1) 1598 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1598) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:22.565408+0000 osd.1 (osd.1) 1598 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369704960 unmapped: 47652864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5102> 2025-11-22T10:15:23.564+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:53.677923+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1599 sent 1598 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:23.565301+0000 osd.1 (osd.1) 1599 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1599) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:23.565301+0000 osd.1 (osd.1) 1599 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369754112 unmapped: 47603712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5091> 2025-11-22T10:15:24.533+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:54.678205+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1600 sent 1599 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:24.533759+0000 osd.1 (osd.1) 1600 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1600) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:24.533759+0000 osd.1 (osd.1) 1600 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369754112 unmapped: 47603712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5077> 2025-11-22T10:15:25.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:55.678476+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1601 sent 1600 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:25.551652+0000 osd.1 (osd.1) 1601 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369754112 unmapped: 47603712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1601) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:25.551652+0000 osd.1 (osd.1) 1601 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5066> 2025-11-22T10:15:26.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:56.678744+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1602 sent 1601 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:26.509927+0000 osd.1 (osd.1) 1602 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369754112 unmapped: 47603712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1602) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:26.509927+0000 osd.1 (osd.1) 1602 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5054> 2025-11-22T10:15:27.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:57.678969+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1603 sent 1602 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:27.537281+0000 osd.1 (osd.1) 1603 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369754112 unmapped: 47603712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1603) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:27.537281+0000 osd.1 (osd.1) 1603 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5043> 2025-11-22T10:15:28.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:58.679170+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1604 sent 1603 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:28.577935+0000 osd.1 (osd.1) 1604 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1604) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:28.577935+0000 osd.1 (osd.1) 1604 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5032> 2025-11-22T10:15:29.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:59.679409+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1605 sent 1604 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:29.599192+0000 osd.1 (osd.1) 1605 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1605) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:29.599192+0000 osd.1 (osd.1) 1605 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5018> 2025-11-22T10:15:30.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:00.679777+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1606 sent 1605 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:30.567671+0000 osd.1 (osd.1) 1606 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1606) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:30.567671+0000 osd.1 (osd.1) 1606 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -5007> 2025-11-22T10:15:31.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:01.680146+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1607 sent 1606 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:31.571497+0000 osd.1 (osd.1) 1607 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1607) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:31.571497+0000 osd.1 (osd.1) 1607 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4995> 2025-11-22T10:15:32.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:02.680504+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1608 sent 1607 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:32.567207+0000 osd.1 (osd.1) 1608 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1608) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:32.567207+0000 osd.1 (osd.1) 1608 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4984> 2025-11-22T10:15:33.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:03.680940+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1609 sent 1608 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:33.584888+0000 osd.1 (osd.1) 1609 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1609) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:33.584888+0000 osd.1 (osd.1) 1609 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4973> 2025-11-22T10:15:34.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:04.681523+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1610 sent 1609 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:34.541581+0000 osd.1 (osd.1) 1610 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1610) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:34.541581+0000 osd.1 (osd.1) 1610 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4959> 2025-11-22T10:15:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:05.682090+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1611 sent 1610 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:35.569667+0000 osd.1 (osd.1) 1611 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369762304 unmapped: 47595520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1611) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:35.569667+0000 osd.1 (osd.1) 1611 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4948> 2025-11-22T10:15:36.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:06.682588+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1612 sent 1611 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:36.572863+0000 osd.1 (osd.1) 1612 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1612) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:36.572863+0000 osd.1 (osd.1) 1612 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4937> 2025-11-22T10:15:37.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,1])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:07.682959+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1613 sent 1612 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:37.569125+0000 osd.1 (osd.1) 1613 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1613) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:37.569125+0000 osd.1 (osd.1) 1613 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4925> 2025-11-22T10:15:38.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:08.683369+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1614 sent 1613 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:38.567481+0000 osd.1 (osd.1) 1614 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1614) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:38.567481+0000 osd.1 (osd.1) 1614 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4914> 2025-11-22T10:15:39.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:09.683796+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1615 sent 1614 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:39.592489+0000 osd.1 (osd.1) 1615 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1615) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:39.592489+0000 osd.1 (osd.1) 1615 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4900> 2025-11-22T10:15:40.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:10.684067+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1616 sent 1615 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:40.582812+0000 osd.1 (osd.1) 1616 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1616) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:40.582812+0000 osd.1 (osd.1) 1616 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4889> 2025-11-22T10:15:41.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x5613852f7c00
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:11.684376+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1617 sent 1616 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:41.596044+0000 osd.1 (osd.1) 1617 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1617) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:41.596044+0000 osd.1 (osd.1) 1617 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4877> 2025-11-22T10:15:42.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:12.684694+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1618 sent 1617 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:42.602617+0000 osd.1 (osd.1) 1618 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,1,4,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1618) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:42.602617+0000 osd.1 (osd.1) 1618 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4865> 2025-11-22T10:15:43.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:13.684989+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1619 sent 1618 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:43.584809+0000 osd.1 (osd.1) 1619 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369770496 unmapped: 47587328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4856> 2025-11-22T10:15:44.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1619) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:43.584809+0000 osd.1 (osd.1) 1619 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:14.685275+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1620 sent 1619 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:44.552550+0000 osd.1 (osd.1) 1620 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1620) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:44.552550+0000 osd.1 (osd.1) 1620 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4840> 2025-11-22T10:15:45.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:15.685565+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1621 sent 1620 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:45.593104+0000 osd.1 (osd.1) 1621 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1621) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:45.593104+0000 osd.1 (osd.1) 1621 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4829> 2025-11-22T10:15:46.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:16.685854+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1622 sent 1621 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:46.601887+0000 osd.1 (osd.1) 1622 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,1,4,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1622) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:46.601887+0000 osd.1 (osd.1) 1622 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4817> 2025-11-22T10:15:47.626+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:17.686159+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1623 sent 1622 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:47.628632+0000 osd.1 (osd.1) 1623 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1623) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:47.628632+0000 osd.1 (osd.1) 1623 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4806> 2025-11-22T10:15:48.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:18.686527+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1624 sent 1623 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:48.609362+0000 osd.1 (osd.1) 1624 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4797> 2025-11-22T10:15:49.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1624) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:48.609362+0000 osd.1 (osd.1) 1624 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:19.686796+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1625 sent 1624 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:49.607284+0000 osd.1 (osd.1) 1625 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,4,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4782> 2025-11-22T10:15:50.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1625) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:49.607284+0000 osd.1 (osd.1) 1625 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:20.686985+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1626 sent 1625 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:50.586855+0000 osd.1 (osd.1) 1626 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4771> 2025-11-22T10:15:51.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1626) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:50.586855+0000 osd.1 (osd.1) 1626 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:21.687199+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1627 sent 1626 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:51.587646+0000 osd.1 (osd.1) 1627 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369778688 unmapped: 47579136 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4760> 2025-11-22T10:15:52.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1627) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:51.587646+0000 osd.1 (osd.1) 1627 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:22.687408+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1628 sent 1627 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:52.575815+0000 osd.1 (osd.1) 1628 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4749> 2025-11-22T10:15:53.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1628) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:52.575815+0000 osd.1 (osd.1) 1628 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:23.687668+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1629 sent 1628 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:53.610290+0000 osd.1 (osd.1) 1629 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4738> 2025-11-22T10:15:54.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1629) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:53.610290+0000 osd.1 (osd.1) 1629 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:24.687876+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1630 sent 1629 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:54.589801+0000 osd.1 (osd.1) 1630 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4724> 2025-11-22T10:15:55.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:25.688028+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1631 sent 1630 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:55.630747+0000 osd.1 (osd.1) 1631 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1630) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:54.589801+0000 osd.1 (osd.1) 1630 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4712> 2025-11-22T10:15:56.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:26.688175+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1632 sent 1631 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:56.669373+0000 osd.1 (osd.1) 1632 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1631) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:55.630747+0000 osd.1 (osd.1) 1631 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1632) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:56.669373+0000 osd.1 (osd.1) 1632 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4699> 2025-11-22T10:15:57.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:27.688358+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1633 sent 1632 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:57.673574+0000 osd.1 (osd.1) 1633 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1633) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:57.673574+0000 osd.1 (osd.1) 1633 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369795072 unmapped: 47562752 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:28.688574+0000)
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4685> 2025-11-22T10:15:58.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369803264 unmapped: 47554560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:29.688754+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1634 sent 1633 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:58.701931+0000 osd.1 (osd.1) 1634 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4676> 2025-11-22T10:15:59.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1634) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:58.701931+0000 osd.1 (osd.1) 1634 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369803264 unmapped: 47554560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:30.688939+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1635 sent 1634 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:15:59.751218+0000 osd.1 (osd.1) 1635 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4661> 2025-11-22T10:16:00.764+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1635) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:15:59.751218+0000 osd.1 (osd.1) 1635 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:31.689130+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1636 sent 1635 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:00.765271+0000 osd.1 (osd.1) 1636 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4650> 2025-11-22T10:16:01.784+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1636) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:00.765271+0000 osd.1 (osd.1) 1636 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:32.689370+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1637 sent 1636 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:01.784530+0000 osd.1 (osd.1) 1637 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4639> 2025-11-22T10:16:02.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1637) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:01.784530+0000 osd.1 (osd.1) 1637 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:33.689590+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1638 sent 1637 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:02.823504+0000 osd.1 (osd.1) 1638 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4628> 2025-11-22T10:16:03.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1638) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:02.823504+0000 osd.1 (osd.1) 1638 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:34.689850+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1639 sent 1638 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:03.870121+0000 osd.1 (osd.1) 1639 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4617> 2025-11-22T10:16:04.881+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1639) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:03.870121+0000 osd.1 (osd.1) 1639 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:35.690180+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1640 sent 1639 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:04.882292+0000 osd.1 (osd.1) 1640 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4601> 2025-11-22T10:16:05.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1640) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:04.882292+0000 osd.1 (osd.1) 1640 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:36.690420+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1641 sent 1640 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:05.882981+0000 osd.1 (osd.1) 1641 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4589> 2025-11-22T10:16:06.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1641) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:05.882981+0000 osd.1 (osd.1) 1641 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:37.690676+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1642 sent 1641 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:06.859005+0000 osd.1 (osd.1) 1642 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x56138628c000 session 0x561384fd4d20
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561385075000
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4576> 2025-11-22T10:16:07.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1642) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:06.859005+0000 osd.1 (osd.1) 1642 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 47546368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:38.690965+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1643 sent 1642 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:07.869948+0000 osd.1 (osd.1) 1643 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4564> 2025-11-22T10:16:08.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1643) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:07.869948+0000 osd.1 (osd.1) 1643 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369819648 unmapped: 47538176 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:39.691214+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1644 sent 1643 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:08.883023+0000 osd.1 (osd.1) 1644 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4553> 2025-11-22T10:16:09.897+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1644) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:08.883023+0000 osd.1 (osd.1) 1644 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369819648 unmapped: 47538176 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:40.691441+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1645 sent 1644 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:09.898189+0000 osd.1 (osd.1) 1645 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4538> 2025-11-22T10:16:10.879+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1645) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:09.898189+0000 osd.1 (osd.1) 1645 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369819648 unmapped: 47538176 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:41.691603+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1646 sent 1645 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:10.880056+0000 osd.1 (osd.1) 1646 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4527> 2025-11-22T10:16:11.888+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1646) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:10.880056+0000 osd.1 (osd.1) 1646 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369819648 unmapped: 47538176 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:42.691882+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1647 sent 1646 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:11.889023+0000 osd.1 (osd.1) 1647 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4516> 2025-11-22T10:16:12.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1647) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:11.889023+0000 osd.1 (osd.1) 1647 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369819648 unmapped: 47538176 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:43.692144+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1648 sent 1647 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:12.846413+0000 osd.1 (osd.1) 1648 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4505> 2025-11-22T10:16:13.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1648) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:12.846413+0000 osd.1 (osd.1) 1648 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369827840 unmapped: 47529984 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:44.692428+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1649 sent 1648 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:13.823771+0000 osd.1 (osd.1) 1649 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4494> 2025-11-22T10:16:14.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1649) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:13.823771+0000 osd.1 (osd.1) 1649 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369827840 unmapped: 47529984 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:45.692593+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1650 sent 1649 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:14.777633+0000 osd.1 (osd.1) 1650 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4480> 2025-11-22T10:16:15.738+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1650) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:14.777633+0000 osd.1 (osd.1) 1650 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369827840 unmapped: 47529984 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:46.692820+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1651 sent 1650 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:15.739234+0000 osd.1 (osd.1) 1651 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4468> 2025-11-22T10:16:16.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1651) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:15.739234+0000 osd.1 (osd.1) 1651 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:47.693015+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1652 sent 1651 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:16.751727+0000 osd.1 (osd.1) 1652 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4457> 2025-11-22T10:16:17.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1652) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:16.751727+0000 osd.1 (osd.1) 1652 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4452> 2025-11-22T10:16:18.665+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:48.693173+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1654 sent 1652 num 2 unsent 2 sending 2
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:17.702901+0000 osd.1 (osd.1) 1653 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:18.666132+0000 osd.1 (osd.1) 1654 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1654) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:17.702901+0000 osd.1 (osd.1) 1653 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:18.666132+0000 osd.1 (osd.1) 1654 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4439> 2025-11-22T10:16:19.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:49.693376+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1655 sent 1654 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:19.638512+0000 osd.1 (osd.1) 1655 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1655) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:19.638512+0000 osd.1 (osd.1) 1655 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4428> 2025-11-22T10:16:20.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:50.693595+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1656 sent 1655 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:20.635560+0000 osd.1 (osd.1) 1656 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1656) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:20.635560+0000 osd.1 (osd.1) 1656 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4414> 2025-11-22T10:16:21.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:51.693810+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1657 sent 1656 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:21.656501+0000 osd.1 (osd.1) 1657 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1657) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:21.656501+0000 osd.1 (osd.1) 1657 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:52.693984+0000)
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4399> 2025-11-22T10:16:22.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4394> 2025-11-22T10:16:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:53.694095+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1659 sent 1657 num 2 unsent 2 sending 2
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:22.704926+0000 osd.1 (osd.1) 1658 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:23.673260+0000 osd.1 (osd.1) 1659 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1659) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:22.704926+0000 osd.1 (osd.1) 1658 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:23.673260+0000 osd.1 (osd.1) 1659 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4382> 2025-11-22T10:16:24.622+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 47513600 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:54.694258+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1660 sent 1659 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:24.623687+0000 osd.1 (osd.1) 1660 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1660) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:24.623687+0000 osd.1 (osd.1) 1660 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4370> 2025-11-22T10:16:25.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:55.694457+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1661 sent 1660 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:25.596575+0000 osd.1 (osd.1) 1661 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1661) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:25.596575+0000 osd.1 (osd.1) 1661 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4355> 2025-11-22T10:16:26.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:56.694651+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1662 sent 1661 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:26.586796+0000 osd.1 (osd.1) 1662 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1662) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:26.586796+0000 osd.1 (osd.1) 1662 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4343> 2025-11-22T10:16:27.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:57.694859+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1663 sent 1662 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:27.590059+0000 osd.1 (osd.1) 1663 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1663) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:27.590059+0000 osd.1 (osd.1) 1663 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4332> 2025-11-22T10:16:28.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:58.695058+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1664 sent 1663 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:28.552833+0000 osd.1 (osd.1) 1664 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1664) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:28.552833+0000 osd.1 (osd.1) 1664 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4321> 2025-11-22T10:16:29.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:59.695277+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1665 sent 1664 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:29.518145+0000 osd.1 (osd.1) 1665 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1665) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:29.518145+0000 osd.1 (osd.1) 1665 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4310> 2025-11-22T10:16:30.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:00.695516+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1666 sent 1665 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:30.509111+0000 osd.1 (osd.1) 1666 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1666) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:30.509111+0000 osd.1 (osd.1) 1666 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4295> 2025-11-22T10:16:31.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:01.695686+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1667 sent 1666 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:31.502221+0000 osd.1 (osd.1) 1667 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1667) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:31.502221+0000 osd.1 (osd.1) 1667 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4284> 2025-11-22T10:16:32.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 47505408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:02.695865+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1668 sent 1667 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:32.549077+0000 osd.1 (osd.1) 1668 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1668) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:32.549077+0000 osd.1 (osd.1) 1668 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4273> 2025-11-22T10:16:33.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369885184 unmapped: 47472640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:03.696134+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1669 sent 1668 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:33.549933+0000 osd.1 (osd.1) 1669 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1669) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:33.549933+0000 osd.1 (osd.1) 1669 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4262> 2025-11-22T10:16:34.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369885184 unmapped: 47472640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:04.696663+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1670 sent 1669 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:34.548824+0000 osd.1 (osd.1) 1670 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1670) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:34.548824+0000 osd.1 (osd.1) 1670 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4251> 2025-11-22T10:16:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369885184 unmapped: 47472640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:05.697050+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1671 sent 1670 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:35.569219+0000 osd.1 (osd.1) 1671 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1671) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:35.569219+0000 osd.1 (osd.1) 1671 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4235> 2025-11-22T10:16:36.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:06.697390+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1672 sent 1671 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:36.522843+0000 osd.1 (osd.1) 1672 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369893376 unmapped: 47464448 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4226> 2025-11-22T10:16:37.506+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:07.697758+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1673 sent 1672 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:37.507924+0000 osd.1 (osd.1) 1673 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369893376 unmapped: 47464448 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1672) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:36.522843+0000 osd.1 (osd.1) 1672 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4215> 2025-11-22T10:16:38.488+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:08.698385+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1674 sent 1673 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:38.489767+0000 osd.1 (osd.1) 1674 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369901568 unmapped: 47456256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1673) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:37.507924+0000 osd.1 (osd.1) 1673 : cluster [WRN] 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1674) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:38.489767+0000 osd.1 (osd.1) 1674 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4202> 2025-11-22T10:16:39.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:09.698723+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1675 sent 1674 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:39.516396+0000 osd.1 (osd.1) 1675 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369901568 unmapped: 47456256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1675) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:39.516396+0000 osd.1 (osd.1) 1675 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4191> 2025-11-22T10:16:40.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:10.699019+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1676 sent 1675 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:40.547105+0000 osd.1 (osd.1) 1676 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369901568 unmapped: 47456256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1676) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:40.547105+0000 osd.1 (osd.1) 1676 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4176> 2025-11-22T10:16:41.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:11.699284+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1677 sent 1676 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:41.544546+0000 osd.1 (osd.1) 1677 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1677) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:41.544546+0000 osd.1 (osd.1) 1677 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4165> 2025-11-22T10:16:42.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:12.700124+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1678 sent 1677 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:42.500837+0000 osd.1 (osd.1) 1678 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1678) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:42.500837+0000 osd.1 (osd.1) 1678 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561383a0b400
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4153> 2025-11-22T10:16:43.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:13.700540+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1679 sent 1678 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:43.481012+0000 osd.1 (osd.1) 1679 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1679) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:43.481012+0000 osd.1 (osd.1) 1679 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4142> 2025-11-22T10:16:44.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:14.700859+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1680 sent 1679 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:44.453107+0000 osd.1 (osd.1) 1680 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1680) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:44.453107+0000 osd.1 (osd.1) 1680 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4131> 2025-11-22T10:16:45.412+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:15.701127+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1681 sent 1680 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:45.412923+0000 osd.1 (osd.1) 1681 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1681) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:45.412923+0000 osd.1 (osd.1) 1681 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4117> 2025-11-22T10:16:46.366+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:16.701606+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1682 sent 1681 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:46.366873+0000 osd.1 (osd.1) 1682 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1682) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:46.366873+0000 osd.1 (osd.1) 1682 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4106> 2025-11-22T10:16:47.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,2,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:17.701841+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1683 sent 1682 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:47.363559+0000 osd.1 (osd.1) 1683 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 47448064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1683) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:47.363559+0000 osd.1 (osd.1) 1683 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4094> 2025-11-22T10:16:48.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:18.702108+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1684 sent 1683 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:48.378227+0000 osd.1 (osd.1) 1684 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369917952 unmapped: 47439872 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1684) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:48.378227+0000 osd.1 (osd.1) 1684 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,2,0,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4082> 2025-11-22T10:16:49.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:19.702428+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1685 sent 1684 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:49.348467+0000 osd.1 (osd.1) 1685 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1685) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:49.348467+0000 osd.1 (osd.1) 1685 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4071> 2025-11-22T10:16:50.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:20.702740+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1686 sent 1685 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:50.394920+0000 osd.1 (osd.1) 1686 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1686) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:50.394920+0000 osd.1 (osd.1) 1686 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4057> 2025-11-22T10:16:51.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:21.703029+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1687 sent 1686 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:51.385517+0000 osd.1 (osd.1) 1687 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1687) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:51.385517+0000 osd.1 (osd.1) 1687 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4046> 2025-11-22T10:16:52.375+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:22.703239+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1688 sent 1687 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:52.376278+0000 osd.1 (osd.1) 1688 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1688) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:52.376278+0000 osd.1 (osd.1) 1688 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4035> 2025-11-22T10:16:53.346+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:23.703489+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1689 sent 1688 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:53.348034+0000 osd.1 (osd.1) 1689 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1689) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:53.348034+0000 osd.1 (osd.1) 1689 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4024> 2025-11-22T10:16:54.341+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:24.703778+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1690 sent 1689 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:54.341597+0000 osd.1 (osd.1) 1690 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1690) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:54.341597+0000 osd.1 (osd.1) 1690 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -4012> 2025-11-22T10:16:55.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:25.704002+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1691 sent 1690 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:55.364426+0000 osd.1 (osd.1) 1691 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1691) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:55.364426+0000 osd.1 (osd.1) 1691 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3998> 2025-11-22T10:16:56.406+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:26.704210+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1692 sent 1691 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:56.407242+0000 osd.1 (osd.1) 1692 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369934336 unmapped: 47423488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1692) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:56.407242+0000 osd.1 (osd.1) 1692 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3987> 2025-11-22T10:16:57.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:27.704390+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1693 sent 1692 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:57.439588+0000 osd.1 (osd.1) 1693 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1693) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:57.439588+0000 osd.1 (osd.1) 1693 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3976> 2025-11-22T10:16:58.446+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:28.704603+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1694 sent 1693 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:58.446903+0000 osd.1 (osd.1) 1694 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1694) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:58.446903+0000 osd.1 (osd.1) 1694 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3965> 2025-11-22T10:16:59.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:29.704826+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1695 sent 1694 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:16:59.465436+0000 osd.1 (osd.1) 1695 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1695) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:16:59.465436+0000 osd.1 (osd.1) 1695 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3953> 2025-11-22T10:17:00.459+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:30.705017+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1696 sent 1695 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:00.460404+0000 osd.1 (osd.1) 1696 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1696) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:00.460404+0000 osd.1 (osd.1) 1696 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3939> 2025-11-22T10:17:01.496+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:31.705247+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1697 sent 1696 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:01.496870+0000 osd.1 (osd.1) 1697 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1697) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:01.496870+0000 osd.1 (osd.1) 1697 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3928> 2025-11-22T10:17:02.486+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:32.705606+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1698 sent 1697 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:02.487297+0000 osd.1 (osd.1) 1698 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1698) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:02.487297+0000 osd.1 (osd.1) 1698 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3917> 2025-11-22T10:17:03.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:33.705956+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1699 sent 1698 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:03.500910+0000 osd.1 (osd.1) 1699 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1699) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:03.500910+0000 osd.1 (osd.1) 1699 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3905> 2025-11-22T10:17:04.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:34.706177+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1700 sent 1699 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:04.501209+0000 osd.1 (osd.1) 1700 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369950720 unmapped: 47407104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1700) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:04.501209+0000 osd.1 (osd.1) 1700 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3894> 2025-11-22T10:17:05.526+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:35.706370+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1701 sent 1700 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:05.527501+0000 osd.1 (osd.1) 1701 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369958912 unmapped: 47398912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1701) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:05.527501+0000 osd.1 (osd.1) 1701 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3880> 2025-11-22T10:17:06.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:36.706566+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1702 sent 1701 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:06.528441+0000 osd.1 (osd.1) 1702 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369958912 unmapped: 47398912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1702) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:06.528441+0000 osd.1 (osd.1) 1702 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3869> 2025-11-22T10:17:07.477+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:37.706820+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1703 sent 1702 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:07.478803+0000 osd.1 (osd.1) 1703 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369958912 unmapped: 47398912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1703) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:07.478803+0000 osd.1 (osd.1) 1703 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3858> 2025-11-22T10:17:08.474+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:38.707020+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1704 sent 1703 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:08.475467+0000 osd.1 (osd.1) 1704 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369967104 unmapped: 47390720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1704) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:08.475467+0000 osd.1 (osd.1) 1704 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3847> 2025-11-22T10:17:09.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:39.707145+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1705 sent 1704 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:09.464420+0000 osd.1 (osd.1) 1705 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369967104 unmapped: 47390720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1705) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:09.464420+0000 osd.1 (osd.1) 1705 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3835> 2025-11-22T10:17:10.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:40.707284+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1706 sent 1705 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:10.432169+0000 osd.1 (osd.1) 1706 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369967104 unmapped: 47390720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1706) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:10.432169+0000 osd.1 (osd.1) 1706 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3821> 2025-11-22T10:17:11.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:41.707473+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1707 sent 1706 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:11.462619+0000 osd.1 (osd.1) 1707 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369967104 unmapped: 47390720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1707) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:11.462619+0000 osd.1 (osd.1) 1707 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3810> 2025-11-22T10:17:12.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:42.707656+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1708 sent 1707 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:12.463089+0000 osd.1 (osd.1) 1708 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369967104 unmapped: 47390720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1708) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:12.463089+0000 osd.1 (osd.1) 1708 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3799> 2025-11-22T10:17:13.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:43.707815+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1709 sent 1708 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:13.414863+0000 osd.1 (osd.1) 1709 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369983488 unmapped: 47374336 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1709) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:13.414863+0000 osd.1 (osd.1) 1709 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3787> 2025-11-22T10:17:14.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:44.707993+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1710 sent 1709 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:14.375124+0000 osd.1 (osd.1) 1710 : cluster [WRN] 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369991680 unmapped: 47366144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3778> 2025-11-22T10:17:15.388+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1710) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:14.375124+0000 osd.1 (osd.1) 1710 : cluster [WRN] 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x561386397c00 session 0x561385d5a960
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561383a0b000
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:45.708259+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1711 sent 1710 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:15.389167+0000 osd.1 (osd.1) 1711 : cluster [WRN] 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369991680 unmapped: 47366144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3761> 2025-11-22T10:17:16.405+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1711) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:15.389167+0000 osd.1 (osd.1) 1711 : cluster [WRN] 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:46.708576+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1712 sent 1711 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:16.407172+0000 osd.1 (osd.1) 1712 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369991680 unmapped: 47366144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3750> 2025-11-22T10:17:17.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1712) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:16.407172+0000 osd.1 (osd.1) 1712 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:47.708747+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1713 sent 1712 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:17.380985+0000 osd.1 (osd.1) 1713 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369991680 unmapped: 47366144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3739> 2025-11-22T10:17:18.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1713) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:17.380985+0000 osd.1 (osd.1) 1713 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:48.708927+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1714 sent 1713 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:18.416490+0000 osd.1 (osd.1) 1714 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369991680 unmapped: 47366144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3728> 2025-11-22T10:17:19.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1714) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:18.416490+0000 osd.1 (osd.1) 1714 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:49.709157+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1715 sent 1714 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:19.399842+0000 osd.1 (osd.1) 1715 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369999872 unmapped: 47357952 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3716> 2025-11-22T10:17:20.414+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1715) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:19.399842+0000 osd.1 (osd.1) 1715 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:50.709410+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1716 sent 1715 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:20.415485+0000 osd.1 (osd.1) 1716 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369999872 unmapped: 47357952 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3702> 2025-11-22T10:17:21.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1716) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:20.415485+0000 osd.1 (osd.1) 1716 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:51.709632+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1717 sent 1716 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:21.391645+0000 osd.1 (osd.1) 1717 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370008064 unmapped: 47349760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3691> 2025-11-22T10:17:22.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1717) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:21.391645+0000 osd.1 (osd.1) 1717 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:52.709887+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1718 sent 1717 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:22.438611+0000 osd.1 (osd.1) 1718 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3680> 2025-11-22T10:17:23.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1718) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:22.438611+0000 osd.1 (osd.1) 1718 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:53.710211+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1719 sent 1718 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:23.475645+0000 osd.1 (osd.1) 1719 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3669> 2025-11-22T10:17:24.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1719) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:23.475645+0000 osd.1 (osd.1) 1719 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:54.710578+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1720 sent 1719 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:24.431368+0000 osd.1 (osd.1) 1720 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3657> 2025-11-22T10:17:25.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1720) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:24.431368+0000 osd.1 (osd.1) 1720 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:55.710917+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1721 sent 1720 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:25.436550+0000 osd.1 (osd.1) 1721 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3643> 2025-11-22T10:17:26.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1721) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:25.436550+0000 osd.1 (osd.1) 1721 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:56.711165+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1722 sent 1721 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:26.436694+0000 osd.1 (osd.1) 1722 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3632> 2025-11-22T10:17:27.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1722) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:26.436694+0000 osd.1 (osd.1) 1722 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:57.711380+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1723 sent 1722 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:27.411406+0000 osd.1 (osd.1) 1723 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 47341568 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3620> 2025-11-22T10:17:28.417+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1723) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:27.411406+0000 osd.1 (osd.1) 1723 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:58.711627+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1724 sent 1723 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:28.418077+0000 osd.1 (osd.1) 1724 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370024448 unmapped: 47333376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3609> 2025-11-22T10:17:29.458+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1724) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:28.418077+0000 osd.1 (osd.1) 1724 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:59.711856+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1725 sent 1724 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:29.458747+0000 osd.1 (osd.1) 1725 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3597> 2025-11-22T10:17:30.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1725) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:29.458747+0000 osd.1 (osd.1) 1725 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:00.712122+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1726 sent 1725 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:30.452136+0000 osd.1 (osd.1) 1726 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3583> 2025-11-22T10:17:31.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1726) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:30.452136+0000 osd.1 (osd.1) 1726 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:01.712378+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1727 sent 1726 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:31.471759+0000 osd.1 (osd.1) 1727 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3572> 2025-11-22T10:17:32.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1727) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:31.471759+0000 osd.1 (osd.1) 1727 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:02.712558+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1728 sent 1727 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:32.485803+0000 osd.1 (osd.1) 1728 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3561> 2025-11-22T10:17:33.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1728) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:32.485803+0000 osd.1 (osd.1) 1728 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:03.712796+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1729 sent 1728 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:33.489634+0000 osd.1 (osd.1) 1729 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3549> 2025-11-22T10:17:34.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1729) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:33.489634+0000 osd.1 (osd.1) 1729 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:04.713018+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1730 sent 1729 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:34.461603+0000 osd.1 (osd.1) 1730 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3538> 2025-11-22T10:17:35.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1730) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:34.461603+0000 osd.1 (osd.1) 1730 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:05.713248+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1731 sent 1730 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:35.462486+0000 osd.1 (osd.1) 1731 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3524> 2025-11-22T10:17:36.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:06.713459+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1732 sent 1731 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:36.419241+0000 osd.1 (osd.1) 1732 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1731) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:35.462486+0000 osd.1 (osd.1) 1731 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370040832 unmapped: 47316992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3513> 2025-11-22T10:17:37.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:07.713655+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1733 sent 1732 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:37.395586+0000 osd.1 (osd.1) 1733 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1732) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:36.419241+0000 osd.1 (osd.1) 1732 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1733) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:37.395586+0000 osd.1 (osd.1) 1733 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370049024 unmapped: 47308800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3500> 2025-11-22T10:17:38.386+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:08.713946+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1734 sent 1733 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:38.386905+0000 osd.1 (osd.1) 1734 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1734) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:38.386905+0000 osd.1 (osd.1) 1734 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370049024 unmapped: 47308800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3489> 2025-11-22T10:17:39.395+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:09.714227+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1735 sent 1734 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:39.396699+0000 osd.1 (osd.1) 1735 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1735) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:39.396699+0000 osd.1 (osd.1) 1735 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370057216 unmapped: 47300608 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3477> 2025-11-22T10:17:40.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:10.714572+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1736 sent 1735 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:40.416497+0000 osd.1 (osd.1) 1736 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1736) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:40.416497+0000 osd.1 (osd.1) 1736 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370057216 unmapped: 47300608 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3463> 2025-11-22T10:17:41.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:11.714772+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1737 sent 1736 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:41.414407+0000 osd.1 (osd.1) 1737 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1737) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:41.414407+0000 osd.1 (osd.1) 1737 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370057216 unmapped: 47300608 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3452> 2025-11-22T10:17:42.455+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:12.715017+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1738 sent 1737 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:42.456399+0000 osd.1 (osd.1) 1738 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1738) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:42.456399+0000 osd.1 (osd.1) 1738 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370057216 unmapped: 47300608 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3441> 2025-11-22T10:17:43.435+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:13.715380+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1739 sent 1738 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:43.435954+0000 osd.1 (osd.1) 1739 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1739) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:43.435954+0000 osd.1 (osd.1) 1739 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370065408 unmapped: 47292416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3429> 2025-11-22T10:17:44.466+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:14.715669+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1740 sent 1739 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:44.467517+0000 osd.1 (osd.1) 1740 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1740) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:44.467517+0000 osd.1 (osd.1) 1740 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370065408 unmapped: 47292416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3418> 2025-11-22T10:17:45.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:15.715850+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1741 sent 1740 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:45.516286+0000 osd.1 (osd.1) 1741 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1741) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:45.516286+0000 osd.1 (osd.1) 1741 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3404> 2025-11-22T10:17:46.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:16.716056+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1742 sent 1741 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:46.472556+0000 osd.1 (osd.1) 1742 : cluster [WRN] 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1742) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:46.472556+0000 osd.1 (osd.1) 1742 : cluster [WRN] 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x5613852f4800
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3391> 2025-11-22T10:17:47.465+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:17.716244+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1743 sent 1742 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:47.466847+0000 osd.1 (osd.1) 1743 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1743) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:47.466847+0000 osd.1 (osd.1) 1743 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,2,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3379> 2025-11-22T10:17:48.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:18.716432+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1744 sent 1743 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:48.437685+0000 osd.1 (osd.1) 1744 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1744) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:48.437685+0000 osd.1 (osd.1) 1744 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3368> 2025-11-22T10:17:49.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:19.716658+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1745 sent 1744 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:49.445977+0000 osd.1 (osd.1) 1745 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1745) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:49.445977+0000 osd.1 (osd.1) 1745 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3357> 2025-11-22T10:17:50.421+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,3,1,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:20.716879+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1746 sent 1745 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:50.422803+0000 osd.1 (osd.1) 1746 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1746) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:50.422803+0000 osd.1 (osd.1) 1746 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3342> 2025-11-22T10:17:51.374+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:21.717144+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1747 sent 1746 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:51.375609+0000 osd.1 (osd.1) 1747 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1747) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:51.375609+0000 osd.1 (osd.1) 1747 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3331> 2025-11-22T10:17:52.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:22.717751+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1748 sent 1747 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:52.349697+0000 osd.1 (osd.1) 1748 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1748) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:52.349697+0000 osd.1 (osd.1) 1748 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370081792 unmapped: 47276032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3320> 2025-11-22T10:17:53.368+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:23.717980+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1749 sent 1748 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:53.369229+0000 osd.1 (osd.1) 1749 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1749) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:53.369229+0000 osd.1 (osd.1) 1749 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3309> 2025-11-22T10:17:54.336+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:24.718194+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1750 sent 1749 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:54.337697+0000 osd.1 (osd.1) 1750 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1750) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:54.337697+0000 osd.1 (osd.1) 1750 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3298> 2025-11-22T10:17:55.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:25.718390+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1751 sent 1750 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:55.375398+0000 osd.1 (osd.1) 1751 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1751) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:55.375398+0000 osd.1 (osd.1) 1751 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3284> 2025-11-22T10:17:56.338+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:26.718573+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1752 sent 1751 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:56.339213+0000 osd.1 (osd.1) 1752 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1752) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:56.339213+0000 osd.1 (osd.1) 1752 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3272> 2025-11-22T10:17:57.327+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:27.718726+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1753 sent 1752 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:57.328634+0000 osd.1 (osd.1) 1753 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1753) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:57.328634+0000 osd.1 (osd.1) 1753 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3261> 2025-11-22T10:17:58.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:28.718950+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1754 sent 1753 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:58.371615+0000 osd.1 (osd.1) 1754 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1754) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:58.371615+0000 osd.1 (osd.1) 1754 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3249> 2025-11-22T10:17:59.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:29.719169+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1755 sent 1754 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:17:59.345628+0000 osd.1 (osd.1) 1755 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1755) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:17:59.345628+0000 osd.1 (osd.1) 1755 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3238> 2025-11-22T10:18:00.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:30.719390+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1756 sent 1755 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:00.393870+0000 osd.1 (osd.1) 1756 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1756) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:00.393870+0000 osd.1 (osd.1) 1756 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370089984 unmapped: 47267840 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3224> 2025-11-22T10:18:01.369+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:31.719585+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1757 sent 1756 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:01.371089+0000 osd.1 (osd.1) 1757 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1757) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:01.371089+0000 osd.1 (osd.1) 1757 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370106368 unmapped: 47251456 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3213> 2025-11-22T10:18:02.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:32.719793+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1758 sent 1757 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:02.352873+0000 osd.1 (osd.1) 1758 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1758) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:02.352873+0000 osd.1 (osd.1) 1758 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370106368 unmapped: 47251456 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3202> 2025-11-22T10:18:03.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:33.720008+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1759 sent 1758 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:03.305932+0000 osd.1 (osd.1) 1759 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1759) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:03.305932+0000 osd.1 (osd.1) 1759 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370106368 unmapped: 47251456 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3191> 2025-11-22T10:18:04.331+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:34.720194+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1760 sent 1759 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:04.331557+0000 osd.1 (osd.1) 1760 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1760) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:04.331557+0000 osd.1 (osd.1) 1760 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370114560 unmapped: 47243264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3179> 2025-11-22T10:18:05.286+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:35.720393+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1761 sent 1760 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:05.286625+0000 osd.1 (osd.1) 1761 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1761) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:05.286625+0000 osd.1 (osd.1) 1761 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370114560 unmapped: 47243264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3165> 2025-11-22T10:18:06.296+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:36.720576+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1762 sent 1761 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:06.296425+0000 osd.1 (osd.1) 1762 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1762) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:06.296425+0000 osd.1 (osd.1) 1762 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370114560 unmapped: 47243264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3154> 2025-11-22T10:18:07.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:37.720760+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1763 sent 1762 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:07.339480+0000 osd.1 (osd.1) 1763 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1763) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:07.339480+0000 osd.1 (osd.1) 1763 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370114560 unmapped: 47243264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3143> 2025-11-22T10:18:08.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:38.720932+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1764 sent 1763 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:08.300782+0000 osd.1 (osd.1) 1764 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1764) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:08.300782+0000 osd.1 (osd.1) 1764 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370114560 unmapped: 47243264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3132> 2025-11-22T10:18:09.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:39.721116+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1765 sent 1764 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:09.251872+0000 osd.1 (osd.1) 1765 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1765) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:09.251872+0000 osd.1 (osd.1) 1765 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3120> 2025-11-22T10:18:10.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:40.721391+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1766 sent 1765 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:10.254807+0000 osd.1 (osd.1) 1766 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1766) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:10.254807+0000 osd.1 (osd.1) 1766 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3106> 2025-11-22T10:18:11.257+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:41.721602+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1767 sent 1766 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:11.257487+0000 osd.1 (osd.1) 1767 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1767) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:11.257487+0000 osd.1 (osd.1) 1767 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3095> 2025-11-22T10:18:12.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:42.721788+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1768 sent 1767 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:12.300925+0000 osd.1 (osd.1) 1768 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1768) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:12.300925+0000 osd.1 (osd.1) 1768 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3083> 2025-11-22T10:18:13.319+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:43.721950+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1769 sent 1768 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:13.319438+0000 osd.1 (osd.1) 1769 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1769) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:13.319438+0000 osd.1 (osd.1) 1769 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3072> 2025-11-22T10:18:14.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:44.722143+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1770 sent 1769 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:14.292453+0000 osd.1 (osd.1) 1770 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1770) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:14.292453+0000 osd.1 (osd.1) 1770 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3061> 2025-11-22T10:18:15.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:45.722391+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1771 sent 1770 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:15.284336+0000 osd.1 (osd.1) 1771 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1771) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:15.284336+0000 osd.1 (osd.1) 1771 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3047> 2025-11-22T10:18:16.313+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:46.722565+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1772 sent 1771 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:16.314137+0000 osd.1 (osd.1) 1772 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370139136 unmapped: 47218688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1772) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:16.314137+0000 osd.1 (osd.1) 1772 : cluster [WRN] 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3036> 2025-11-22T10:18:17.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:47.722771+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1773 sent 1772 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:17.293581+0000 osd.1 (osd.1) 1773 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x5613876a4000 session 0x561385ccbc20
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x5613851eb000
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3025> 2025-11-22T10:18:18.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1773) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:17.293581+0000 osd.1 (osd.1) 1773 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,1,2,2,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:48.722951+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1774 sent 1773 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:18.291429+0000 osd.1 (osd.1) 1774 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3013> 2025-11-22T10:18:19.270+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:49.723152+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1775 sent 1774 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:19.270933+0000 osd.1 (osd.1) 1775 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1774) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:18.291429+0000 osd.1 (osd.1) 1774 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -3002> 2025-11-22T10:18:20.268+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:50.723371+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1776 sent 1775 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:20.269721+0000 osd.1 (osd.1) 1776 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1775) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:19.270933+0000 osd.1 (osd.1) 1775 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1776) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:20.269721+0000 osd.1 (osd.1) 1776 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2986> 2025-11-22T10:18:21.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:51.723566+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1777 sent 1776 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:21.237801+0000 osd.1 (osd.1) 1777 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1777) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:21.237801+0000 osd.1 (osd.1) 1777 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2975> 2025-11-22T10:18:22.228+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:52.723722+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1778 sent 1777 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:22.229349+0000 osd.1 (osd.1) 1778 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1778) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:22.229349+0000 osd.1 (osd.1) 1778 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2964> 2025-11-22T10:18:23.269+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:53.723876+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1779 sent 1778 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:23.270362+0000 osd.1 (osd.1) 1779 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,2,3,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370163712 unmapped: 47194112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2954> 2025-11-22T10:18:24.230+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1779) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:23.270362+0000 osd.1 (osd.1) 1779 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:54.724058+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1780 sent 1779 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:24.231449+0000 osd.1 (osd.1) 1780 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370171904 unmapped: 47185920 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2943> 2025-11-22T10:18:25.242+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1780) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:24.231449+0000 osd.1 (osd.1) 1780 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:55.724288+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1781 sent 1780 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:25.244362+0000 osd.1 (osd.1) 1781 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2929> 2025-11-22T10:18:26.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1781) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:25.244362+0000 osd.1 (osd.1) 1781 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:56.724525+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1782 sent 1781 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:26.255708+0000 osd.1 (osd.1) 1782 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2918> 2025-11-22T10:18:27.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1782) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:26.255708+0000 osd.1 (osd.1) 1782 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:57.724782+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1783 sent 1782 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:27.215221+0000 osd.1 (osd.1) 1783 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2907> 2025-11-22T10:18:28.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1783) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:27.215221+0000 osd.1 (osd.1) 1783 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:58.724979+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1784 sent 1783 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:28.237133+0000 osd.1 (osd.1) 1784 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2896> 2025-11-22T10:18:29.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1784) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:28.237133+0000 osd.1 (osd.1) 1784 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:59.725181+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1785 sent 1784 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:29.210596+0000 osd.1 (osd.1) 1785 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,3,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2884> 2025-11-22T10:18:30.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,3,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1785) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:29.210596+0000 osd.1 (osd.1) 1785 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:00.725419+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1786 sent 1785 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:30.236034+0000 osd.1 (osd.1) 1786 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,3,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2868> 2025-11-22T10:18:31.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1786) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:30.236034+0000 osd.1 (osd.1) 1786 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:01.725593+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1787 sent 1786 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:31.210540+0000 osd.1 (osd.1) 1787 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370180096 unmapped: 47177728 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2857> 2025-11-22T10:18:32.177+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,2,3,0,5,2])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1787) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:31.210540+0000 osd.1 (osd.1) 1787 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:02.725813+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1788 sent 1787 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:32.178389+0000 osd.1 (osd.1) 1788 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370188288 unmapped: 47169536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2845> 2025-11-22T10:18:33.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1788) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:32.178389+0000 osd.1 (osd.1) 1788 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:03.726009+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1789 sent 1788 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:33.213098+0000 osd.1 (osd.1) 1789 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370204672 unmapped: 47153152 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2834> 2025-11-22T10:18:34.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1789) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:33.213098+0000 osd.1 (osd.1) 1789 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:04.726225+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1790 sent 1789 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:34.255559+0000 osd.1 (osd.1) 1790 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370204672 unmapped: 47153152 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2823> 2025-11-22T10:18:35.260+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1790) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:34.255559+0000 osd.1 (osd.1) 1790 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:05.726381+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1791 sent 1790 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:35.262070+0000 osd.1 (osd.1) 1791 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370204672 unmapped: 47153152 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2809> 2025-11-22T10:18:36.238+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1791) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:35.262070+0000 osd.1 (osd.1) 1791 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:06.726541+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1792 sent 1791 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:36.239661+0000 osd.1 (osd.1) 1792 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370204672 unmapped: 47153152 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2798> 2025-11-22T10:18:37.265+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1792) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:36.239661+0000 osd.1 (osd.1) 1792 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:07.726784+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1793 sent 1792 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:37.267043+0000 osd.1 (osd.1) 1793 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370212864 unmapped: 47144960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2786> 2025-11-22T10:18:38.256+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1793) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:37.267043+0000 osd.1 (osd.1) 1793 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:08.727158+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1794 sent 1793 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:38.257568+0000 osd.1 (osd.1) 1794 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370212864 unmapped: 47144960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2775> 2025-11-22T10:18:39.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1794) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:38.257568+0000 osd.1 (osd.1) 1794 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:09.727371+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1795 sent 1794 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:39.210070+0000 osd.1 (osd.1) 1795 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370212864 unmapped: 47144960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2763> 2025-11-22T10:18:40.227+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:10.727553+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1796 sent 1795 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:40.229081+0000 osd.1 (osd.1) 1796 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1795) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:39.210070+0000 osd.1 (osd.1) 1795 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370212864 unmapped: 47144960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2749> 2025-11-22T10:18:41.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:11.727794+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1797 sent 1796 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:41.264186+0000 osd.1 (osd.1) 1797 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1796) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:40.229081+0000 osd.1 (osd.1) 1796 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370229248 unmapped: 47128576 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2738> 2025-11-22T10:18:42.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:12.728041+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1798 sent 1797 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:42.311890+0000 osd.1 (osd.1) 1798 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1797) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:41.264186+0000 osd.1 (osd.1) 1797 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1798) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:42.311890+0000 osd.1 (osd.1) 1798 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370229248 unmapped: 47128576 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2725> 2025-11-22T10:18:43.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:13.728213+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1799 sent 1798 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:43.302199+0000 osd.1 (osd.1) 1799 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1799) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:43.302199+0000 osd.1 (osd.1) 1799 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370229248 unmapped: 47128576 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2714> 2025-11-22T10:18:44.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:14.728437+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1800 sent 1799 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:44.345027+0000 osd.1 (osd.1) 1800 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1800) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:44.345027+0000 osd.1 (osd.1) 1800 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370253824 unmapped: 47104000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2702> 2025-11-22T10:18:45.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:15.728701+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1801 sent 1800 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:45.381867+0000 osd.1 (osd.1) 1801 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1801) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:45.381867+0000 osd.1 (osd.1) 1801 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370253824 unmapped: 47104000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2688> 2025-11-22T10:18:46.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:16.728946+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1802 sent 1801 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:46.411407+0000 osd.1 (osd.1) 1802 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1802) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:46.411407+0000 osd.1 (osd.1) 1802 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370253824 unmapped: 47104000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2676> 2025-11-22T10:18:47.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:17.729146+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1803 sent 1802 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:47.371989+0000 osd.1 (osd.1) 1803 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1803) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:47.371989+0000 osd.1 (osd.1) 1803 : cluster [WRN] 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370253824 unmapped: 47104000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2664> 2025-11-22T10:18:48.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:18.729347+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1804 sent 1803 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:48.379912+0000 osd.1 (osd.1) 1804 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1804) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:48.379912+0000 osd.1 (osd.1) 1804 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370253824 unmapped: 47104000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2653> 2025-11-22T10:18:49.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:19.729573+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1805 sent 1804 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:49.382594+0000 osd.1 (osd.1) 1805 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1805) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:49.382594+0000 osd.1 (osd.1) 1805 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2641> 2025-11-22T10:18:50.408+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:20.729855+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1806 sent 1805 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:50.408966+0000 osd.1 (osd.1) 1806 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1806) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:50.408966+0000 osd.1 (osd.1) 1806 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2626> 2025-11-22T10:18:51.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:21.730105+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1807 sent 1806 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:51.411510+0000 osd.1 (osd.1) 1807 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1807) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:51.411510+0000 osd.1 (osd.1) 1807 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2615> 2025-11-22T10:18:52.440+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:22.730384+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1808 sent 1807 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:52.440731+0000 osd.1 (osd.1) 1808 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1808) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:52.440731+0000 osd.1 (osd.1) 1808 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2604> 2025-11-22T10:18:53.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:23.730632+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1809 sent 1808 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:53.418986+0000 osd.1 (osd.1) 1809 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1809) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:53.418986+0000 osd.1 (osd.1) 1809 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2592> 2025-11-22T10:18:54.432+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:24.730885+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1810 sent 1809 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:54.433371+0000 osd.1 (osd.1) 1810 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1810) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:54.433371+0000 osd.1 (osd.1) 1810 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2581> 2025-11-22T10:18:55.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:25.731055+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1811 sent 1810 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:55.398898+0000 osd.1 (osd.1) 1811 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1811) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:55.398898+0000 osd.1 (osd.1) 1811 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 47087616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2567> 2025-11-22T10:18:56.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:26.731251+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1812 sent 1811 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:56.357434+0000 osd.1 (osd.1) 1812 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1812) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:56.357434+0000 osd.1 (osd.1) 1812 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2556> 2025-11-22T10:18:57.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:27.731495+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1813 sent 1812 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:57.381694+0000 osd.1 (osd.1) 1813 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1813) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:57.381694+0000 osd.1 (osd.1) 1813 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2545> 2025-11-22T10:18:58.335+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:28.731671+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1814 sent 1813 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:58.335816+0000 osd.1 (osd.1) 1814 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1814) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:58.335816+0000 osd.1 (osd.1) 1814 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2533> 2025-11-22T10:18:59.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:29.731870+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1815 sent 1814 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:18:59.377599+0000 osd.1 (osd.1) 1815 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1815) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:18:59.377599+0000 osd.1 (osd.1) 1815 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2522> 2025-11-22T10:19:00.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:30.732046+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1816 sent 1815 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:00.344682+0000 osd.1 (osd.1) 1816 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x5613856db400 session 0x561385611a40
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561382f16400
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1816) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:00.344682+0000 osd.1 (osd.1) 1816 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,0,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2504> 2025-11-22T10:19:01.359+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:31.732484+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1817 sent 1816 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:01.360485+0000 osd.1 (osd.1) 1817 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1817) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:01.360485+0000 osd.1 (osd.1) 1817 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x561384cf7400 session 0x561385118000
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x56138593ec00
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x5613872c4000 session 0x5613832ecf00
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561384cf7400
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2489> 2025-11-22T10:19:02.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:32.732791+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1818 sent 1817 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:02.391388+0000 osd.1 (osd.1) 1818 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1818) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:02.391388+0000 osd.1 (osd.1) 1818 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 47079424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2478> 2025-11-22T10:19:03.361+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:33.732978+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1819 sent 1818 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:03.362055+0000 osd.1 (osd.1) 1819 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1819) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:03.362055+0000 osd.1 (osd.1) 1819 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370286592 unmapped: 47071232 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2466> 2025-11-22T10:19:04.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:34.733171+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1820 sent 1819 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:04.386141+0000 osd.1 (osd.1) 1820 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1820) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:04.386141+0000 osd.1 (osd.1) 1820 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2455> 2025-11-22T10:19:05.354+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:35.733428+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1821 sent 1820 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:05.355596+0000 osd.1 (osd.1) 1821 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1821) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:05.355596+0000 osd.1 (osd.1) 1821 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2441> 2025-11-22T10:19:06.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:36.733690+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1822 sent 1821 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:06.352685+0000 osd.1 (osd.1) 1822 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1822) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:06.352685+0000 osd.1 (osd.1) 1822 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2430> 2025-11-22T10:19:07.323+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:37.733936+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1823 sent 1822 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:07.324480+0000 osd.1 (osd.1) 1823 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1823) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:07.324480+0000 osd.1 (osd.1) 1823 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2419> 2025-11-22T10:19:08.349+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:38.734178+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1824 sent 1823 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:08.351343+0000 osd.1 (osd.1) 1824 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1824) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:08.351343+0000 osd.1 (osd.1) 1824 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2407> 2025-11-22T10:19:09.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:39.734445+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1825 sent 1824 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:09.366597+0000 osd.1 (osd.1) 1825 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1825) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:09.366597+0000 osd.1 (osd.1) 1825 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2396> 2025-11-22T10:19:10.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:40.734726+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1826 sent 1825 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:10.382592+0000 osd.1 (osd.1) 1826 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1826) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:10.382592+0000 osd.1 (osd.1) 1826 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2381> 2025-11-22T10:19:11.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:41.735027+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1827 sent 1826 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:11.372007+0000 osd.1 (osd.1) 1827 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1827) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:11.372007+0000 osd.1 (osd.1) 1827 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370311168 unmapped: 47046656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2370> 2025-11-22T10:19:12.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:42.735397+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1828 sent 1827 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:12.345649+0000 osd.1 (osd.1) 1828 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1828) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:12.345649+0000 osd.1 (osd.1) 1828 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2359> 2025-11-22T10:19:13.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:43.735635+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1829 sent 1828 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:13.382127+0000 osd.1 (osd.1) 1829 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1829) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:13.382127+0000 osd.1 (osd.1) 1829 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2348> 2025-11-22T10:19:14.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:44.735967+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1830 sent 1829 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:14.363649+0000 osd.1 (osd.1) 1830 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1830) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:14.363649+0000 osd.1 (osd.1) 1830 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2337> 2025-11-22T10:19:15.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:45.736251+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1831 sent 1830 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:15.358492+0000 osd.1 (osd.1) 1831 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1831) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:15.358492+0000 osd.1 (osd.1) 1831 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2322> 2025-11-22T10:19:16.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:46.736479+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1832 sent 1831 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:16.364897+0000 osd.1 (osd.1) 1832 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1832) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:16.364897+0000 osd.1 (osd.1) 1832 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2311> 2025-11-22T10:19:17.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:47.736695+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1833 sent 1832 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:17.391658+0000 osd.1 (osd.1) 1833 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1833) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:17.391658+0000 osd.1 (osd.1) 1833 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2300> 2025-11-22T10:19:18.428+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:48.736988+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1834 sent 1833 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:18.430047+0000 osd.1 (osd.1) 1834 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1834) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:18.430047+0000 osd.1 (osd.1) 1834 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2289> 2025-11-22T10:19:19.431+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:49.737171+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1835 sent 1834 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:19.432846+0000 osd.1 (osd.1) 1835 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1835) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:19.432846+0000 osd.1 (osd.1) 1835 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370319360 unmapped: 47038464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2277> 2025-11-22T10:19:20.433+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x56138b1d8800 session 0x561385ba0d20
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x5613899b3c00
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:50.737380+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1836 sent 1835 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:20.434410+0000 osd.1 (osd.1) 1836 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1836) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:20.434410+0000 osd.1 (osd.1) 1836 : cluster [WRN] 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,2,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2260> 2025-11-22T10:19:21.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:51.737551+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1837 sent 1836 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:21.393580+0000 osd.1 (osd.1) 1837 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1837) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:21.393580+0000 osd.1 (osd.1) 1837 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2249> 2025-11-22T10:19:22.416+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:52.737957+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1838 sent 1837 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:22.417281+0000 osd.1 (osd.1) 1838 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1838) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:22.417281+0000 osd.1 (osd.1) 1838 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2238> 2025-11-22T10:19:23.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:53.738241+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1839 sent 1838 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:23.438426+0000 osd.1 (osd.1) 1839 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1839) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:23.438426+0000 osd.1 (osd.1) 1839 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2227> 2025-11-22T10:19:24.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:54.738524+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1840 sent 1839 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:24.453484+0000 osd.1 (osd.1) 1840 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1840) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:24.453484+0000 osd.1 (osd.1) 1840 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2216> 2025-11-22T10:19:25.426+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:55.738671+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1841 sent 1840 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:25.427953+0000 osd.1 (osd.1) 1841 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,3,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1841) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:25.427953+0000 osd.1 (osd.1) 1841 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2201> 2025-11-22T10:19:26.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:56.738835+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1842 sent 1841 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:26.465117+0000 osd.1 (osd.1) 1842 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1842) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:26.465117+0000 osd.1 (osd.1) 1842 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2190> 2025-11-22T10:19:27.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:57.739042+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1843 sent 1842 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:27.505237+0000 osd.1 (osd.1) 1843 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1843) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:27.505237+0000 osd.1 (osd.1) 1843 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2179> 2025-11-22T10:19:28.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:58.739203+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1844 sent 1843 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:28.531483+0000 osd.1 (osd.1) 1844 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1844) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:28.531483+0000 osd.1 (osd.1) 1844 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2168> 2025-11-22T10:19:29.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:59.739388+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1845 sent 1844 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:29.535720+0000 osd.1 (osd.1) 1845 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46997504 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1845) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:29.535720+0000 osd.1 (osd.1) 1845 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2157> 2025-11-22T10:19:30.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,4,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:00.739573+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1846 sent 1845 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:30.571638+0000 osd.1 (osd.1) 1846 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46997504 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1846) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:30.571638+0000 osd.1 (osd.1) 1846 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2141> 2025-11-22T10:19:31.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:01.739724+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1847 sent 1846 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:31.577173+0000 osd.1 (osd.1) 1847 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1847) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:31.577173+0000 osd.1 (osd.1) 1847 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2130> 2025-11-22T10:19:32.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:02.739901+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1848 sent 1847 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:32.577399+0000 osd.1 (osd.1) 1848 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1848) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:32.577399+0000 osd.1 (osd.1) 1848 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2119> 2025-11-22T10:19:33.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:03.740093+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1849 sent 1848 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:33.561257+0000 osd.1 (osd.1) 1849 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1849) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:33.561257+0000 osd.1 (osd.1) 1849 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2108> 2025-11-22T10:19:34.563+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:04.740297+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1850 sent 1849 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:34.563462+0000 osd.1 (osd.1) 1850 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,4,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1850) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:34.563462+0000 osd.1 (osd.1) 1850 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2096> 2025-11-22T10:19:35.559+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:05.740631+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1851 sent 1850 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:35.560279+0000 osd.1 (osd.1) 1851 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1851) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:35.560279+0000 osd.1 (osd.1) 1851 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2082> 2025-11-22T10:19:36.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:06.740957+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1852 sent 1851 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:36.570915+0000 osd.1 (osd.1) 1852 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1852) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:36.570915+0000 osd.1 (osd.1) 1852 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2071> 2025-11-22T10:19:37.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:07.741204+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1853 sent 1852 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:37.618080+0000 osd.1 (osd.1) 1853 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1853) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:37.618080+0000 osd.1 (osd.1) 1853 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2060> 2025-11-22T10:19:38.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:08.741389+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1854 sent 1853 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:38.654619+0000 osd.1 (osd.1) 1854 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1854) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:38.654619+0000 osd.1 (osd.1) 1854 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2049> 2025-11-22T10:19:39.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:09.741613+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1855 sent 1854 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:39.687434+0000 osd.1 (osd.1) 1855 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,4,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370327552 unmapped: 47030272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1855) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:39.687434+0000 osd.1 (osd.1) 1855 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2037> 2025-11-22T10:19:40.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:10.741857+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1856 sent 1855 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:40.653112+0000 osd.1 (osd.1) 1856 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370327552 unmapped: 47030272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1856) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:40.653112+0000 osd.1 (osd.1) 1856 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2023> 2025-11-22T10:19:41.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:11.742058+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1857 sent 1856 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:41.604562+0000 osd.1 (osd.1) 1857 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370327552 unmapped: 47030272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1857) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:41.604562+0000 osd.1 (osd.1) 1857 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2012> 2025-11-22T10:19:42.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:12.742247+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1858 sent 1857 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:42.653539+0000 osd.1 (osd.1) 1858 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,4,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370335744 unmapped: 47022080 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1858) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:42.653539+0000 osd.1 (osd.1) 1858 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -2000> 2025-11-22T10:19:43.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:13.742412+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1859 sent 1858 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:43.612490+0000 osd.1 (osd.1) 1859 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,4,1,4,3])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370335744 unmapped: 47022080 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1859) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:43.612490+0000 osd.1 (osd.1) 1859 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1988> 2025-11-22T10:19:44.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:14.742588+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1860 sent 1859 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:44.599066+0000 osd.1 (osd.1) 1860 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370343936 unmapped: 47013888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1860) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:44.599066+0000 osd.1 (osd.1) 1860 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1977> 2025-11-22T10:19:45.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:15.742768+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1861 sent 1860 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:45.643529+0000 osd.1 (osd.1) 1861 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1861) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:45.643529+0000 osd.1 (osd.1) 1861 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1963> 2025-11-22T10:19:46.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:16.742943+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1862 sent 1861 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:46.605310+0000 osd.1 (osd.1) 1862 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1862) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:46.605310+0000 osd.1 (osd.1) 1862 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1952> 2025-11-22T10:19:47.618+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:17.743535+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1863 sent 1862 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:47.619498+0000 osd.1 (osd.1) 1863 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1863) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:47.619498+0000 osd.1 (osd.1) 1863 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1941> 2025-11-22T10:19:48.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:18.743862+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1864 sent 1863 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:48.622232+0000 osd.1 (osd.1) 1864 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,4,1,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1864) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:48.622232+0000 osd.1 (osd.1) 1864 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1929> 2025-11-22T10:19:49.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:19.744038+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1865 sent 1864 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:49.653243+0000 osd.1 (osd.1) 1865 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1865) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:49.653243+0000 osd.1 (osd.1) 1865 : cluster [WRN] 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1918> 2025-11-22T10:19:50.635+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:20.744382+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1866 sent 1865 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:50.636363+0000 osd.1 (osd.1) 1866 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1866) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:50.636363+0000 osd.1 (osd.1) 1866 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1904> 2025-11-22T10:19:51.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,4,1,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:21.745027+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1867 sent 1866 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:51.590737+0000 osd.1 (osd.1) 1867 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 47005696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1867) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:51.590737+0000 osd.1 (osd.1) 1867 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1892> 2025-11-22T10:19:52.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:22.745715+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1868 sent 1867 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:52.561890+0000 osd.1 (osd.1) 1868 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1883> 2025-11-22T10:19:53.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1868) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:52.561890+0000 osd.1 (osd.1) 1868 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:23.746090+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1869 sent 1868 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:53.526611+0000 osd.1 (osd.1) 1869 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1869) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:53.526611+0000 osd.1 (osd.1) 1869 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1870> 2025-11-22T10:19:54.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:24.746351+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1870 sent 1869 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:54.568517+0000 osd.1 (osd.1) 1870 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1861> 2025-11-22T10:19:55.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1870) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:54.568517+0000 osd.1 (osd.1) 1870 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:25.747040+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1871 sent 1870 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:55.566555+0000 osd.1 (osd.1) 1871 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,1,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1846> 2025-11-22T10:19:56.519+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:26.747248+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1872 sent 1871 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:56.521173+0000 osd.1 (osd.1) 1872 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1871) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:55.566555+0000 osd.1 (osd.1) 1871 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1835> 2025-11-22T10:19:57.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:27.747644+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1873 sent 1872 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:57.532104+0000 osd.1 (osd.1) 1873 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1872) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:56.521173+0000 osd.1 (osd.1) 1872 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1873) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:57.532104+0000 osd.1 (osd.1) 1873 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1822> 2025-11-22T10:19:58.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:28.747926+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1874 sent 1873 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:58.521711+0000 osd.1 (osd.1) 1874 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1874) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:58.521711+0000 osd.1 (osd.1) 1874 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1812> 2025-11-22T10:19:59.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:29.748713+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1875 sent 1874 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:19:59.523402+0000 osd.1 (osd.1) 1875 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1803> 2025-11-22T10:20:00.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1875) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:19:59.523402+0000 osd.1 (osd.1) 1875 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 46989312 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:30.749042+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1876 sent 1875 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:00.496639+0000 osd.1 (osd.1) 1876 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,1,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1876) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:00.496639+0000 osd.1 (osd.1) 1876 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370376704 unmapped: 46981120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1785> 2025-11-22T10:20:01.542+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:31.749732+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1877 sent 1876 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:01.543727+0000 osd.1 (osd.1) 1877 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,1,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1776> 2025-11-22T10:20:02.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1877) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:01.543727+0000 osd.1 (osd.1) 1877 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:32.750209+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1878 sent 1877 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:02.517775+0000 osd.1 (osd.1) 1878 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1765> 2025-11-22T10:20:03.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1878) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:02.517775+0000 osd.1 (osd.1) 1878 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:33.750459+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1879 sent 1878 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:03.517575+0000 osd.1 (osd.1) 1879 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1754> 2025-11-22T10:20:04.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1879) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:03.517575+0000 osd.1 (osd.1) 1879 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:34.750967+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1880 sent 1879 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:04.510566+0000 osd.1 (osd.1) 1880 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1743> 2025-11-22T10:20:05.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1880) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:04.510566+0000 osd.1 (osd.1) 1880 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:35.751399+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1881 sent 1880 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:05.487214+0000 osd.1 (osd.1) 1881 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1728> 2025-11-22T10:20:06.534+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1881) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:05.487214+0000 osd.1 (osd.1) 1881 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:36.751656+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1882 sent 1881 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:06.535966+0000 osd.1 (osd.1) 1882 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1717> 2025-11-22T10:20:07.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1882) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:06.535966+0000 osd.1 (osd.1) 1882 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:37.752063+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1883 sent 1882 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:07.557389+0000 osd.1 (osd.1) 1883 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1706> 2025-11-22T10:20:08.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370384896 unmapped: 46972928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1883) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:07.557389+0000 osd.1 (osd.1) 1883 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:38.752484+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1884 sent 1883 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:08.540209+0000 osd.1 (osd.1) 1884 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1694> 2025-11-22T10:20:09.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1884) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:08.540209+0000 osd.1 (osd.1) 1884 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:39.752868+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1885 sent 1884 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:09.556712+0000 osd.1 (osd.1) 1885 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1683> 2025-11-22T10:20:10.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1885) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:09.556712+0000 osd.1 (osd.1) 1885 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:40.753157+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1886 sent 1885 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:10.517452+0000 osd.1 (osd.1) 1886 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1672> 2025-11-22T10:20:11.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1886) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:10.517452+0000 osd.1 (osd.1) 1886 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:41.753497+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1887 sent 1886 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:11.520517+0000 osd.1 (osd.1) 1887 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1657> 2025-11-22T10:20:12.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1887) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:11.520517+0000 osd.1 (osd.1) 1887 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:42.753794+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1888 sent 1887 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:12.538387+0000 osd.1 (osd.1) 1888 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1646> 2025-11-22T10:20:13.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370401280 unmapped: 46956544 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1888) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:12.538387+0000 osd.1 (osd.1) 1888 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:43.754124+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1889 sent 1888 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:13.536386+0000 osd.1 (osd.1) 1889 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1635> 2025-11-22T10:20:14.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 46948352 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1889) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:13.536386+0000 osd.1 (osd.1) 1889 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:44.754372+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1890 sent 1889 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:14.489932+0000 osd.1 (osd.1) 1890 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1624> 2025-11-22T10:20:15.468+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 46948352 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1890) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:14.489932+0000 osd.1 (osd.1) 1890 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:45.754554+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1891 sent 1890 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:15.468476+0000 osd.1 (osd.1) 1891 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1612> 2025-11-22T10:20:16.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 46948352 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1891) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:15.468476+0000 osd.1 (osd.1) 1891 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:46.754823+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1892 sent 1891 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:16.451918+0000 osd.1 (osd.1) 1892 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1597> 2025-11-22T10:20:17.492+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1892) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:16.451918+0000 osd.1 (osd.1) 1892 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:47.755092+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1893 sent 1892 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:17.493131+0000 osd.1 (osd.1) 1893 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1586> 2025-11-22T10:20:18.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1893) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:17.493131+0000 osd.1 (osd.1) 1893 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:48.755297+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1894 sent 1893 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:18.531425+0000 osd.1 (osd.1) 1894 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1575> 2025-11-22T10:20:19.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1894) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:18.531425+0000 osd.1 (osd.1) 1894 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:49.755502+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1895 sent 1894 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:19.503500+0000 osd.1 (osd.1) 1895 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1564> 2025-11-22T10:20:20.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1895) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:19.503500+0000 osd.1 (osd.1) 1895 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:50.755699+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1896 sent 1895 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:20.550709+0000 osd.1 (osd.1) 1896 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1553> 2025-11-22T10:20:21.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1896) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:20.550709+0000 osd.1 (osd.1) 1896 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:51.756128+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1897 sent 1896 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:21.509550+0000 osd.1 (osd.1) 1897 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1539> 2025-11-22T10:20:22.529+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:52.756429+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1898 sent 1897 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:22.530351+0000 osd.1 (osd.1) 1898 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1897) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:21.509550+0000 osd.1 (osd.1) 1897 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1527> 2025-11-22T10:20:23.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:53.756676+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1899 sent 1898 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:23.547967+0000 osd.1 (osd.1) 1899 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1898) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:22.530351+0000 osd.1 (osd.1) 1898 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1899) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:23.547967+0000 osd.1 (osd.1) 1899 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1514> 2025-11-22T10:20:24.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 46931968 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:54.757015+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1900 sent 1899 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:24.561836+0000 osd.1 (osd.1) 1900 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1900) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:24.561836+0000 osd.1 (osd.1) 1900 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1503> 2025-11-22T10:20:25.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:55.757420+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1901 sent 1900 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:25.591094+0000 osd.1 (osd.1) 1901 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1901) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:25.591094+0000 osd.1 (osd.1) 1901 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1492> 2025-11-22T10:20:26.543+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:56.757663+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1902 sent 1901 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:26.544901+0000 osd.1 (osd.1) 1902 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1902) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:26.544901+0000 osd.1 (osd.1) 1902 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1478> 2025-11-22T10:20:27.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:57.757968+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1903 sent 1902 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:27.566940+0000 osd.1 (osd.1) 1903 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1903) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:27.566940+0000 osd.1 (osd.1) 1903 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,2,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1466> 2025-11-22T10:20:28.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:58.758408+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1904 sent 1903 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:28.584409+0000 osd.1 (osd.1) 1904 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1904) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:28.584409+0000 osd.1 (osd.1) 1904 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1455> 2025-11-22T10:20:29.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:59.758610+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1905 sent 1904 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:29.592546+0000 osd.1 (osd.1) 1905 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1905) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:29.592546+0000 osd.1 (osd.1) 1905 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1444> 2025-11-22T10:20:30.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:00.758827+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1906 sent 1905 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:30.581481+0000 osd.1 (osd.1) 1906 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1435> 2025-11-22T10:20:31.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1906) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:30.581481+0000 osd.1 (osd.1) 1906 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:01.759057+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1907 sent 1906 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:31.591545+0000 osd.1 (osd.1) 1907 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1421> 2025-11-22T10:20:32.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370442240 unmapped: 46915584 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1907) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:31.591545+0000 osd.1 (osd.1) 1907 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:02.759361+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1908 sent 1907 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:32.568598+0000 osd.1 (osd.1) 1908 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1409> 2025-11-22T10:20:33.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370458624 unmapped: 46899200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:03.759560+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1909 sent 1908 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:33.541063+0000 osd.1 (osd.1) 1909 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1908) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:32.568598+0000 osd.1 (osd.1) 1908 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1398> 2025-11-22T10:20:34.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370458624 unmapped: 46899200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:04.759748+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1910 sent 1909 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:34.542753+0000 osd.1 (osd.1) 1910 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1909) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:33.541063+0000 osd.1 (osd.1) 1909 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1910) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:34.542753+0000 osd.1 (osd.1) 1910 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1385> 2025-11-22T10:20:35.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370458624 unmapped: 46899200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:05.759935+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1911 sent 1910 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:35.578915+0000 osd.1 (osd.1) 1911 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1911) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:35.578915+0000 osd.1 (osd.1) 1911 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1374> 2025-11-22T10:20:36.545+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370458624 unmapped: 46899200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:06.760160+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1912 sent 1911 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:36.546807+0000 osd.1 (osd.1) 1912 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1912) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:36.546807+0000 osd.1 (osd.1) 1912 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1360> 2025-11-22T10:20:37.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370458624 unmapped: 46899200 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:07.760370+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1913 sent 1912 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:37.526584+0000 osd.1 (osd.1) 1913 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1913) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:37.526584+0000 osd.1 (osd.1) 1913 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1348> 2025-11-22T10:20:38.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370466816 unmapped: 46891008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:08.760551+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1914 sent 1913 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:38.576177+0000 osd.1 (osd.1) 1914 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1914) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:38.576177+0000 osd.1 (osd.1) 1914 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1337> 2025-11-22T10:20:39.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370466816 unmapped: 46891008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:09.760828+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1915 sent 1914 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:39.551662+0000 osd.1 (osd.1) 1915 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1915) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:39.551662+0000 osd.1 (osd.1) 1915 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1326> 2025-11-22T10:20:40.510+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370466816 unmapped: 46891008 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:10.761036+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1916 sent 1915 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:40.512094+0000 osd.1 (osd.1) 1916 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1916) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:40.512094+0000 osd.1 (osd.1) 1916 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1315> 2025-11-22T10:20:41.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 46874624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:11.761280+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1917 sent 1916 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:41.531601+0000 osd.1 (osd.1) 1917 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1917) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:41.531601+0000 osd.1 (osd.1) 1917 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1301> 2025-11-22T10:20:42.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 46874624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:12.761624+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1918 sent 1917 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:42.554731+0000 osd.1 (osd.1) 1918 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1918) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:42.554731+0000 osd.1 (osd.1) 1918 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,3,4])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1289> 2025-11-22T10:20:43.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 46874624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,3,4])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:13.761938+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1919 sent 1918 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:43.599305+0000 osd.1 (osd.1) 1919 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1919) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:43.599305+0000 osd.1 (osd.1) 1919 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1277> 2025-11-22T10:20:44.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 46874624 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:14.762171+0000)
Nov 22 10:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 22 10:22:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643849374' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1920 sent 1919 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:44.601412+0000 osd.1 (osd.1) 1920 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1920) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:44.601412+0000 osd.1 (osd.1) 1920 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1266> 2025-11-22T10:20:45.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370491392 unmapped: 46866432 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:15.762463+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1921 sent 1920 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:45.588111+0000 osd.1 (osd.1) 1921 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1921) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:45.588111+0000 osd.1 (osd.1) 1921 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1255> 2025-11-22T10:20:46.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370491392 unmapped: 46866432 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:16.762716+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1922 sent 1921 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:46.629567+0000 osd.1 (osd.1) 1922 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1922) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:46.629567+0000 osd.1 (osd.1) 1922 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1241> 2025-11-22T10:20:47.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370491392 unmapped: 46866432 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:17.762940+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1923 sent 1922 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:47.639950+0000 osd.1 (osd.1) 1923 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1923) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:47.639950+0000 osd.1 (osd.1) 1923 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1230> 2025-11-22T10:20:48.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370491392 unmapped: 46866432 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:18.763232+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1924 sent 1923 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:48.615007+0000 osd.1 (osd.1) 1924 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1924) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:48.615007+0000 osd.1 (osd.1) 1924 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1218> 2025-11-22T10:20:49.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:19.763598+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1925 sent 1924 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:49.570127+0000 osd.1 (osd.1) 1925 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1925) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:49.570127+0000 osd.1 (osd.1) 1925 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1207> 2025-11-22T10:20:50.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:20.763882+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1926 sent 1925 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:50.608194+0000 osd.1 (osd.1) 1926 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1926) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:50.608194+0000 osd.1 (osd.1) 1926 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1196> 2025-11-22T10:20:51.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:21.764101+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1927 sent 1926 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:51.620169+0000 osd.1 (osd.1) 1927 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1927) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:51.620169+0000 osd.1 (osd.1) 1927 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1182> 2025-11-22T10:20:52.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:22.764519+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1928 sent 1927 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:52.649666+0000 osd.1 (osd.1) 1928 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1928) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:52.649666+0000 osd.1 (osd.1) 1928 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1171> 2025-11-22T10:20:53.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:23.764780+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1929 sent 1928 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:53.629239+0000 osd.1 (osd.1) 1929 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1929) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:53.629239+0000 osd.1 (osd.1) 1929 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1159> 2025-11-22T10:20:54.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:24.765044+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1930 sent 1929 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:54.605665+0000 osd.1 (osd.1) 1930 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1930) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:54.605665+0000 osd.1 (osd.1) 1930 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1148> 2025-11-22T10:20:55.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:25.765304+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1931 sent 1930 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:55.631449+0000 osd.1 (osd.1) 1931 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1931) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:55.631449+0000 osd.1 (osd.1) 1931 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1137> 2025-11-22T10:20:56.681+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 46858240 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:26.765625+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1932 sent 1931 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:56.681613+0000 osd.1 (osd.1) 1932 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1932) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:56.681613+0000 osd.1 (osd.1) 1932 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1123> 2025-11-22T10:20:57.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370524160 unmapped: 46833664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:27.765823+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1933 sent 1932 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:57.667102+0000 osd.1 (osd.1) 1933 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1933) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:57.667102+0000 osd.1 (osd.1) 1933 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1112> 2025-11-22T10:20:58.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370524160 unmapped: 46833664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:28.766094+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1934 sent 1933 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:58.639040+0000 osd.1 (osd.1) 1934 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1934) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:58.639040+0000 osd.1 (osd.1) 1934 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1100> 2025-11-22T10:20:59.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370540544 unmapped: 46817280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:29.766391+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1935 sent 1934 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:20:59.650628+0000 osd.1 (osd.1) 1935 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1935) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:20:59.650628+0000 osd.1 (osd.1) 1935 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1089> 2025-11-22T10:21:00.659+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370540544 unmapped: 46817280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:30.766670+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1936 sent 1935 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:00.659760+0000 osd.1 (osd.1) 1936 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1936) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:00.659760+0000 osd.1 (osd.1) 1936 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1077> 2025-11-22T10:21:01.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370540544 unmapped: 46817280 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:31.767001+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1937 sent 1936 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:01.642981+0000 osd.1 (osd.1) 1937 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1937) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:01.642981+0000 osd.1 (osd.1) 1937 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1063> 2025-11-22T10:21:02.639+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370548736 unmapped: 46809088 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:32.767392+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1938 sent 1937 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:02.639792+0000 osd.1 (osd.1) 1938 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1938) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:02.639792+0000 osd.1 (osd.1) 1938 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1052> 2025-11-22T10:21:03.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370548736 unmapped: 46809088 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:33.767642+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1939 sent 1938 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:03.645161+0000 osd.1 (osd.1) 1939 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1939) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:03.645161+0000 osd.1 (osd.1) 1939 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1041> 2025-11-22T10:21:04.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370548736 unmapped: 46809088 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:34.767997+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1940 sent 1939 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:04.651086+0000 osd.1 (osd.1) 1940 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,3,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1940) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:04.651086+0000 osd.1 (osd.1) 1940 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1028> 2025-11-22T10:21:05.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:35.768247+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1941 sent 1940 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:05.630612+0000 osd.1 (osd.1) 1941 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1941) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:05.630612+0000 osd.1 (osd.1) 1941 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1015> 2025-11-22T10:21:06.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:36.768537+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1942 sent 1941 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:06.678653+0000 osd.1 (osd.1) 1942 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1942) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:06.678653+0000 osd.1 (osd.1) 1942 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:  -1001> 2025-11-22T10:21:07.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:37.768758+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1943 sent 1942 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:07.655137+0000 osd.1 (osd.1) 1943 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1943) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:07.655137+0000 osd.1 (osd.1) 1943 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -990> 2025-11-22T10:21:08.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:38.768995+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1944 sent 1943 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:08.618154+0000 osd.1 (osd.1) 1944 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1944) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:08.618154+0000 osd.1 (osd.1) 1944 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -979> 2025-11-22T10:21:09.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:39.769216+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1945 sent 1944 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:09.626518+0000 osd.1 (osd.1) 1945 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1945) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:09.626518+0000 osd.1 (osd.1) 1945 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -968> 2025-11-22T10:21:10.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:40.769424+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1946 sent 1945 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:10.669581+0000 osd.1 (osd.1) 1946 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1946) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:10.669581+0000 osd.1 (osd.1) 1946 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -956> 2025-11-22T10:21:11.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:41.769620+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1947 sent 1946 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:11.638726+0000 osd.1 (osd.1) 1947 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370565120 unmapped: 46792704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1947) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:11.638726+0000 osd.1 (osd.1) 1947 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -942> 2025-11-22T10:21:12.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:42.769907+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1948 sent 1947 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:12.659418+0000 osd.1 (osd.1) 1948 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370573312 unmapped: 46784512 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1948) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:12.659418+0000 osd.1 (osd.1) 1948 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -931> 2025-11-22T10:21:13.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:43.770187+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1949 sent 1948 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:13.702271+0000 osd.1 (osd.1) 1949 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370589696 unmapped: 46768128 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -922> 2025-11-22T10:21:14.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1949) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:13.702271+0000 osd.1 (osd.1) 1949 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:44.770419+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1950 sent 1949 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:14.664015+0000 osd.1 (osd.1) 1950 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -911> 2025-11-22T10:21:15.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:45.770701+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1951 sent 1950 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:15.628253+0000 osd.1 (osd.1) 1951 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1950) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:14.664015+0000 osd.1 (osd.1) 1950 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -899> 2025-11-22T10:21:16.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:46.770906+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1952 sent 1951 num 2 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:16.672546+0000 osd.1 (osd.1) 1952 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1951) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:15.628253+0000 osd.1 (osd.1) 1951 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1952) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:16.672546+0000 osd.1 (osd.1) 1952 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -883> 2025-11-22T10:21:17.624+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:47.771101+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1953 sent 1952 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:17.625930+0000 osd.1 (osd.1) 1953 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1953) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:17.625930+0000 osd.1 (osd.1) 1953 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -872> 2025-11-22T10:21:18.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:48.771419+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1954 sent 1953 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:18.598672+0000 osd.1 (osd.1) 1954 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1954) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:18.598672+0000 osd.1 (osd.1) 1954 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -861> 2025-11-22T10:21:19.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:49.771673+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1955 sent 1954 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:19.609851+0000 osd.1 (osd.1) 1955 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1955) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:19.609851+0000 osd.1 (osd.1) 1955 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -849> 2025-11-22T10:21:20.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:50.771875+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1956 sent 1955 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:20.616558+0000 osd.1 (osd.1) 1956 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370597888 unmapped: 46759936 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1956) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:20.616558+0000 osd.1 (osd.1) 1956 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -838> 2025-11-22T10:21:21.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:51.772158+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1957 sent 1956 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:21.626078+0000 osd.1 (osd.1) 1957 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370614272 unmapped: 46743552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1957) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:21.626078+0000 osd.1 (osd.1) 1957 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -824> 2025-11-22T10:21:22.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:52.772712+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1958 sent 1957 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:22.603856+0000 osd.1 (osd.1) 1958 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370614272 unmapped: 46743552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1958) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:22.603856+0000 osd.1 (osd.1) 1958 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -813> 2025-11-22T10:21:23.633+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:53.773026+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1959 sent 1958 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:23.634634+0000 osd.1 (osd.1) 1959 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370614272 unmapped: 46743552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1959) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:23.634634+0000 osd.1 (osd.1) 1959 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -801> 2025-11-22T10:21:24.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:54.773477+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1960 sent 1959 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:24.612073+0000 osd.1 (osd.1) 1960 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 46735360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1960) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:24.612073+0000 osd.1 (osd.1) 1960 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -790> 2025-11-22T10:21:25.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:55.774772+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1961 sent 1960 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:25.662161+0000 osd.1 (osd.1) 1961 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 46735360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1961) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:25.662161+0000 osd.1 (osd.1) 1961 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -779> 2025-11-22T10:21:26.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:56.775002+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1962 sent 1961 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:26.707425+0000 osd.1 (osd.1) 1962 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 46735360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1962) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:26.707425+0000 osd.1 (osd.1) 1962 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -765> 2025-11-22T10:21:27.692+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:57.775703+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1963 sent 1962 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:27.693892+0000 osd.1 (osd.1) 1963 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 46735360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1963) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:27.693892+0000 osd.1 (osd.1) 1963 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -754> 2025-11-22T10:21:28.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:58.776865+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1964 sent 1963 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:28.691531+0000 osd.1 (osd.1) 1964 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 46735360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1964) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:28.691531+0000 osd.1 (osd.1) 1964 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -743> 2025-11-22T10:21:29.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:59.777524+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1965 sent 1964 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:29.659639+0000 osd.1 (osd.1) 1965 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,4,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1965) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:29.659639+0000 osd.1 (osd.1) 1965 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -731> 2025-11-22T10:21:30.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:00.778048+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1966 sent 1965 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:30.616264+0000 osd.1 (osd.1) 1966 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1966) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:30.616264+0000 osd.1 (osd.1) 1966 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -720> 2025-11-22T10:21:31.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:01.778526+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1967 sent 1966 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:31.632774+0000 osd.1 (osd.1) 1967 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1967) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:31.632774+0000 osd.1 (osd.1) 1967 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -706> 2025-11-22T10:21:32.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:02.778795+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1968 sent 1967 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:32.650581+0000 osd.1 (osd.1) 1968 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1968) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:32.650581+0000 osd.1 (osd.1) 1968 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -694> 2025-11-22T10:21:33.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:03.779498+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1969 sent 1968 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:33.667518+0000 osd.1 (osd.1) 1969 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1969) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:33.667518+0000 osd.1 (osd.1) 1969 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -683> 2025-11-22T10:21:34.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:04.779905+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1970 sent 1969 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:34.630239+0000 osd.1 (osd.1) 1970 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1970) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:34.630239+0000 osd.1 (osd.1) 1970 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -672> 2025-11-22T10:21:35.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:05.780600+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1971 sent 1970 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:35.602720+0000 osd.1 (osd.1) 1971 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1971) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:35.602720+0000 osd.1 (osd.1) 1971 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -661> 2025-11-22T10:21:36.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:06.780832+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1972 sent 1971 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:36.646692+0000 osd.1 (osd.1) 1972 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370630656 unmapped: 46727168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1972) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:36.646692+0000 osd.1 (osd.1) 1972 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -647> 2025-11-22T10:21:37.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:07.781094+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1973 sent 1972 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:37.668731+0000 osd.1 (osd.1) 1973 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370655232 unmapped: 46702592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1973) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:37.668731+0000 osd.1 (osd.1) 1973 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -636> 2025-11-22T10:21:38.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:08.781491+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1974 sent 1973 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:38.703685+0000 osd.1 (osd.1) 1974 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370655232 unmapped: 46702592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1974) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:38.703685+0000 osd.1 (osd.1) 1974 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -624> 2025-11-22T10:21:39.734+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:09.781667+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1975 sent 1974 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:39.735025+0000 osd.1 (osd.1) 1975 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370655232 unmapped: 46702592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1975) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:39.735025+0000 osd.1 (osd.1) 1975 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -613> 2025-11-22T10:21:40.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:10.781899+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1976 sent 1975 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:40.712723+0000 osd.1 (osd.1) 1976 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370655232 unmapped: 46702592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1976) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:40.712723+0000 osd.1 (osd.1) 1976 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -601> 2025-11-22T10:21:41.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:11.782078+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1977 sent 1976 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:41.691615+0000 osd.1 (osd.1) 1977 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370655232 unmapped: 46702592 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1977) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:41.691615+0000 osd.1 (osd.1) 1977 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -587> 2025-11-22T10:21:42.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:12.782568+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1978 sent 1977 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:42.729672+0000 osd.1 (osd.1) 1978 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370663424 unmapped: 46694400 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1978) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:42.729672+0000 osd.1 (osd.1) 1978 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -576> 2025-11-22T10:21:43.752+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:13.782948+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1979 sent 1978 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:43.752637+0000 osd.1 (osd.1) 1979 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370663424 unmapped: 46694400 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1979) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:43.752637+0000 osd.1 (osd.1) 1979 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:14.783168+0000)
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -562> 2025-11-22T10:21:44.796+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370671616 unmapped: 46686208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,2,5])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:15.783341+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1980 sent 1979 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:44.797258+0000 osd.1 (osd.1) 1980 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -552> 2025-11-22T10:21:45.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1980) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:44.797258+0000 osd.1 (osd.1) 1980 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,2,5])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -546> 2025-11-22T10:21:46.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:16.783529+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1982 sent 1980 num 2 unsent 2 sending 2
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:45.811452+0000 osd.1 (osd.1) 1981 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:46.773087+0000 osd.1 (osd.1) 1982 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1982) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:45.811452+0000 osd.1 (osd.1) 1981 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:46.773087+0000 osd.1 (osd.1) 1982 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -530> 2025-11-22T10:21:47.727+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:17.783747+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1983 sent 1982 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:47.729075+0000 osd.1 (osd.1) 1983 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1983) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:47.729075+0000 osd.1 (osd.1) 1983 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -519> 2025-11-22T10:21:48.776+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:18.784787+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1984 sent 1983 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:48.777961+0000 osd.1 (osd.1) 1984 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1984) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:48.777961+0000 osd.1 (osd.1) 1984 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:19.785894+0000)
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -505> 2025-11-22T10:21:49.802+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:20.786139+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1985 sent 1984 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:49.803524+0000 osd.1 (osd.1) 1985 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -495> 2025-11-22T10:21:50.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1985) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:49.803524+0000 osd.1 (osd.1) 1985 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:21.786485+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1986 sent 1985 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:50.796077+0000 osd.1 (osd.1) 1986 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -483> 2025-11-22T10:21:51.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1986) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:50.796077+0000 osd.1 (osd.1) 1986 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:22.786748+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1987 sent 1986 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:51.845558+0000 osd.1 (osd.1) 1987 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -469> 2025-11-22T10:21:52.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370679808 unmapped: 46678016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1987) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:51.845558+0000 osd.1 (osd.1) 1987 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:23.787041+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1988 sent 1987 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:52.813801+0000 osd.1 (osd.1) 1988 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -457> 2025-11-22T10:21:53.819+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1988) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:52.813801+0000 osd.1 (osd.1) 1988 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -452> 2025-11-22T10:21:54.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:24.787375+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 2 last_log 1990 sent 1988 num 2 unsent 2 sending 2
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:53.820977+0000 osd.1 (osd.1) 1989 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:54.781242+0000 osd.1 (osd.1) 1990 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1990) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:53.820977+0000 osd.1 (osd.1) 1989 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:54.781242+0000 osd.1 (osd.1) 1990 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:25.787639+0000)
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -436> 2025-11-22T10:21:55.810+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:26.787848+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1991 sent 1990 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:55.811391+0000 osd.1 (osd.1) 1991 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -427> 2025-11-22T10:21:56.830+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1991) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:55.811391+0000 osd.1 (osd.1) 1991 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:27.788357+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1992 sent 1991 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:56.831475+0000 osd.1 (osd.1) 1992 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -413> 2025-11-22T10:21:57.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1992) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:56.831475+0000 osd.1 (osd.1) 1992 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:28.789401+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1993 sent 1992 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:57.810910+0000 osd.1 (osd.1) 1993 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -402> 2025-11-22T10:21:58.826+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1993) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:57.810910+0000 osd.1 (osd.1) 1993 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:29.790110+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1994 sent 1993 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:58.827274+0000 osd.1 (osd.1) 1994 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -390> 2025-11-22T10:21:59.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 46661632 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1994) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:58.827274+0000 osd.1 (osd.1) 1994 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:30.790548+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1995 sent 1994 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:21:59.835297+0000 osd.1 (osd.1) 1995 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -378> 2025-11-22T10:22:00.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369500160 unmapped: 47857664 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1995) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:21:59.835297+0000 osd.1 (osd.1) 1995 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:31.790769+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1996 sent 1995 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:00.839403+0000 osd.1 (osd.1) 1996 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -365> 2025-11-22T10:22:01.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1996) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:00.839403+0000 osd.1 (osd.1) 1996 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:32.791555+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1997 sent 1996 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:01.864856+0000 osd.1 (osd.1) 1997 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -351> 2025-11-22T10:22:02.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1997) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:01.864856+0000 osd.1 (osd.1) 1997 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:33.792160+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1998 sent 1997 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:02.879950+0000 osd.1 (osd.1) 1998 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -340> 2025-11-22T10:22:03.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1998) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:02.879950+0000 osd.1 (osd.1) 1998 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:34.792556+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 1999 sent 1998 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:03.918907+0000 osd.1 (osd.1) 1999 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -329> 2025-11-22T10:22:04.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 1999) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:03.918907+0000 osd.1 (osd.1) 1999 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:35.792819+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2000 sent 1999 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:04.907197+0000 osd.1 (osd.1) 2000 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -317> 2025-11-22T10:22:05.929+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 ms_handle_reset con 0x5613856dac00 session 0x5613852e21e0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: handle_auth_request added challenge on 0x561382f16800
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2000) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:04.907197+0000 osd.1 (osd.1) 2000 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:36.793414+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2001 sent 2000 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:05.931232+0000 osd.1 (osd.1) 2001 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -303> 2025-11-22T10:22:06.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2001) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:05.931232+0000 osd.1 (osd.1) 2001 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,3,5,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:37.793823+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2002 sent 2001 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:06.944653+0000 osd.1 (osd.1) 2002 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -288> 2025-11-22T10:22:07.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2002) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:06.944653+0000 osd.1 (osd.1) 2002 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:38.794161+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2003 sent 2002 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:07.922293+0000 osd.1 (osd.1) 2003 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -277> 2025-11-22T10:22:08.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369508352 unmapped: 47849472 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2003) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:07.922293+0000 osd.1 (osd.1) 2003 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:39.794410+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2004 sent 2003 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:08.953464+0000 osd.1 (osd.1) 2004 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -266> 2025-11-22T10:22:09.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2004) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:08.953464+0000 osd.1 (osd.1) 2004 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:40.795513+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2005 sent 2004 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:09.948113+0000 osd.1 (osd.1) 2005 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,2,6,1,6])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -254> 2025-11-22T10:22:10.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2005) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:09.948113+0000 osd.1 (osd.1) 2005 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:41.795718+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2006 sent 2005 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:10.947938+0000 osd.1 (osd.1) 2006 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -243> 2025-11-22T10:22:11.972+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2006) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:10.947938+0000 osd.1 (osd.1) 2006 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:42.795976+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2007 sent 2006 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:11.973616+0000 osd.1 (osd.1) 2007 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -229> 2025-11-22T10:22:12.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2007) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:11.973616+0000 osd.1 (osd.1) 2007 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:43.796404+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2008 sent 2007 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:12.978267+0000 osd.1 (osd.1) 2008 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -218> 2025-11-22T10:22:13.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2008) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:12.978267+0000 osd.1 (osd.1) 2008 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:44.796630+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2009 sent 2008 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:13.945271+0000 osd.1 (osd.1) 2009 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -207> 2025-11-22T10:22:14.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2009) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:13.945271+0000 osd.1 (osd.1) 2009 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,2,6,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:45.796930+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2010 sent 2009 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:14.964259+0000 osd.1 (osd.1) 2010 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -195> 2025-11-22T10:22:15.937+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2010) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:14.964259+0000 osd.1 (osd.1) 2010 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:46.797390+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2011 sent 2010 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:15.937656+0000 osd.1 (osd.1) 2011 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -184> 2025-11-22T10:22:16.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369541120 unmapped: 47816704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2011) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:15.937656+0000 osd.1 (osd.1) 2011 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:47.797631+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2012 sent 2011 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:16.961740+0000 osd.1 (osd.1) 2012 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -170> 2025-11-22T10:22:17.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369549312 unmapped: 47808512 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2012) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:16.961740+0000 osd.1 (osd.1) 2012 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:48.797837+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2013 sent 2012 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:17.963772+0000 osd.1 (osd.1) 2013 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -159> 2025-11-22T10:22:18.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369549312 unmapped: 47808512 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2013) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:17.963772+0000 osd.1 (osd.1) 2013 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:49.798053+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2014 sent 2013 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:18.982931+0000 osd.1 (osd.1) 2014 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -148> 2025-11-22T10:22:20.015+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369549312 unmapped: 47808512 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2014) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:18.982931+0000 osd.1 (osd.1) 2014 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:50.798386+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2015 sent 2014 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:20.015560+0000 osd.1 (osd.1) 2015 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,2,6,1,6])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -136> 2025-11-22T10:22:20.994+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2015) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:20.015560+0000 osd.1 (osd.1) 2015 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:51.798566+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2016 sent 2015 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:20.994770+0000 osd.1 (osd.1) 2016 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -125> 2025-11-22T10:22:22.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2016) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:20.994770+0000 osd.1 (osd.1) 2016 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:52.798799+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2017 sent 2016 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:22.023723+0000 osd.1 (osd.1) 2017 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -111> 2025-11-22T10:22:22.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2017) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:22.023723+0000 osd.1 (osd.1) 2017 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:53.799027+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2018 sent 2017 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:22.980115+0000 osd.1 (osd.1) 2018 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:   -100> 2025-11-22T10:22:23.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2018) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:22.980115+0000 osd.1 (osd.1) 2018 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:54.799293+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2019 sent 2018 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:23.965681+0000 osd.1 (osd.1) 2019 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -89> 2025-11-22T10:22:25.001+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 47800320 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2019) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:23.965681+0000 osd.1 (osd.1) 2019 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:55.799507+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2020 sent 2019 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:25.001603+0000 osd.1 (osd.1) 2020 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -78> 2025-11-22T10:22:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369590272 unmapped: 47767552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2020) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:25.001603+0000 osd.1 (osd.1) 2020 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,2,6,1,6])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:56.800377+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2021 sent 2020 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:25.974276+0000 osd.1 (osd.1) 2021 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -66> 2025-11-22T10:22:26.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:32 compute-0 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:32 compute-0 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369590272 unmapped: 47767552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2021) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:25.974276+0000 osd.1 (osd.1) 2021 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:57.800585+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2022 sent 2021 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:26.964128+0000 osd.1 (osd.1) 2022 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -52> 2025-11-22T10:22:27.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369598464 unmapped: 47759360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2022) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:26.964128+0000 osd.1 (osd.1) 2022 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'config diff' '{prefix=config diff}'
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:58.800853+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2023 sent 2022 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:27.961488+0000 osd.1 (osd.1) 2023 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'config show' '{prefix=config show}'
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -37> 2025-11-22T10:22:28.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2023) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:27.961488+0000 osd.1 (osd.1) 2023 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:59.801115+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2024 sent 2023 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:29.000146+0000 osd.1 (osd.1) 2024 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -22> 2025-11-22T10:22:30.000+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368517120 unmapped: 48840704 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2024) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:29.000146+0000 osd.1 (osd.1) 2024 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:00.801369+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2025 sent 2024 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:30.001767+0000 osd.1 (osd.1) 2025 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]:    -11> 2025-11-22T10:22:31.029+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368230400 unmapped: 49127424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client handle_log_ack log(last 2025) v1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  logged 2025-11-22T10:22:30.001767+0000 osd.1 (osd.1) 2025 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: tick
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_tickets
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:01.801551+0000)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  log_queue is 1 last_log 2026 sent 2025 num 1 unsent 1 sending 1
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_client  will send 2025-11-22T10:22:31.030661+0000 osd.1 (osd.1) 2026 : cluster [WRN] 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-osd[89679]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:32 compute-0 ceph-osd[89679]: do_command 'log dump' '{prefix=log dump}'
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:31.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 22 10:22:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380691754' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 10:22:32 compute-0 nova_compute[253661]: 2025-11-22 10:22:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:32 compute-0 rsyslogd[1005]: imjournal from <np0005532048:ceph-osd>: begin to drop messages due to rate-limiting
Nov 22 10:22:32 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 10:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 10:22:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77046748' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 22 10:22:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353379513' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-mon[75021]: pgmap v3625: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3446227911' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2160580551' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3643849374' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1380691754' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/77046748' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3353379513' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 22 10:22:32 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819866200' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 10:22:32 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:32 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:32 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:32.953+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:33 compute-0 podman[445707]: 2025-11-22 10:22:33.428756116 +0000 UTC m=+0.099163678 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 10:22:33 compute-0 podman[445705]: 2025-11-22 10:22:33.451788042 +0000 UTC m=+0.121570818 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:33 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:33 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2819866200' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23129 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:33 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:33 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:33 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:33 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:33.980+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:34 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:34 compute-0 nova_compute[253661]: 2025-11-22 10:22:34.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:34 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1929 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:34 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 22 10:22:34 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683263380' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:34 compute-0 ceph-mon[75021]: from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mon[75021]: from='client.23125 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mon[75021]: pgmap v3626: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:34 compute-0 ceph-mon[75021]: from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mon[75021]: from='client.23129 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:34 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1929 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:34 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2683263380' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 10:22:34 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:34 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:34 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:34.990+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 22 10:22:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814617644' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23143 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:35 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 10:22:35 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240827454' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 nova_compute[253661]: 2025-11-22 10:22:35.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:35 compute-0 ceph-mon[75021]: from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:35 compute-0 ceph-mon[75021]: from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mon[75021]: from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3814617644' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 10:22:35 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2240827454' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:36 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:36 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:36.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 22 10:22:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470089991' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 10:22:36 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 22 10:22:36 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987303036' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='client.23143 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: pgmap v3627: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='client.23147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3470089991' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 10:22:36 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1987303036' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 10:22:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:37.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:37 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23157 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:48:59.214155+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 72867840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:00.214393+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 72867840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:01.214605+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 72867840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:02.214749+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8426000/0x0/0x4ffc00000, data 0x166962f/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 72867840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:03.214875+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600551 data_alloc: 218103808 data_used: 14823424
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 72859648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:04.215058+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e8426000/0x0/0x4ffc00000, data 0x166962f/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:05.215232+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 72859648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080459800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080459800 session 0x562080aa3e00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045a400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56208045a400 session 0x562080af12c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x562081f212c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080459800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080459800 session 0x562084caad20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.193279266s of 11.388338089s, submitted: 62
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x5620819aa000 session 0x5620824d1860
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081f81000 session 0x562082898780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080944000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080944000 session 0x562082899a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:06.215366+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 72712192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x56207e3701e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080459800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080459800 session 0x56207e3712c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:07.215544+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 72712192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:08.215711+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 72712192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654639 data_alloc: 218103808 data_used: 14823424
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:09.215873+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 72704000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e7e7b000/0x0/0x4ffc00000, data 0x1c19691/0x1dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x5620819aa000 session 0x562084caba40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:10.216088+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 72704000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081f81000 session 0x562084caab40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:11.216267+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 72704000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208264ec00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56208264ec00 session 0x562084caa960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x5620828341e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080459800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:12.216403+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 72646656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:13.216534+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364584960 unmapped: 72884224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700748 data_alloc: 218103808 data_used: 20701184
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:14.216833+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364584960 unmapped: 72884224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:15.217064+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081f81000 session 0x56208195c960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e7b7c000/0x0/0x4ffc00000, data 0x1f176b4/0x20d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:16.217258+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:17.217474+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e7b7c000/0x0/0x4ffc00000, data 0x1f176b4/0x20d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:18.217614+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727344 data_alloc: 218103808 data_used: 20705280
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:19.217844+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:20.218067+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:21.218254+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562083f5c000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562083f5c000 session 0x5620827b6f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56208045b800 session 0x562084cab680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:22.218422+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 72826880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e7b7c000/0x0/0x4ffc00000, data 0x1f176b4/0x20d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458c00 session 0x56208195c1e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:23.218541+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.677547455s of 17.389436722s, submitted: 56
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x562082898960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 72957952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804730 data_alloc: 218103808 data_used: 20717568
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:24.218750+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370130944 unmapped: 67338240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:25.218897+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370155520 unmapped: 67313664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e70de000/0x0/0x4ffc00000, data 0x29a66c2/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:26.219014+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:27.219114+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:28.219251+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3855190 data_alloc: 218103808 data_used: 24612864
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:29.219399+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:30.219546+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e70bd000/0x0/0x4ffc00000, data 0x29bf6c2/0x2b7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:31.219692+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:32.219869+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:33.220004+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3855526 data_alloc: 218103808 data_used: 24621056
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:34.220244+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:35.220413+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081c0b800 session 0x5620824cfc20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562083f5c000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:36.220536+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e70bd000/0x0/0x4ffc00000, data 0x29bf6c2/0x2b7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370368512 unmapped: 67100672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081c0b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081c0b800 session 0x562081dc6b40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc1b400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc1b400 session 0x5620824d05a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc37c00 session 0x56207f912f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620824c2800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x5620824c2800 session 0x5620828354a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc1b400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.249974251s of 13.689875603s, submitted: 118
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc1b400 session 0x562082228960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc37c00 session 0x5620827b61e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x562082834000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081c0b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:37.220646+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e70bd000/0x0/0x4ffc00000, data 0x29bf6c2/0x2b7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [0,0,0,7,0,2])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 66625536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081c0b800 session 0x562080b2cf00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620824c2800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x5620824c2800 session 0x562081dc63c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:38.220772+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 66240512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:39.220953+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015898 data_alloc: 234881024 data_used: 25059328
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 66011136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:40.221093+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 66002944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:41.221258+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc1b400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc1b400 session 0x5620828990e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 66002944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x56207fc37c00 session 0x562080af1680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:42.221413+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 66002944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:43.221563+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e5d42000/0x0/0x4ffc00000, data 0x3d4f724/0x3f0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 66002944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562080458000 session 0x56207fdab680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:44.221737+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081c0b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015898 data_alloc: 234881024 data_used: 25059328
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 ms_handle_reset con 0x562081c0b800 session 0x562080b2e780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371417088 unmapped: 66052096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:45.221871+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 371417088 unmapped: 66052096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e5d20000/0x0/0x4ffc00000, data 0x3d70734/0x3f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:46.222338+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 63029248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.419674873s of 10.035657883s, submitted: 122
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:47.222801+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 375431168 unmapped: 62038016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e5d0f000/0x0/0x4ffc00000, data 0x3d81734/0x3f3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:48.222952+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 375439360 unmapped: 62029824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:49.223077+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4095252 data_alloc: 234881024 data_used: 35975168
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 375439360 unmapped: 62029824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562083f5c400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _renew_subs
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620871cc000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620871cc000 session 0x5620827f2f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562083f5c400 session 0x562084caa1e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc1b400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e58fa000/0x0/0x4ffc00000, data 0x3d83313/0x3f43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:50.223217+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380665856 unmapped: 56803328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e58fa000/0x0/0x4ffc00000, data 0x3d83313/0x3f43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,2])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:51.223374+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385990656 unmapped: 51478528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:52.223642+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385990656 unmapped: 51478528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:53.224176+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385990656 unmapped: 51478528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:54.224432+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4199971 data_alloc: 234881024 data_used: 42737664
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385990656 unmapped: 51478528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:55.224627+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385990656 unmapped: 51478528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:56.224886+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:57.225211+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:58.225587+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:49:59.225878+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4199971 data_alloc: 234881024 data_used: 42737664
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:00.226067+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:01.226235+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:02.226374+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 11.245452881s
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 11.245452881s
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.803940296s of 16.148284912s, submitted: 32
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.245686531s, txc = 0x56207faafb00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:03.226587+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385998848 unmapped: 51470336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e51c9000/0x0/0x4ffc00000, data 0x44b4313/0x4674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:04.226799+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4185667 data_alloc: 234881024 data_used: 42770432
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:05.226994+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:06.227228+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378322944 unmapped: 59146240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:07.227438+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378322944 unmapped: 59146240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:08.227647+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378527744 unmapped: 58941440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:09.227787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4243929 data_alloc: 234881024 data_used: 42778624
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383000576 unmapped: 54468608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5126000/0x0/0x4ffc00000, data 0x471c313/0x4718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1,19,0,0,0,36])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:10.227975+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 55459840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:11.228120+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 382681088 unmapped: 54788096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:12.228272+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 382697472 unmapped: 54771712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:13.228502+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.021883780s of 10.293661118s, submitted: 95
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383074304 unmapped: 54394880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:14.228718+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4286189 data_alloc: 234881024 data_used: 42770432
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 55443456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:15.228914+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 59785216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e454b000/0x0/0x4ffc00000, data 0x52f7313/0x52f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:16.229051+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 59785216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:17.229223+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 59785216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:18.229365+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377692160 unmapped: 59777024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e450a000/0x0/0x4ffc00000, data 0x5338313/0x5334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:19.229557+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4296297 data_alloc: 251658240 data_used: 43380736
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:20.229721+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377921536 unmapped: 59547648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e44d0000/0x0/0x4ffc00000, data 0x5372313/0x536e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:21.230027+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377921536 unmapped: 59547648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:22.230209+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377921536 unmapped: 59547648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:23.230361+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377921536 unmapped: 59547648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:24.230542+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4300717 data_alloc: 251658240 data_used: 43737088
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377921536 unmapped: 59547648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:25.230729+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.799511909s of 12.124077797s, submitted: 54
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e44d0000/0x0/0x4ffc00000, data 0x5372313/0x536e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377929728 unmapped: 59539456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x562082835680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x5620828345a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:26.230988+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377954304 unmapped: 59514880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:27.231114+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377962496 unmapped: 59506688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562081dc6f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:28.231270+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377962496 unmapped: 59506688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:29.231719+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034142 data_alloc: 234881024 data_used: 31260672
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 59498496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:30.231889+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5cf2000/0x0/0x4ffc00000, data 0x3b512a1/0x3b4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 59498496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:31.232197+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562080459800 session 0x562082835860
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562080a31a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 59498496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:32.232387+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562081ebd680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:33.232530+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:34.232832+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:35.233157+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:36.233344+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:37.233572+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:38.233763+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:39.233969+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:40.234121+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:41.234300+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:42.234552+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:43.234734+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:44.234937+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:45.235240+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:46.235459+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:47.235679+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:48.235980+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:49.236212+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:50.236368+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:51.236493+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 62873600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:52.236771+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:53.236912+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:54.237168+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:55.237342+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:56.237567+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:57.237778+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:58.238009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374603776 unmapped: 62865408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:50:59.238192+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:00.238376+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:01.238589+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:02.238834+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:03.239528+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:04.239731+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:05.239944+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:06.240112+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 62857216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:07.240450+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 62849024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:08.240609+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 62849024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:09.240747+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374620160 unmapped: 62849024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:10.240868+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 62840832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:11.241004+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 62840832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:12.241157+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 62840832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:13.241353+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 62840832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:14.241540+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374628352 unmapped: 62840832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:15.241720+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:16.241861+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:17.242077+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:18.242281+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:19.242426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:20.242565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:21.242828+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:22.243005+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 62832640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:23.243191+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:24.243411+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848160 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:25.243584+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:26.243724+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:27.243849+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:28.243974+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x56207f912960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x562080a305a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562083f5c400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562083f5c400 session 0x562080b61680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 62824448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562082832f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e6a30000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 62.618110657s of 63.636665344s, submitted: 87
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:29.244100+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3851414 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377790464 unmapped: 59678720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:30.244239+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x5620828332c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x562084cab4a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x562082228b40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562080458000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 59375616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:31.244360+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562080458000 session 0x5620824d1e00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x5620824d1a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 59375616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:32.244476+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 59375616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562081989e00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:33.244597+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 59375616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x562082228f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:34.244950+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x56207f0ff860
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081c0b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3934935 data_alloc: 234881024 data_used: 26705920
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081c0b800 session 0x562081949680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:35.250903+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:36.251043+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:37.251211+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:38.251356+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:39.251531+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4008695 data_alloc: 234881024 data_used: 37068800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:40.251676+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:41.251795+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:42.251937+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:43.252051+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:44.252201+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4008695 data_alloc: 234881024 data_used: 37068800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:45.252428+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e50a9000/0x0/0x4ffc00000, data 0x35fb28e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:46.252569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:47.252751+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.098112106s of 18.473463058s, submitted: 39
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:48.252879+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378290176 unmapped: 59179008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:49.253062+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4035959 data_alloc: 234881024 data_used: 37564416
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:50.253242+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4e0f000/0x0/0x4ffc00000, data 0x389428e/0x388e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:51.253400+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:52.253664+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:53.254222+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:54.254763+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036119 data_alloc: 234881024 data_used: 37568512
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:55.255175+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:56.255437+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4df1000/0x0/0x4ffc00000, data 0x38b228e/0x38ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:57.255624+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:58.255792+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:51:59.256018+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034739 data_alloc: 234881024 data_used: 37572608
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:00.256209+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:01.256407+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.887150764s of 13.739984512s, submitted: 54
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:02.256574+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4de6000/0x0/0x4ffc00000, data 0x38be28e/0x38b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:03.256739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4de5000/0x0/0x4ffc00000, data 0x38bf28e/0x38b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:04.257047+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034995 data_alloc: 234881024 data_used: 37572608
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4de5000/0x0/0x4ffc00000, data 0x38bf28e/0x38b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:05.257298+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378535936 unmapped: 58933248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:06.257568+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562082899680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x562080b2d680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x5620827ba5a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620871cd800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620871cd800 session 0x562084caaf00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081a11800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 389316608 unmapped: 48152576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081a11800 session 0x5620823b61e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:07.257729+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x5620827b6f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x562081dc63c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x5620827b7680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620871cd800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620871cd800 session 0x562082899a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:08.258170+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e42a2000/0x0/0x4ffc00000, data 0x440228e/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:09.258335+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4124418 data_alloc: 234881024 data_used: 37572608
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:10.258752+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:11.259066+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e42a2000/0x0/0x4ffc00000, data 0x440228e/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:12.259210+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081db7c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.360326767s of 11.005005836s, submitted: 31
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081db7c00 session 0x5620824d1860
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:13.259377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:14.259763+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202003 data_alloc: 251658240 data_used: 48168960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e42a2000/0x0/0x4ffc00000, data 0x440228e/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:15.260027+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:16.260347+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:17.260542+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:18.260831+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e42a2000/0x0/0x4ffc00000, data 0x440228e/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:19.261059+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202003 data_alloc: 251658240 data_used: 48168960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:20.261228+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:21.261389+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:22.261523+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e42a2000/0x0/0x4ffc00000, data 0x440228e/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:23.261667+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 383205376 unmapped: 54263808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:24.261801+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4203795 data_alloc: 251658240 data_used: 48205824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.426350594s of 12.480834007s, submitted: 10
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 385835008 unmapped: 51634176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:25.261981+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 388489216 unmapped: 48979968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e3349000/0x0/0x4ffc00000, data 0x535b28e/0x5355000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:26.262284+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 47251456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:27.262438+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:28.262574+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e3319000/0x0/0x4ffc00000, data 0x538a28e/0x5384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:29.263091+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4331523 data_alloc: 251658240 data_used: 49938432
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:30.263333+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:31.263468+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:32.263865+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390324224 unmapped: 47144960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e3319000/0x0/0x4ffc00000, data 0x538a28e/0x5384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [0,0,1])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:33.264161+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390332416 unmapped: 47136768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:34.264451+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4325091 data_alloc: 251658240 data_used: 49938432
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390332416 unmapped: 47136768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:35.264614+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390340608 unmapped: 47128576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:36.264770+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 390340608 unmapped: 47128576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:37.265232+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.178096771s of 12.788263321s, submitted: 115
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562081ebc1e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x56208195c960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e3317000/0x0/0x4ffc00000, data 0x538d28e/0x5387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386154496 unmapped: 51314688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:38.265375+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x56207fdaa3c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386162688 unmapped: 51306496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:39.265545+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4047549 data_alloc: 234881024 data_used: 37572608
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386162688 unmapped: 51306496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:40.265676+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386162688 unmapped: 51306496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:41.265914+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e4dde000/0x0/0x4ffc00000, data 0x38c628e/0x38c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386170880 unmapped: 51298304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:42.266053+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 386170880 unmapped: 51298304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:43.266241+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562080aa2960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620871cd800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620871cd800 session 0x562081ebdc20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381452288 unmapped: 56016896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:44.266449+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381452288 unmapped: 56016896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:45.266675+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381452288 unmapped: 56016896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:46.266872+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381452288 unmapped: 56016896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:47.267020+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381452288 unmapped: 56016896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:48.267285+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:49.267481+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:50.267781+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:51.268142+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:52.268457+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:53.268742+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:54.268942+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:55.269160+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:56.269381+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:57.269494+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:58.269693+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:52:59.269827+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:00.269997+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:01.270175+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:02.270325+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:03.270473+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:04.270704+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:05.270847+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:06.271015+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bce000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:07.271349+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:08.271524+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:09.271703+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869777 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:10.271890+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381460480 unmapped: 56008704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.688121796s of 33.018711090s, submitted: 66
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562084caa1e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:11.272082+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381591552 unmapped: 55877632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2b000/0x0/0x4ffc00000, data 0x2b7b21c/0x2b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:12.272261+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381591552 unmapped: 55877632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:13.272395+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381591552 unmapped: 55877632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:14.272630+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381591552 unmapped: 55877632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3881589 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:15.272786+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381591552 unmapped: 55877632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562084caab40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:16.272955+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381599744 unmapped: 55869440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2b000/0x0/0x4ffc00000, data 0x2b7b21c/0x2b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x56207f912f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:17.273129+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381599744 unmapped: 55869440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x5620828341e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081db8c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081db8c00 session 0x562082898960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2b000/0x0/0x4ffc00000, data 0x2b7b21c/0x2b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:18.273390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381599744 unmapped: 55869440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 184K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.93 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4662 writes, 21K keys, 4662 commit groups, 1.0 writes per commit group, ingest: 26.32 MB, 0.04 MB/s
                                           Interval WAL: 4662 writes, 1573 syncs, 2.96 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:19.273549+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381599744 unmapped: 55869440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889328 data_alloc: 234881024 data_used: 27291648
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:20.273680+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:21.273835+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:22.273984+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:23.274143+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x56207f9665a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.635367393s of 12.874802589s, submitted: 14
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x5620828354a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:24.274352+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889196 data_alloc: 234881024 data_used: 27291648
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:25.274503+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:26.274682+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:27.274883+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:28.275081+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:29.275332+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889152 data_alloc: 234881024 data_used: 27291648
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:30.275487+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:31.275598+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380493824 unmapped: 56975360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:32.276419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x562080b2e780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:33.276619+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:34.276803+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889020 data_alloc: 234881024 data_used: 27291648
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:35.276968+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:36.277206+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:37.277436+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:38.277911+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.914298058s of 15.382857323s, submitted: 3
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:39.278145+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889312 data_alloc: 234881024 data_used: 27295744
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:40.278415+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5b2a000/0x0/0x4ffc00000, data 0x2b7b22c/0x2b74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:41.278618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:42.278788+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 56967168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x562084caa1e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aac00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:43.279019+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 56958976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:44.279189+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 56958976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aac00 session 0x56207fdaa3c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:45.279379+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 56950784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:46.279521+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 56950784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:47.279723+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 56950784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:48.279927+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 56950784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:49.280446+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 56950784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:50.280676+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:51.280856+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:52.281012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:53.281213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:54.281583+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:55.281939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 56942592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:56.282213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:57.282485+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:58.282675+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:53:59.282883+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:00.283035+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:01.283259+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:02.283472+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380534784 unmapped: 56934400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:03.283623+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380542976 unmapped: 56926208 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:04.283841+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380542976 unmapped: 56926208 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:05.284066+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380542976 unmapped: 56926208 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:06.284357+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380551168 unmapped: 56918016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:07.284565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380551168 unmapped: 56918016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:08.284791+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380551168 unmapped: 56918016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:09.284989+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380559360 unmapped: 56909824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:10.285138+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380559360 unmapped: 56909824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:11.285406+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380559360 unmapped: 56909824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:12.285584+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:13.285771+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:14.285973+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:15.286124+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:16.286394+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:17.286635+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:18.286850+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:19.286988+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380567552 unmapped: 56901632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:20.287186+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:21.287340+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:22.287490+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:23.287672+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:24.287867+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:25.288069+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:26.288226+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 56893440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:27.288372+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380583936 unmapped: 56885248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:28.288538+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380583936 unmapped: 56885248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:29.288816+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380583936 unmapped: 56885248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:30.289034+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:31.289210+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:32.289362+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:33.289483+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:34.289600+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:35.289795+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:36.290063+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:37.290180+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380592128 unmapped: 56877056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:38.290388+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:39.290565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:40.290736+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:41.290875+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:42.291000+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:43.291151+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380600320 unmapped: 56868864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:44.291345+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:45.291491+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:46.291632+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:47.291798+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:48.291955+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:49.292111+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:50.292258+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380608512 unmapped: 56860672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 71.696319580s of 72.095497131s, submitted: 22
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f81000 session 0x562081ecc780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:51.292392+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56208045b800 session 0x562082834b40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 56844288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:52.292522+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380633088 unmapped: 56836096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:53.292708+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380698624 unmapped: 56770560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:54.292915+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:55.293056+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:56.293203+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:57.293393+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:58.293591+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:54:59.293735+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:00.293882+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:01.294049+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:02.294210+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380723200 unmapped: 56745984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:03.294390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380731392 unmapped: 56737792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:04.294565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380731392 unmapped: 56737792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:05.294764+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380739584 unmapped: 56729600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:06.295059+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380747776 unmapped: 56721408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:07.295388+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380747776 unmapped: 56721408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:08.295539+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380747776 unmapped: 56721408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:09.295727+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380747776 unmapped: 56721408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:10.295868+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380755968 unmapped: 56713216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:11.296009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380755968 unmapped: 56713216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:12.296179+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380755968 unmapped: 56713216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:13.296379+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380755968 unmapped: 56713216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:14.296566+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380755968 unmapped: 56713216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:15.296726+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:16.296862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:17.297006+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:18.297165+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:19.297340+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:20.297489+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380764160 unmapped: 56705024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:21.297631+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380772352 unmapped: 56696832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:22.297777+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380772352 unmapped: 56696832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:23.297947+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380772352 unmapped: 56696832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:24.298135+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380772352 unmapped: 56696832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:25.298301+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380772352 unmapped: 56696832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:26.298486+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:27.406879+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:28.407037+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:29.407192+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:30.407367+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:31.407523+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:32.407661+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:33.407800+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380780544 unmapped: 56688640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:34.407952+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380788736 unmapped: 56680448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:35.408088+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380788736 unmapped: 56680448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:36.408220+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380788736 unmapped: 56680448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:37.408386+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380788736 unmapped: 56680448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:38.408679+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380796928 unmapped: 56672256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:39.408892+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:40.409091+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:41.409245+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:42.409462+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:43.409622+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:44.410185+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:45.410385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3874203 data_alloc: 234881024 data_used: 26701824
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd1000/0x0/0x4ffc00000, data 0x2ad521c/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:46.410654+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380805120 unmapped: 56664064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:47.410846+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380813312 unmapped: 56655872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:48.410994+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380813312 unmapped: 56655872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.187793732s of 57.912258148s, submitted: 106
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x5620824d1a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:49.411180+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380821504 unmapped: 56647680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:50.411409+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380829696 unmapped: 56639488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:51.411567+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380829696 unmapped: 56639488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:52.411698+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380829696 unmapped: 56639488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:53.411876+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 56631296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:54.412276+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 56631296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:55.412430+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 56631296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:56.412630+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380846080 unmapped: 56623104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:57.412770+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380846080 unmapped: 56623104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:58.412905+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380854272 unmapped: 56614912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:55:59.413033+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380854272 unmapped: 56614912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:00.413421+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380854272 unmapped: 56614912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:01.413562+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380854272 unmapped: 56614912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:02.413808+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380854272 unmapped: 56614912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:03.413947+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:04.414149+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:05.414299+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:06.414569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:07.414760+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:08.414945+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:09.415157+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:10.415387+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:11.415582+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:12.415983+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 56598528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:13.416234+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:14.416593+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:15.416778+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:16.417046+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:17.417212+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:18.417453+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380878848 unmapped: 56590336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:19.417725+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56582144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:20.418136+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56582144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:21.418304+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380895232 unmapped: 56573952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:22.418574+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380903424 unmapped: 56565760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:23.418745+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380903424 unmapped: 56565760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:24.419000+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380903424 unmapped: 56565760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:25.419230+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3873318 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380903424 unmapped: 56565760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:26.419464+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380903424 unmapped: 56565760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:27.419578+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380919808 unmapped: 56549376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:28.419828+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562080b2b680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f8f400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f8f400 session 0x5620824d1680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x56207fdaa780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56208045b800 session 0x5620824ce780
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380919808 unmapped: 56549376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.256301880s of 40.264076233s, submitted: 2
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:29.419988+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x5620827f2f00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f81000 session 0x56207f9123c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082646c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562082646c00 session 0x562081ebde00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x56207f913680
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56208045b800 session 0x562084caa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:30.420134+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3932948 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x5620824d0d20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5593000/0x0/0x4ffc00000, data 0x311321e/0x310b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:31.420393+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f81000 session 0x56208199a3c0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:32.420567+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:33.420764+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562084e05400
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562084e05400 session 0x5620827b7a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x562082229a40
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:34.421024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:35.421189+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978210 data_alloc: 234881024 data_used: 32858112
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:36.421364+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5592000/0x0/0x4ffc00000, data 0x311322e/0x310c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:37.421569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 381206528 unmapped: 56262656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56208045b800 session 0x5620827f2d20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x5620819aa000 session 0x562082228d20
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:38.421770+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562081f81000 session 0x56208059b4a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:39.421984+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:40.422154+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:41.422359+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:42.422567+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:43.422716+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:44.423024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:45.423244+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:46.423481+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:47.423665+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:48.423817+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:49.423966+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:50.424151+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:51.424429+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:52.424739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:53.424971+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377847808 unmapped: 59621376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:54.425261+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 59613184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:55.425420+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 59613184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:56.425666+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 59613184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:57.425864+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 59613184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:58.426073+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 59613184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:56:59.426303+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:00.426563+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:01.426712+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:02.427033+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:03.427190+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:04.427451+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:05.427595+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:06.427762+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377864192 unmapped: 59604992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:07.427969+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:08.428207+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:09.428394+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:10.428532+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:11.428662+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:12.428858+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:13.429007+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:14.429264+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 59596800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:15.429377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 59588608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:16.429569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 59588608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:17.429771+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 59588608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:18.429918+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 59580416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:19.430100+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 59580416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:20.430235+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 59580416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:21.430394+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 59580416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:22.430545+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 59580416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:23.430710+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:24.430905+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:25.431069+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:26.431287+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:27.431504+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:28.431650+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:29.431874+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:30.432109+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 59572224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:31.432265+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 59564032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:32.432462+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 59564032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:33.432723+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 59564032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:34.432962+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:35.433128+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:36.433305+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:37.433490+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:38.433719+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 59555840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:39.433939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377929728 unmapped: 59539456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:40.434086+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377929728 unmapped: 59539456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:41.434263+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377929728 unmapped: 59539456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:42.434434+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377929728 unmapped: 59539456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:43.434603+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377937920 unmapped: 59531264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:44.434880+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377937920 unmapped: 59531264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:45.435024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377937920 unmapped: 59531264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:46.435219+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377937920 unmapped: 59531264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:47.435776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:48.437292+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 59523072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:49.438690+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 59523072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:50.438879+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 59523072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:51.439138+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377954304 unmapped: 59514880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:52.440012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377954304 unmapped: 59514880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:53.441474+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377954304 unmapped: 59514880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:54.442787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377962496 unmapped: 59506688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:55.443405+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377962496 unmapped: 59506688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:56.443825+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 59498496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:57.444078+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377970688 unmapped: 59498496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:58.444259+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:57:59.444674+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:00.445031+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:01.445379+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:02.445709+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:03.445973+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:04.446362+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:05.446601+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 59490304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:06.446776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:07.446943+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:08.447086+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:09.447297+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:10.447544+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:11.447736+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377987072 unmapped: 59482112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:12.447936+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377995264 unmapped: 59473920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:13.448161+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 377995264 unmapped: 59473920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:14.448427+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:15.448588+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:16.448725+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:17.448892+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:18.449220+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:19.449422+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378003456 unmapped: 59465728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:20.450586+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:21.451073+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:22.451281+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:23.451438+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:24.451629+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:25.451810+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:26.452202+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:27.452346+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378011648 unmapped: 59457536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:28.452565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:29.453012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:30.453422+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:31.453747+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:32.453882+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:33.454105+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:34.454392+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378028032 unmapped: 59441152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:35.454582+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:36.454707+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:37.454882+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:38.457202+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:39.457461+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:40.457683+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:41.457916+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:42.458135+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 59432960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:43.458360+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:44.458526+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:45.458738+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:46.458931+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:47.459087+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:48.459235+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:49.459412+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:50.459545+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378052608 unmapped: 59416576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:51.459716+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 59408384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:52.460218+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 59408384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:53.460674+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 59408384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:54.461006+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 59400192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:55.461203+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 59400192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:56.461370+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 59400192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:57.461618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 59400192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:58.461759+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 59392000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:58:59.461974+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 59392000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:00.462161+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 59392000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:01.462705+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 59392000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:02.462875+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:03.463047+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:04.463421+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:05.463782+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:06.463949+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:07.464111+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:08.464366+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:09.464618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 59383808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:10.464819+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:11.465192+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:12.465433+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:13.465700+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:14.465934+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378101760 unmapped: 59367424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:15.466109+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:16.466278+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:17.466426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:18.466600+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:19.466780+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:20.466985+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:21.467149+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:22.467359+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378109952 unmapped: 59359232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:23.467512+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378118144 unmapped: 59351040 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:24.467679+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378118144 unmapped: 59351040 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:25.468027+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378118144 unmapped: 59351040 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:26.468194+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:27.468422+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:28.468635+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:29.468817+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:30.468941+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:31.469074+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:32.469207+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378134528 unmapped: 59334656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:33.469365+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:34.469569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:35.469722+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:36.469837+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:37.469937+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:38.470073+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 59326464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:39.470259+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:40.470426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:41.470606+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:42.470790+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:43.470996+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:44.471299+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378150912 unmapped: 59318272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:45.471575+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378159104 unmapped: 59310080 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:46.471769+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378159104 unmapped: 59310080 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:47.472006+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:48.472206+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:49.472404+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:50.472619+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:51.472823+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:52.473039+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:53.473216+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:54.473464+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378175488 unmapped: 59293696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:55.473634+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378183680 unmapped: 59285504 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:56.473862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378183680 unmapped: 59285504 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:57.474454+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378183680 unmapped: 59285504 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:58.474840+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378191872 unmapped: 59277312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T09:59:59.475090+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378191872 unmapped: 59277312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:00.477035+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378191872 unmapped: 59277312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:01.478078+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378191872 unmapped: 59277312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:02.478828+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378200064 unmapped: 59269120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:03.478968+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378200064 unmapped: 59269120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:04.479255+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378200064 unmapped: 59269120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:05.479381+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:06.479735+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:07.479866+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:08.480515+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:09.480674+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:10.481274+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378208256 unmapped: 59260928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:11.481445+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:12.481885+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:13.482032+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:14.482474+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:15.482641+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:16.482776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:17.482926+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378216448 unmapped: 59252736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:18.483274+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378224640 unmapped: 59244544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:19.483476+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378241024 unmapped: 59228160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:20.483862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378241024 unmapped: 59228160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:21.484079+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378241024 unmapped: 59228160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:22.484385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:23.484566+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:24.484931+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:25.485128+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:26.485423+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:27.485773+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:28.485989+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:29.486100+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 59219968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:30.486668+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378257408 unmapped: 59211776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:31.486801+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378257408 unmapped: 59211776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:32.486967+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378257408 unmapped: 59211776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:33.487071+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378257408 unmapped: 59211776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:34.487253+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378257408 unmapped: 59211776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:35.487378+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378273792 unmapped: 59195392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:36.487546+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378273792 unmapped: 59195392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:37.487728+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081c0a000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:38.487903+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:39.488098+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:40.488237+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:41.488432+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:42.488582+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378281984 unmapped: 59187200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:43.488732+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378290176 unmapped: 59179008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:44.488895+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378290176 unmapped: 59179008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:45.489042+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 59170816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:46.489193+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378306560 unmapped: 59162624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:47.489382+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378306560 unmapped: 59162624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:48.489515+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378306560 unmapped: 59162624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:49.489670+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378306560 unmapped: 59162624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:50.489838+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378306560 unmapped: 59162624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:51.490072+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:52.490377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:53.490523+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:54.490749+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:55.490919+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:56.491165+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:57.491373+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:58.491534+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378314752 unmapped: 59154432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:00:59.491722+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378322944 unmapped: 59146240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:00.491919+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378322944 unmapped: 59146240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:01.492307+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 59138048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:02.493076+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 59129856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:03.493389+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 59129856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:04.493790+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 59129856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:05.494078+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378339328 unmapped: 59129856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:06.494419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378347520 unmapped: 59121664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:07.494713+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:08.495203+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:09.495418+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:10.496002+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:11.496219+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:12.496666+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:13.496932+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:14.497414+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:15.497636+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:16.497932+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:17.498190+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 59113472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:18.498539+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:19.498693+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:20.498977+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:21.499147+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:22.499405+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:23.499550+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:24.499758+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:25.499935+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378372096 unmapped: 59097088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:26.500076+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:27.500218+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:28.500396+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:29.500610+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:30.500911+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:31.501250+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378380288 unmapped: 59088896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:32.501538+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378396672 unmapped: 59072512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:33.501739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378396672 unmapped: 59072512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:34.501951+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:35.502164+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:36.502409+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:37.502691+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:38.503049+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:39.503268+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378404864 unmapped: 59064320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:40.503491+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:41.503734+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:42.503980+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:43.504258+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:44.504572+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:45.504791+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:46.505066+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378413056 unmapped: 59056128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:47.505292+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378421248 unmapped: 59047936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:48.505545+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378429440 unmapped: 59039744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:49.505749+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378429440 unmapped: 59039744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:50.506013+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:51.506276+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 234881024 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:52.506524+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:53.506711+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:54.507142+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:55.507394+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 59023360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:56.507615+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:57.507807+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:58.507939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:01:59.508147+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:00.508384+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:01.508606+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:02.508771+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378454016 unmapped: 59015168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:03.508969+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378462208 unmapped: 59006976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:04.509144+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378462208 unmapped: 59006976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:05.509466+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378462208 unmapped: 59006976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:06.510059+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378470400 unmapped: 58998784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:07.510407+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:08.511768+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:09.512078+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:10.513115+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:11.513506+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:12.514479+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:13.514882+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 58990592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:14.515210+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 58982400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:15.515584+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 58982400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:16.515908+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 58982400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:17.516172+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 58982400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:18.516419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 58982400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:19.516652+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378503168 unmapped: 58966016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:20.517063+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378503168 unmapped: 58966016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:21.517436+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378503168 unmapped: 58966016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:22.517726+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378511360 unmapped: 58957824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:23.517970+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378511360 unmapped: 58957824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:24.518238+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378511360 unmapped: 58957824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:25.518578+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378511360 unmapped: 58957824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:26.518870+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378511360 unmapped: 58957824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:27.519060+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:28.519419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:29.519587+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:30.519800+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:31.520092+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:32.520435+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:33.520674+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:34.520959+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:35.521194+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:36.521488+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:37.521710+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378519552 unmapped: 58949632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:38.521987+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378544128 unmapped: 58925056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:39.522152+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378544128 unmapped: 58925056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:40.522368+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378544128 unmapped: 58925056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:41.522579+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378544128 unmapped: 58925056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:42.522811+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378544128 unmapped: 58925056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:43.523005+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:44.523228+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:45.523461+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:46.523621+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:47.523776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620840d1800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:48.523952+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:49.524130+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:50.524396+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378552320 unmapped: 58916864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:51.524620+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:52.524890+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:53.525070+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:54.525431+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:55.525566+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:56.525821+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:57.525969+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:58.526103+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378568704 unmapped: 58900480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:02:59.526335+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378585088 unmapped: 58884096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:00.527653+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378585088 unmapped: 58884096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:01.527896+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378585088 unmapped: 58884096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:02.528103+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378585088 unmapped: 58884096 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:03.528494+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378593280 unmapped: 58875904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:04.528847+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378593280 unmapped: 58875904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:05.529088+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 58867712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:06.529433+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 58867712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:07.529670+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 58867712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:08.529894+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 58867712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:09.530137+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 58867712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:10.530404+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 58859520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:11.530722+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 58859520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:12.531023+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 58859520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:13.531532+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 58859520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:14.531888+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 58859520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:15.532058+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:16.532503+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:17.532812+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:18.533055+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.92 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 311 writes, 649 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 311 writes, 148 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets getting new tickets!
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:19.533831+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _finish_auth 0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:19.535081+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:20.534034+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:21.534262+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:22.534580+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 58851328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:23.534748+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378634240 unmapped: 58834944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:24.535037+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378634240 unmapped: 58834944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:25.535291+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378634240 unmapped: 58834944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:26.535643+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:27.535847+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: mgrc ms_handle_reset ms_handle_reset con 0x56208264bc00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1636168236
Nov 22 10:22:37 compute-0 ceph-osd[88656]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1636168236,v1:192.168.122.100:6801/1636168236]
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: get_auth_request con 0x562082646c00 auth_method 0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: mgrc handle_mgr_configure stats_period=5
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:28.536002+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:29.536211+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:30.536383+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:31.536572+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37000 session 0x5620827b6960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207fc37c00
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:32.536787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:33.536997+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:34.537209+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:35.537462+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:36.537711+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:37.538568+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:38.538720+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:39.538847+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:40.539012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:41.539204+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378642432 unmapped: 58826752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:42.539740+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 58818560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:43.540102+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 58818560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:44.540555+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 58818560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:45.540836+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 58818560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:46.541007+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 58818560 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:47.541288+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 58810368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:48.541753+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 58810368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:49.542159+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 58802176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:50.542481+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 58793984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:51.542740+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 58793984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:52.543119+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 58793984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:53.543390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 58793984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:54.543739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 58793984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:55.544005+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:56.544230+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:57.544488+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:58.544676+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:03:59.544900+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:00.545123+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:01.545381+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:02.545586+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 58785792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:03.545884+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378691584 unmapped: 58777600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:04.546101+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378691584 unmapped: 58777600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:05.546289+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378691584 unmapped: 58777600 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:06.546557+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 58769408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:07.546757+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 58769408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:08.546900+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 58769408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:09.547045+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 58769408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:10.547241+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 58761216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:11.547416+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 58761216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:12.547588+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 58761216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:13.547806+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378716160 unmapped: 58753024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:14.547966+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 58744832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:15.548135+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 58744832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:16.548421+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 58744832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:17.548598+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 58744832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:18.548838+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 58744832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:19.548995+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378732544 unmapped: 58736640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:20.549137+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378732544 unmapped: 58736640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:21.549358+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378732544 unmapped: 58736640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:22.549520+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 58728448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:23.549679+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 58728448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:24.550093+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 58728448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:25.550385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 58728448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:26.550668+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 58728448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:27.550875+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:28.551124+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:29.551486+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:30.551685+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:31.551905+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:32.552090+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:33.552269+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378748928 unmapped: 58720256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:34.552594+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 58712064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:35.552740+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x562083f5c000 session 0x562080a9c000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 58712064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:36.553024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378765312 unmapped: 58703872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:37.553237+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378773504 unmapped: 58695680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:38.553522+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:39.553723+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:40.554011+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:41.554282+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:42.554520+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:43.554757+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:44.554993+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:45.555146+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378781696 unmapped: 58687488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:46.555294+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378789888 unmapped: 58679296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:47.555440+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378789888 unmapped: 58679296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:48.555612+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378789888 unmapped: 58679296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:49.555780+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378789888 unmapped: 58679296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:50.556009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378789888 unmapped: 58679296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 501.404815674s of 502.049285889s, submitted: 29
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:51.556185+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378798080 unmapped: 58671104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:52.556419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378798080 unmapped: 58671104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880953 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:53.556578+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378822656 unmapped: 58646528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:54.556816+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:55.556984+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:56.557269+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:57.557391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:58.557636+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:04:59.557812+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:00.558022+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:01.558186+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:02.558529+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:03.558712+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:04.559012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:05.559185+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:06.559430+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:07.559618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:08.559846+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:09.560036+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:10.560267+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:11.560536+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:12.560763+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:13.560950+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:14.561140+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:15.561274+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:16.561474+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:17.561639+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:18.561794+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378871808 unmapped: 58597376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:19.561966+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:20.562480+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:21.562711+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:22.562943+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:23.563404+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:24.564152+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:25.564555+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:26.565131+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:27.565479+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:28.565706+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:29.565862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:30.566216+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:31.566583+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378896384 unmapped: 58572800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:32.566940+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378896384 unmapped: 58572800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:33.567240+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378896384 unmapped: 58572800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:34.567586+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378896384 unmapped: 58572800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:35.567787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378896384 unmapped: 58572800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:36.567953+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:37.568114+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:38.568254+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:39.568506+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:40.568789+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:41.569044+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 58556416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:42.569207+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:43.569393+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:44.569685+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:45.569853+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:46.570059+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:47.570268+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:48.570500+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:49.570703+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:50.570900+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:51.571160+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:52.571376+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:53.571547+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:54.571732+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:55.571886+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:56.572033+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:57.572183+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:58.572438+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:05:59.572639+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:00.572813+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:01.573012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:02.573241+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378945536 unmapped: 58523648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:03.573425+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378945536 unmapped: 58523648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:04.573611+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378945536 unmapped: 58523648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:05.573799+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378945536 unmapped: 58523648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:06.573953+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:07.574154+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:08.574430+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:09.574611+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:10.574743+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:11.574916+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:12.575030+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:13.575186+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:14.575515+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:15.575667+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:16.575875+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:17.576032+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:18.576204+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:19.576365+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:20.576529+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:21.576775+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:22.576949+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:23.577122+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:24.577406+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379002880 unmapped: 58466304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:25.577554+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379002880 unmapped: 58466304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:26.577707+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379002880 unmapped: 58466304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:27.577864+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:28.578603+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:29.578785+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:30.578939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:31.579092+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:32.579266+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:33.579428+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:34.579611+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 58449920 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:35.579754+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562083f5c000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:36.579894+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:37.580082+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:38.580303+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:39.580503+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:40.580682+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:41.580831+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:42.580984+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:43.581123+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:44.581486+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:45.581645+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:46.581860+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:47.582034+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:48.582206+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:49.582377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:50.582575+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:51.582998+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:52.583155+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379060224 unmapped: 58408960 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:53.583407+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379068416 unmapped: 58400768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:54.583659+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379068416 unmapped: 58400768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:55.583843+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379068416 unmapped: 58400768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:56.584023+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379076608 unmapped: 58392576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:57.584219+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:58.584427+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:06:59.584599+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:00.584787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:01.584991+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:02.585264+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:03.585473+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:04.585692+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:05.585857+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:06.586066+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:07.586244+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:08.586425+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:09.586588+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379109376 unmapped: 58359808 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:10.586821+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:11.586992+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:12.587261+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:13.587442+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:14.587649+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:15.587847+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 58343424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:16.588033+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 58343424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:17.588252+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 58343424 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:18.588461+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:19.588676+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:20.588865+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:21.589077+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:22.589252+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:23.589473+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:24.589757+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:25.589973+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379133952 unmapped: 58335232 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:26.590167+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379150336 unmapped: 58318848 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:27.590787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379158528 unmapped: 58310656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:28.590932+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379158528 unmapped: 58310656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:29.591248+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379158528 unmapped: 58310656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:30.592248+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379158528 unmapped: 58310656 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:31.593408+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:32.593560+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:33.594130+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:34.595008+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:35.595180+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:36.595884+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:37.596548+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 58302464 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:38.597092+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379174912 unmapped: 58294272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:39.597574+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379174912 unmapped: 58294272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:40.598198+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379174912 unmapped: 58294272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:41.598660+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379174912 unmapped: 58294272 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:42.599025+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:43.599293+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:44.599710+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:45.599899+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:46.600118+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:47.600275+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:48.600570+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:49.600751+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:50.601132+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:51.601467+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:52.601654+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:53.601857+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:54.602083+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:55.602242+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:56.602385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:57.602533+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:58.602733+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:07:59.603062+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:00.603287+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:01.603548+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:02.603805+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:03.604067+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:04.604422+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:05.604659+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:06.604854+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:07.605040+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:08.605374+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:09.605581+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:10.605814+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:11.606016+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:12.606257+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:13.606468+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:14.608144+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:15.608357+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:16.608618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:17.608939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:18.609141+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:19.609442+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:20.609662+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:21.609851+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:22.610059+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:23.610226+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:24.610584+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:25.610798+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:26.610953+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:27.611213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:28.611398+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:29.611657+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:30.611851+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:31.612166+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:32.612413+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:33.612733+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:34.612951+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:35.613181+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 58163200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:36.613366+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 58163200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:37.613547+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:38.613791+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:39.613949+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:40.614116+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:41.614253+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:42.614458+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:43.614621+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:44.614902+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:45.615110+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:46.615244+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:47.615385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:48.615572+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:49.615771+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:50.615929+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:51.616151+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:52.616430+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379338752 unmapped: 58130432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:53.616605+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379338752 unmapped: 58130432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:54.616836+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:55.616999+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:56.617220+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:57.617401+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:58.617776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:08:59.617987+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:00.618151+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:01.618292+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:02.618448+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:03.618619+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:04.619444+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:05.619592+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:06.620085+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:07.620268+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379371520 unmapped: 58097664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:08.620518+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:09.620683+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:10.620884+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:11.621060+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:12.621372+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:13.621516+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:14.621902+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:15.622277+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:16.622642+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:17.622943+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:18.623174+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:19.623419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:20.623685+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:21.623867+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 58064896 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:22.624009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379412480 unmapped: 58056704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:23.624201+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:24.624503+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:25.624736+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:26.624955+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379428864 unmapped: 58040320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:27.625169+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379428864 unmapped: 58040320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:28.625436+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379428864 unmapped: 58040320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:29.625618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379428864 unmapped: 58040320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:30.625759+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379428864 unmapped: 58040320 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:31.625910+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:32.626042+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:33.626377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:34.626583+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:35.626809+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:36.627024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:37.627213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:38.627555+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:39.628242+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:40.629129+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:41.629713+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:42.630288+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:43.630712+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:44.630935+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:45.631367+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:46.631631+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:47.632061+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:48.632202+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:49.632388+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:50.632565+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:51.632781+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:52.632952+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:53.633111+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:54.633342+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:55.633522+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:56.633665+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:57.633800+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:58.633945+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:09:59.634155+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:00.634392+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:01.634696+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:02.634982+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:03.635199+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:04.635446+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:05.635663+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:06.635856+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 57958400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:07.636107+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 57958400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:08.636308+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 57958400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:09.636698+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379518976 unmapped: 57950208 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:10.636879+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379518976 unmapped: 57950208 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:11.637029+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:12.637230+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:13.637370+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:14.637551+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:15.637746+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:16.637958+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 57925632 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:17.638141+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:18.638302+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:19.638563+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:20.638737+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:21.638938+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:22.639139+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:23.639300+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:24.639609+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:25.639815+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:26.639978+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:27.640132+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:28.640378+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:29.640532+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:30.640670+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:31.640818+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:32.640971+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:33.641087+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:34.641488+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:35.641636+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:36.641857+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:37.642067+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:38.642205+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:39.642394+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:40.642550+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:41.642738+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379592704 unmapped: 57876480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:42.642993+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379592704 unmapped: 57876480 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:43.643171+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:44.643774+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:45.644062+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:46.644561+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379617280 unmapped: 57851904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:47.644718+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:48.645037+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:49.645474+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:50.645778+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:51.646009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:52.646229+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:53.646491+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:54.646739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:55.646980+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:56.647195+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:57.647456+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379633664 unmapped: 57835520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:58.647619+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379633664 unmapped: 57835520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:10:59.647897+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:00.648052+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:01.648213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:02.648423+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:03.648688+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:04.648969+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:05.649512+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:06.649760+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:07.649916+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 57810944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:08.650119+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:09.650377+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:10.650603+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:11.650768+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:12.650963+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:13.651142+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:14.651398+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:15.651557+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:16.651717+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:17.651932+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:18.652187+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 57778176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:19.652411+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 57778176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:20.652577+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 57778176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:21.652730+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 57778176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:22.652959+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 57778176 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:23.653138+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:24.653329+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:25.653479+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:26.779490+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:27.779730+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:28.779983+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:29.780303+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:30.780589+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:31.780774+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:32.780925+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:33.781169+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:34.781577+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:35.781795+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:36.781970+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:37.782114+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379723776 unmapped: 57745408 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:38.782276+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:39.782419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:40.782673+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:41.782824+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:42.782972+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:43.783191+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:44.783374+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:45.783571+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:46.783787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:47.783995+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:48.784399+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:49.784552+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:50.784658+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:51.784779+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:52.785002+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:53.785145+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:54.785367+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379772928 unmapped: 57696256 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:55.785542+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:56.785641+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:57.785762+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:58.785920+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:11:59.786012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:00.786146+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:01.786406+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:02.786585+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:03.786739+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:04.786955+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:05.787182+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:06.787334+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:07.787501+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:08.787731+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:09.787978+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:10.788239+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 57663488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:11.788519+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 57663488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:12.788717+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 57663488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:13.788842+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379813888 unmapped: 57655296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:14.789015+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:15.789175+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:16.789385+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:17.789538+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:18.789701+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:19.789946+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:20.790192+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:21.790413+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:22.790548+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:23.790737+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:24.790958+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:25.791141+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:26.791250+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379854848 unmapped: 57614336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:27.791462+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:28.791632+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:29.791780+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:30.791967+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:31.792129+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:32.792269+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:33.792408+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:34.792564+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379871232 unmapped: 57597952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:35.792724+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:36.792863+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:37.792989+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:38.793103+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:39.793296+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:40.793525+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:41.793717+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:42.793890+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:43.794016+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378822656 unmapped: 58646528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:44.794241+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378822656 unmapped: 58646528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:45.794404+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378822656 unmapped: 58646528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:46.794543+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378830848 unmapped: 58638336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:47.794709+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378830848 unmapped: 58638336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:48.794923+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378830848 unmapped: 58638336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:49.795088+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378830848 unmapped: 58638336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:50.795252+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378830848 unmapped: 58638336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:51.795454+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:52.795617+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:53.795819+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378839040 unmapped: 58630144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:54.796105+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:55.796358+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:56.796575+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:57.796772+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:58.796953+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378847232 unmapped: 58621952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:12:59.797138+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:00.797360+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:01.797505+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378855424 unmapped: 58613760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:02.797716+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:03.797926+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:04.798220+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:05.798390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 58605568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:06.798603+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378871808 unmapped: 58597376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:07.798748+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378871808 unmapped: 58597376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:08.798872+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378871808 unmapped: 58597376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:09.799056+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:10.799275+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:11.799435+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:12.799626+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:13.799800+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:14.800242+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 58589184 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:15.800412+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:16.800588+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:17.801585+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 15K syncs, 2.91 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e325090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:18.801815+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:19.802024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:20.802273+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378888192 unmapped: 58580992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:21.802530+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378904576 unmapped: 58564608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:22.802715+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378904576 unmapped: 58564608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:23.802924+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:24.803160+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:25.803347+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378920960 unmapped: 58548224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:26.803478+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:27.803707+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:28.803827+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:29.804027+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:30.804240+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 58540032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:31.804389+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:32.804564+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:33.804752+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:34.804959+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:35.805175+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:36.805399+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:37.805541+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:38.805736+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 58531840 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:39.805902+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378953728 unmapped: 58515456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:40.806058+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378953728 unmapped: 58515456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:41.806188+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378953728 unmapped: 58515456 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:42.806390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:43.827367+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:44.827578+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:45.827728+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:46.827911+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378961920 unmapped: 58507264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:47.828128+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:48.828264+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:49.828391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378970112 unmapped: 58499072 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:50.828543+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:51.828749+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:52.828923+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:53.829166+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:54.829382+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378978304 unmapped: 58490880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:55.829501+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:56.829633+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:57.829819+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:58.829985+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 58482688 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:13:59.830168+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:00.830438+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:01.830569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:02.830700+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 378994688 unmapped: 58474496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:03.830849+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379002880 unmapped: 58466304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:04.831026+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379002880 unmapped: 58466304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:05.831201+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:06.831396+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:07.831514+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:08.831723+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:09.831961+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:10.832189+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379011072 unmapped: 58458112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:11.832387+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:12.832587+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:13.832734+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379027456 unmapped: 58441728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:14.833003+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:15.833282+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:16.833504+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:17.833666+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:18.833897+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:19.834147+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:20.834420+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379035648 unmapped: 58433536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:21.834537+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:22.834674+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:23.834851+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:24.835007+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:25.835218+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:26.835390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379043840 unmapped: 58425344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:27.835544+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:28.835718+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:29.835850+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:30.836035+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:31.836212+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:32.836415+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:33.836623+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:34.836902+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379052032 unmapped: 58417152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:35.837067+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379076608 unmapped: 58392576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:36.837234+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379076608 unmapped: 58392576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:37.837410+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379076608 unmapped: 58392576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:38.837557+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:39.837707+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:40.837869+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:41.838167+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:42.838403+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 58384384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:43.838556+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 58376192 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:44.838717+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:45.838862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:46.839030+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:47.839212+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:48.839449+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:49.839694+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:50.839855+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.775512695s of 600.162658691s, submitted: 106
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 58368000 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:51.840053+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 58351616 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:52.840244+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379142144 unmapped: 58327040 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:53.840378+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:54.841103+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:55.841468+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:56.841781+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:57.841970+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:58.842175+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:14:59.842827+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:00.843384+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:01.843778+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379191296 unmapped: 58277888 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:02.844072+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:03.844617+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:04.845588+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:05.845880+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:06.846156+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:07.846435+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:08.846611+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379199488 unmapped: 58269696 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:09.846842+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:10.846996+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x5620819aa000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:11.847165+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:12.847408+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:13.847620+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:14.847837+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379215872 unmapped: 58253312 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:15.848008+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:16.848225+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:17.848444+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:18.848632+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:19.848827+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:20.849024+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:21.849172+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379224064 unmapped: 58245120 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:22.849480+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:23.849658+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:24.849852+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379232256 unmapped: 58236928 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:25.849955+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:26.850141+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:27.850375+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:28.850529+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:29.850661+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:30.850831+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379240448 unmapped: 58228736 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:31.850988+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:32.851142+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:33.851278+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:34.851473+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:35.851636+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:36.851816+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:37.852122+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:38.852390+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 58220544 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:39.852618+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379256832 unmapped: 58212352 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:40.852791+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379256832 unmapped: 58212352 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:41.852989+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379256832 unmapped: 58212352 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:42.853176+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379256832 unmapped: 58212352 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:43.853440+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379256832 unmapped: 58212352 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:44.853613+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:45.853784+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:46.853964+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379265024 unmapped: 58204160 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:47.854221+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:48.854391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:49.854537+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:50.854692+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:51.854848+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:52.855011+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:53.855236+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:54.855458+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 58195968 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:55.855646+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:56.855798+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 58187776 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:57.855964+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:58.856084+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:15:59.856194+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:00.856309+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:01.856471+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 58179584 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:02.856626+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:03.856761+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:04.857519+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 58171392 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:05.858221+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 58163200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:06.858821+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 58163200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:07.859138+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 58163200 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:08.859590+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:09.860009+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 58155008 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:10.860425+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:11.860776+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:12.860965+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:13.861364+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:14.862336+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:15.862525+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:16.862765+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:17.863016+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:18.863265+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 58146816 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:19.863525+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:20.863710+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:21.863857+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 58138624 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:22.864004+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379338752 unmapped: 58130432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:23.864226+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379338752 unmapped: 58130432 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:24.864447+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:25.864598+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379346944 unmapped: 58122240 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:26.864860+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:27.865011+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:28.865245+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:29.865486+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379355136 unmapped: 58114048 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:30.865742+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:31.865904+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:32.866073+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:33.866305+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:34.866662+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379363328 unmapped: 58105856 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:35.866850+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379371520 unmapped: 58097664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:36.866990+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379371520 unmapped: 58097664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:37.867548+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379371520 unmapped: 58097664 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:38.867714+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 58089472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:39.867854+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 58089472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:40.867988+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 58089472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:41.868151+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 58089472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:42.868309+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 58089472 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:43.868463+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379387904 unmapped: 58081280 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:44.868643+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:45.868787+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:46.868921+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:47.869140+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:48.869338+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:49.869507+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:50.869634+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379396096 unmapped: 58073088 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:51.869785+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379412480 unmapped: 58056704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:52.869961+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379412480 unmapped: 58056704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:53.870226+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379412480 unmapped: 58056704 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:54.870600+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:55.870815+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:56.871041+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:57.871246+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:58.871451+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379420672 unmapped: 58048512 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:16:59.871728+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:00.871952+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:01.872147+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:02.872484+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:03.872648+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:04.872874+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:05.873092+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:06.873278+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 58032128 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:07.873442+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:08.873650+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:09.873851+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:10.874037+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:11.874197+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:12.874392+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 58015744 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:13.874596+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:14.874867+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:15.875048+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:16.875217+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562081f81000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:17.875370+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379461632 unmapped: 58007552 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:18.875544+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:19.875683+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:20.875804+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:21.875926+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:22.876113+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 57999360 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:23.876301+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:24.876591+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 57991168 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:25.876768+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 57982976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:26.876958+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 57982976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:27.877107+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 57982976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:28.877262+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 57982976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:29.877449+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 57982976 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:30.877581+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:31.877733+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:32.877911+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 57974784 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:33.878140+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:34.878391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:35.878569+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:36.878708+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:37.878888+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 57966592 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:38.879118+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 57958400 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:39.879307+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:40.879595+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:41.879831+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:42.879997+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:43.880177+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:44.880391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:45.880533+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:46.880673+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 57942016 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:47.880864+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:48.881034+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:49.881179+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:50.881430+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:51.881624+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:52.881780+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:53.881979+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:54.882176+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 57933824 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:55.882405+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:56.882783+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:57.882930+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:58.883105+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:17:59.883279+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:00.883479+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 57917440 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:01.883643+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:02.883820+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:03.883990+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:04.884194+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379559936 unmapped: 57909248 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:05.884391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:06.884586+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:07.884796+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:08.885042+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:09.885393+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:10.885679+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379568128 unmapped: 57901056 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:11.885869+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:12.886062+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:13.886213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379576320 unmapped: 57892864 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:14.886634+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:15.886810+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:16.887055+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:17.887250+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:18.887499+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379584512 unmapped: 57884672 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:19.887691+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:20.887852+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:21.888013+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:22.888245+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:23.888441+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:24.888668+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:25.888883+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:26.889054+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379600896 unmapped: 57868288 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:27.889238+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379617280 unmapped: 57851904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:28.889433+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379617280 unmapped: 57851904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:29.889606+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379617280 unmapped: 57851904 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:30.889897+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:31.890148+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc37c00 session 0x5620827bb0e0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x562082652000
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:32.890391+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379625472 unmapped: 57843712 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:33.890583+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379633664 unmapped: 57835520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:34.890768+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379633664 unmapped: 57835520 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:35.890895+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:36.891038+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:37.891374+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379641856 unmapped: 57827328 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:38.891609+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:39.891851+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:40.892025+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:41.892184+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 57819136 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:42.892477+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 57810944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:43.892553+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 57810944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:44.892813+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 57810944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:45.893010+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 57810944 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:46.893199+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:47.893426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:48.893615+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:49.893761+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:50.893995+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379666432 unmapped: 57802752 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:51.894122+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:52.894426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:53.894602+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:54.894763+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:55.895068+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:56.895219+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:57.895405+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:58.895548+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379682816 unmapped: 57786368 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 10:22:37 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:18:59.895672+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379699200 unmapped: 57769984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:00.895803+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379699200 unmapped: 57769984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:01.895987+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379699200 unmapped: 57769984 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:02.896134+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:03.896347+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:04.896536+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:05.896724+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:06.896903+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 57761792 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:07.897028+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:08.897218+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:09.897421+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:10.897658+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:11.897816+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:12.897984+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379731968 unmapped: 57737216 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:13.898146+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:14.898383+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:15.898519+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:16.898709+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379740160 unmapped: 57729024 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:17.900403+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:18.901516+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:19.902248+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:20.902749+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:21.903140+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:22.903459+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379748352 unmapped: 57720832 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:23.903719+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:24.903940+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:25.904079+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:26.905241+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:27.905537+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:28.906267+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:29.906693+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:30.907687+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 57712640 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:31.907920+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379764736 unmapped: 57704448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:32.908696+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379764736 unmapped: 57704448 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:33.909303+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379781120 unmapped: 57688064 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:34.909838+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:35.910407+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56208045b800 session 0x5620819485a0
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56207f8ba800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:36.910949+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:37.911504+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:38.911690+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:39.911871+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:40.912076+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:41.912308+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:42.912631+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:43.912789+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:44.913069+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:45.913301+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:46.913485+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 57671680 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:47.913621+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 57663488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:48.913814+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 57663488 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 ms_handle_reset con 0x56207fc1b400 session 0x562082832960
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: handle_auth_request added challenge on 0x56208045b800
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:49.913968+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379813888 unmapped: 57655296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:50.914128+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379813888 unmapped: 57655296 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:51.914399+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:52.914592+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379822080 unmapped: 57647104 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:53.914891+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379830272 unmapped: 57638912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:54.915153+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379830272 unmapped: 57638912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:55.915376+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379830272 unmapped: 57638912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:56.915517+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379830272 unmapped: 57638912 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:57.915642+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:58.915720+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:19:59.915872+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:00.916041+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:01.916213+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:02.916402+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:03.916580+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:04.916781+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379838464 unmapped: 57630720 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:05.916928+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:06.917061+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:07.917204+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:08.917340+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:09.917609+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:10.917858+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 57622528 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:11.918045+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379854848 unmapped: 57614336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:12.918192+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379854848 unmapped: 57614336 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:13.918356+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:14.918529+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379863040 unmapped: 57606144 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:15.918691+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379871232 unmapped: 57597952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:16.918836+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379871232 unmapped: 57597952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:17.919011+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379871232 unmapped: 57597952 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:18.919171+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:19.919333+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:20.919527+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379879424 unmapped: 57589760 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:21.919697+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379887616 unmapped: 57581568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:22.919862+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379887616 unmapped: 57581568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:23.920049+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379887616 unmapped: 57581568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:24.920230+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379887616 unmapped: 57581568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:25.920476+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379887616 unmapped: 57581568 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:26.920687+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379895808 unmapped: 57573376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:27.920885+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379895808 unmapped: 57573376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:28.921091+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379895808 unmapped: 57573376 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:29.921240+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:30.921482+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:31.921612+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:32.921758+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:33.921901+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379912192 unmapped: 57556992 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:34.922068+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379920384 unmapped: 57548800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:35.922247+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379920384 unmapped: 57548800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:36.922502+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379920384 unmapped: 57548800 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:37.922657+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:38.922872+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:39.923007+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:40.923238+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:41.923434+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:42.923607+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379928576 unmapped: 57540608 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:43.923786+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379936768 unmapped: 57532416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:44.923973+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379936768 unmapped: 57532416 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:45.924122+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379944960 unmapped: 57524224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:46.924275+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379944960 unmapped: 57524224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:47.924423+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379944960 unmapped: 57524224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:48.924591+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379944960 unmapped: 57524224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:49.924737+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379944960 unmapped: 57524224 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:50.924884+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:51.925090+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:52.925286+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:53.925476+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:54.925688+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:55.925837+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:56.926069+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:57.926250+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:58.926515+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379953152 unmapped: 57516032 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:20:59.926718+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379969536 unmapped: 57499648 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:00.927183+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:01.927389+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:02.927811+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:03.928044+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:04.928277+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:05.928458+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:06.928695+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:07.928925+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:08.929465+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 57483264 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:09.929623+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:10.929786+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:11.929923+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:12.930160+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:13.930351+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:14.930551+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:15.930701+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:16.930871+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:17.931012+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 57466880 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:18.931297+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 57450496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:19.931521+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 57450496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:20.931824+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 57450496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:21.932011+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 57450496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:22.932255+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 57450496 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:23.932411+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380026880 unmapped: 57442304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:24.932612+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380026880 unmapped: 57442304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:25.932760+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380026880 unmapped: 57442304 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:26.932982+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:27.933221+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:28.934343+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:29.934592+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:30.935889+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:31.936157+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:32.937157+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:33.937426+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 57434112 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:34.937863+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380051456 unmapped: 57417728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:35.938187+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380051456 unmapped: 57417728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:36.939338+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380051456 unmapped: 57417728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:37.939606+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380051456 unmapped: 57417728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:38.940710+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380051456 unmapped: 57417728 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:39.940993+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 57409536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:40.941493+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 57409536 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:41.941652+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 57401344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:42.942260+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 57401344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:43.942479+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 57401344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:44.942760+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 57401344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:45.942921+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 57401344 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:46.943448+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 57393152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:47.943597+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 57393152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:48.943795+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 57393152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:49.943942+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 57393152 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:50.944084+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380092416 unmapped: 57376768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:51.944419+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380092416 unmapped: 57376768 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:52.944689+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 57368576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:53.944939+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 57368576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:54.945296+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 57368576 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:55.945504+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:56.945695+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:57.945922+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:58.946129+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:21:59.946354+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:00.946546+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:01.946733+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 10:22:37 compute-0 ceph-osd[88656]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 10:22:37 compute-0 ceph-osd[88656]: bluestore.MempoolThread(0x56207e403b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3880881 data_alloc: 218103808 data_used: 26697728
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:02.946934+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e5bd2000/0x0/0x4ffc00000, data 0x2ad520e/0x2acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:03.947155+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'config diff' '{prefix=config diff}'
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 57360384 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'config show' '{prefix=config show}'
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:04.947423+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379445248 unmapped: 58023936 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:05.947633+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: prioritycache tune_memory target: 4294967296 mapped: 379789312 unmapped: 57679872 heap: 437469184 old mem: 2845415832 new mem: 2845415832
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: tick
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_tickets
Nov 22 10:22:37 compute-0 ceph-osd[88656]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T10:22:06.947854+0000)
Nov 22 10:22:37 compute-0 ceph-osd[88656]: do_command 'log dump' '{prefix=log dump}'
Nov 22 10:22:37 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 22 10:22:37 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050826405' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 10:22:37 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 10:22:37 compute-0 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 10:22:37 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/3050826405' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 10:22:37 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:37 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:37 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:37.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 22 10:22:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276381410' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 10:22:38 compute-0 nova_compute[253661]: 2025-11-22 10:22:38.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 10:22:38 compute-0 podman[446245]: 2025-11-22 10:22:38.451967503 +0000 UTC m=+0.123372563 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 10:22:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 22 10:22:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482523695' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 10:22:38 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:38 compute-0 ceph-mon[75021]: from='client.23157 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:38 compute-0 ceph-mon[75021]: pgmap v3628: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1276381410' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 10:22:38 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1482523695' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 10:22:38 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 22 10:22:38 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503395046' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 10:22:38 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:38 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:38 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:38.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:39 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:39 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23167 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:39 compute-0 nova_compute[253661]: 2025-11-22 10:22:39.561 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:39 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:39 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:39 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2503395046' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 10:22:39 compute-0 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:39 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 22 10:22:39 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362500136' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 10:22:39 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:39 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:39 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:39.954+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:40 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 10:22:40 compute-0 systemd[1]: Started Hostname Service.
Nov 22 10:22:40 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 22 10:22:40 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1350784466' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 10:22:40 compute-0 nova_compute[253661]: 2025-11-22 10:22:40.717 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:40 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:40 compute-0 ceph-mon[75021]: pgmap v3629: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:40 compute-0 ceph-mon[75021]: from='client.23167 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2362500136' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 10:22:40 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1350784466' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 10:22:40 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23173 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:40 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:40 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:40 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:40.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:41 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 22 10:22:41 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291752843' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 10:22:41 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:41 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23177 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:41 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:41 compute-0 ceph-mon[75021]: from='client.23173 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:41 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1291752843' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 10:22:41 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:41 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:41 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:41.994+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:42 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23179 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 22 10:22:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664675511' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 10:22:42 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:42 compute-0 ceph-mon[75021]: pgmap v3630: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:42 compute-0 ceph-mon[75021]: from='client.23177 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:42 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:42 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/664675511' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 10:22:42 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 22 10:22:42 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331853414' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 10:22:42 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:42 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:42 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:42.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23185 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23187 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 10:22:43 compute-0 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 10:22:43 compute-0 ceph-mon[75021]: from='client.23179 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:43 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/331853414' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 10:22:43 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:43 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:43 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:43 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:43.926+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 22 10:22:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842740318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 10:22:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 22 10:22:44 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1671970443' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 10:22:44 compute-0 nova_compute[253661]: 2025-11-22 10:22:44.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:44 compute-0 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 16 slow ops, oldest one blocked for 1939 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:44 compute-0 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 10:22:44 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23193 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:44 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:44 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:44 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:44.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:45 compute-0 ceph-mon[75021]: from='client.23185 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:45 compute-0 ceph-mon[75021]: pgmap v3631: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:45 compute-0 ceph-mon[75021]: from='client.23187 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:45 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1842740318' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 10:22:45 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1671970443' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 10:22:45 compute-0 ceph-mon[75021]: Health check update: 16 slow ops, oldest one blocked for 1939 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 10:22:45 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23195 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:45 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:45 compute-0 nova_compute[253661]: 2025-11-22 10:22:45.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 10:22:45 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 10:22:45 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772372423' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 10:22:45 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:45 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:45 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:45.966+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:46 compute-0 ceph-mon[75021]: from='client.23193 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:46 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:46 compute-0 ceph-mon[75021]: from='client.23195 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 10:22:46 compute-0 ceph-mon[75021]: pgmap v3632: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:46 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/1772372423' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 10:22:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 22 10:22:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2931103863' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 10:22:46 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 22 10:22:46 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433574605' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 22 10:22:46 compute-0 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:46 compute-0 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:46.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 10:22:46 compute-0 ceph-osd[89679]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:47 compute-0 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23203 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 10:22:47 compute-0 ceph-mon[75021]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'images' : 16 ])
Nov 22 10:22:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/2931103863' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 10:22:47 compute-0 ceph-mon[75021]: from='client.? 192.168.122.100:0/433574605' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 22 10:22:47 compute-0 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 10:22:47 compute-0 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 10:22:47 compute-0 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538583151' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
